[Openstack] Openstack and xen issues.
Alvin Starr
alvin at netvel.net
Sat Nov 23 22:25:29 UTC 2013
I am still getting the missing volume_id error.
I could see the OpenStack-created images using libvirt and rbd, but I did
not see them from xe using vdi-list.
So, just to try another method, I mounted the images in
/usr/share/xapi/images/ and I still get the same error.
Pulling out my handy-dandy Wireshark, I could see an sr.get_record request
and a response with the same number of images as in the RBD pool.
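
As a cross-check, the same SR can be inspected directly over the XenAPI
Python bindings instead of through xe. This is only a minimal sketch -- the
host URL and credentials are placeholders, and it assumes the XenAPI module
is installed -- but it should list the same VDIs that xe vdi-list reports
for the Ceph SR (name-label "Ceph Storage", as in the sr-param-list quoted
below):

    import XenAPI

    # Placeholders: point this at the XenServer host carrying the Ceph SR.
    session = XenAPI.Session("http://xenserver.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        # Look the SR up by its name-label and list the VDIs xapi knows about.
        for sr in session.xenapi.SR.get_by_name_label("Ceph Storage"):
            for vdi in session.xenapi.SR.get_VDIs(sr):
                rec = session.xenapi.VDI.get_record(vdi)
                print("%s %s %s" % (rec["uuid"], rec["name_label"],
                                    rec["virtual_size"]))
    finally:
        session.xenapi.session.logout()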
On 11/22/2013 11:29 AM, Bob Ball wrote:
>
> The volume_id missing from the connection_details is highly
> suspicious, yes. I've not seen that before, and don't know what could
> cause it.
>
> Hopefully someone else on the list will be able to assist. If not, I
> may be able to have another look on Monday.
>
> I'm not sure why we didn't use rbd-fuse -- that's a question best
> asked on the xs-devel mailing list.
>
> Bob
>
> *From:*Alvin Starr [mailto:alvin at netvel.net]
> *Sent:* 22 November 2013 15:40
> *To:* Bob Ball; openstack at lists.openstack.org
> *Subject:* Re: [Openstack] Openstack and xen issues.
>
> Doh.
> Now I feel stupid.
>
> It is getting much farther
>
> Now I am seeing the following but I expect it may be because of all
> the other stuff I broke trying to figure out my previous problem.
>
>
> Error: 'volume_id'
> Traceback (most recent call last):
>   File "/opt/stack/nova/nova/compute/manager.py", line 1030, in _build_instance
>     set_access_ip=set_access_ip)
>   File "/opt/stack/nova/nova/compute/manager.py", line 1439, in _spawn
>     LOG.exception(_('Instance failed to spawn'), instance=instance)
>   File "/opt/stack/nova/nova/compute/manager.py", line 1436, in _spawn
>     block_device_info)
>   File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 219, in spawn
>     admin_password, network_info, block_device_info)
>   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 351, in spawn
>     network_info, block_device_info, name_label, rescue)
>   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 499, in _spawn
>     undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
>   File "/opt/stack/nova/nova/utils.py", line 823, in rollback_and_reraise
>     self._rollback()
>   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 477, in _spawn
>     name_label)
>   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner
>     rv = f(*args, **kwargs)
>   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 339, in create_disks_step
>     disk_image_type, block_device_info=block_device_info)
>   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 535, in get_vdis_for_instance
>     vdi_uuid = get_vdi_uuid_for_volume(session, connection_data)
>   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 488, in get_vdi_uuid_for_volume
>     sr_uuid, label, sr_params = volume_utils.parse_sr_info(connection_data)
>   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 213, in parse_sr_info
>     params = parse_volume_info(connection_data)
>   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 232, in parse_volume_info
>     volume_id = connection_data['volume_id']
> KeyError: 'volume_id'
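>
> (As an aside, the failure itself is easy to reproduce in isolation. This
> is an illustrative sketch only, not the actual Nova source: as the
> traceback shows, parse_volume_info() indexes connection_data directly, so
> any connection details that arrive without a 'volume_id' key fail exactly
> this way. The keys and values below are placeholders.)
>
>     # Illustrative only; placeholder connection details.
>     connection_data = {
>         "target_portal": "192.0.2.10:3260",
>         "target_iqn": "iqn.2010-10.org.openstack:volume-test",
>         # note: no "volume_id" key
>     }
>     volume_id = connection_data["volume_id"]   # KeyError: 'volume_id'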
>
>
> A partly off-topic question:
> why not use rbd-fuse and mount the Ceph blobs as files instead of
> going through libvirt?
>
> On 11/22/2013 09:41 AM, Bob Ball wrote:
>
> Could you provide the full error log with nova crashing?
>
> Thanks,
>
> Bob
>
> ------------------------------------------------------------------------
>
> *From:*Alvin Starr [alvin at netvel.net <mailto:alvin at netvel.net>]
> *Sent:* 22 November 2013 14:31
> *To:* Bob Ball; openstack at lists.openstack.org
> <mailto:openstack at lists.openstack.org>
> *Subject:* Re: [Openstack] Openstack and xen issues.
>
> I have put OpenStack on a separate machine to try to separate and
> isolate the various components I need to work with, in the
> interest of making my debugging easier.
> In retrospect, this may not have been the best idea.
>
> I have a very long history with Xen, and that may be more of an
> impediment because I think I know things about it that are no
> longer true.
>
> I am using the default devstack scripts as of a few weeks ago, so
> it should be grabbing the latest version of OpenStack, or at least
> that is my belief.
>
> Here is my sr-param-list.
>
> uuid ( RO) : 7d56f548-174b-d42b-12f2-e0849588e503
> name-label ( RW): Ceph Storage
> name-description ( RW):
> host ( RO): localhost
> allowed-operations (SRO): unplug; plug; PBD.create;
> PBD.destroy; VDI.clone; scan; VDI.create; VDI.destroy
> current-operations (SRO):
> VDIs (SRO):
> PBDs (SRO): 40dd29a3-154a-e841-ce52-4547c817d856
> virtual-allocation ( RO): 348064577384
> physical-utilisation ( RO): 342363992064
> physical-size ( RO): 18986006446080
> type ( RO): libvirt
> content-type ( RO):
> shared ( RW): true
> introduced-by ( RO): <not in database>
> other-config (MRW): ceph_sr: true
> sm-config (MRO):
> blobs ( RO):
> local-cache-enabled ( RO): false
> tags (SRW):
>
>
> I started tracing the xenapi transactions over the network and
> could see the pool.get_all and pool.get_default requests when the
> sr_filter was not set, but once I set it, nova would crash
> complaining about no repository.
> I checked the TCP transactions and did not see any SR.get_all,
> while some debugging prints assured me that the code was being
> exercised.
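>
> As a sanity check, the SR.get_all call can also be issued by hand over
> the XenAPI bindings and watched for on the wire (a minimal sketch; host
> and credentials are placeholders):
>
>     import XenAPI
>
>     session = XenAPI.Session("http://xenserver.example.com")  # placeholder
>     session.xenapi.login_with_password("root", "password")
>     try:
>         # Dump every SR with its other-config map; this is the request I
>         # would expect to see on the wire when sr_matching_filter is set.
>         for sr in session.xenapi.SR.get_all():
>             print("%s %s" % (session.xenapi.SR.get_name_label(sr),
>                              session.xenapi.SR.get_other_config(sr)))
>     finally:
>         session.xenapi.session.logout()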
>
>
>
> On 11/22/2013 04:40 AM, Bob Ball wrote:
>
> Hi Alvin,
>
> Yes, we typically do expect Nova to be running in a DomU.
> It's worth checking out
> http://docs.openstack.org/trunk/openstack-compute/install/yum/content/introduction-to-xen.html
> just to make sure you've got everything covered there.
>
> I say typically because in some configurations (notably using
> xenserver-core) it may be possible to run Nova in dom0 by
> setting the connection URL to "unix://local". This is an
> experimental configuration and was added near the end of
> Havana -- see
> https://blueprints.launchpad.net/nova/+spec/xenserver-core.
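>
> (A minimal sketch of what the dom0-local case looks like at the XenAPI
> level, assuming the XenAPI Python bindings are available in dom0:
> XenAPI.xapi_local() talks to xapi over its local unix socket instead of
> over HTTP, which is roughly what the unix://local connection URL selects.
> The credentials below are placeholders.)
>
>     import XenAPI
>
>     # Local xapi session from inside dom0; no network round-trip involved.
>     session = XenAPI.xapi_local()
>     session.xenapi.login_with_password("root", "")
>     try:
>         print(session.xenapi.pool.get_all())
>     finally:
>         session.xenapi.session.logout()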
>
> In terms of sr_matching_filter, check that you're setting it
> in the right group. If you're using the latest builds of
> Icehouse then it should be in the xenserver group. I'm also
> assuming that the other-config for the SR does indeed contain
> ceph-sr=true?
>
> Is the SR that is used for VMs still the default-SR?
>
> Thanks,
>
> Bob
>
> *From:*Alvin Starr [mailto:alvin at netvel.net]
> *Sent:* 22 November 2013 01:32
> *To:* openstack at lists.openstack.org
> <mailto:openstack at lists.openstack.org>
> *Subject:* [Openstack] Openstack and xen issues.
>
>
> I am trying to use Xen with Ceph and OpenStack using the
> devstack package.
> I am slowly whacking my way through things and have noticed a
> few issues.
>
> 1. OpenStack expects to be running in a domU and generates
> error messages even if xenapi_check_host is false. I am
> not sure if this causes other side effects. The tests for
> the local dom0 should be completely bypassed if the check
> is disabled.
> 2. OpenStack tries to read the Xen SRs and checks the
> default one, which ends up being the Xen local storage and
> not any other SR. If I set sr_matching_filter =
> other-config:ceph-sr=true, there should be a xapi
> SR.get_all request generated, but it looks like it is not
> generated at all. I have tracked the HTTP traffic and no
> output is generated even though the appropriate code is
> being called (see the sketch below this list).
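>
> A rough sketch of what I understand the filter syntax to mean
> (illustrative only, not the actual Nova code): the part after
> "other-config:" names a key=value pair that has to match the SR's
> other-config map exactly.
>
>     # Illustrative only: matching an "other-config:key=value" filter
>     # against an SR's other-config map.
>     def sr_matches(filter_value, sr_other_config):
>         key, _, wanted = filter_value.partition("=")
>         return sr_other_config.get(key) == wanted
>
>     # e.g. sr_matching_filter = other-config:ceph-sr=true
>     print(sr_matches("ceph-sr=true", {"ceph-sr": "true"}))   # True
>     print(sr_matches("ceph-sr=true", {}))                    # False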
>
> --
>
> Alvin Starr || voice: (905)513-7688
>
> Netvel Inc. || Cell: (416)806-0133
>
> alvin at netvel.net <mailto:alvin at netvel.net> ||
>
>
>
>
> --
>
> Alvin Starr || voice: (905)513-7688
>
> Netvel Inc. || Cell: (416)806-0133
>
> alvin at netvel.net <mailto:alvin at netvel.net> ||
>
>
>
>
> --
> Alvin Starr || voice: (905)513-7688
> Netvel Inc. || Cell: (416)806-0133
> alvin at netvel.net <mailto:alvin at netvel.net> ||
--
Alvin Starr.