[Openstack-operators] Openstack failing to find repository over XCP

Leandro Reox leandro.reox at gmail.com
Thu May 5 13:25:23 UTC 2011


Thanks for the quick reply Todd but not, that its not the problem. Anyonelse
to help me out with the trace i attached ?

Best regards
Lele

On Wed, May 4, 2011 at 9:52 PM, Todd Deshane <todd.deshane at xen.org> wrote:

> On Wed, May 4, 2011 at 7:34 PM, Leandro Reox <leandro.reox at gmail.com>
> wrote:
> > Here is the SMlog content, and after im attaching the complete Python
> stack
> > trace of the failing operation :
> >
> > [21522] 2011-05-02 11:05:23.165168 ['uuidgen', '-r']
> > [21522] 2011-05-02 11:05:23.173426 SUCCESS
> > [21522] 2011-05-02 11:05:23.181877 lock: acquired
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> > [21522] 2011-05-02 11:05:23.191170 vdi_create {'sr_uuid':
> > 'f6c3ee92-1ee8-9250-6288-bfd82b18eaa2', 'subtask_of':
> > 'OpaqueRef:e0c12109-a670-7467-7b30-d7fa98c266e4', 'args': ['5368709120',
> > ''], 'host_ref': 'OpaqueRef:96666e1c-b5c2-456c-ce42-d5de48c2c72f',
> > 'session_ref': 'OpaqueRef:4684ee37-2dbf-2918-0e8e-604eb2d57756',
> > 'device_config': {'SRmaster': 'true', 'serverpath': '/vol/xcp', 'server':
> > '172.16.129.11'}, 'command': 'vdi_create', 'sr_ref':
> > 'OpaqueRef:625b2d6a-de22-047d-8d1d-4217d301c7fd', 'vdi_sm_config':
> > {'vmhint': '8141c6e2-ba99-4378-4648-12a8c880a74c'}}
> > [21522] 2011-05-02 11:05:23.191715 ['/usr/sbin/td-util', 'create', 'vhd',
> > '5120',
> >
> '/var/run/sr-mount/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/bfc16f29-fcae-4526-a33c-01a4b08e9e12.vhd']
> > [21522] 2011-05-02 11:05:23.223136 SUCCESS
> > [21522] 2011-05-02 11:05:23.223258 ['/usr/sbin/td-util', 'query', 'vhd',
> > '-v',
> >
> '/var/run/sr-mount/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/bfc16f29-fcae-4526-a33c-01a4b08e9e12.vhd']
> > [21522] 2011-05-02 11:05:23.233492 SUCCESS
> > [21522] 2011-05-02 11:05:23.288800 lock: released
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> > [21522] 2011-05-02 11:05:23.292891 lock: closed
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> > [21897] 2011-05-02 11:11:31.255700 ['uuidgen', '-r']
> > [21897] 2011-05-02 11:11:31.263973 SUCCESS
> > [21897] 2011-05-02 11:11:31.272166 lock: acquired
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> > [21897] 2011-05-02 11:11:31.274116 vdi_create {'sr_uuid':
> > 'f6c3ee92-1ee8-9250-6288-bfd82b18eaa2', 'subtask_of':
> > 'OpaqueRef:2aa5b7f3-2422-7a6a-d60c-f733b5b4e536', 'args': ['8589934592',
> > ''], 'host_ref': 'OpaqueRef:96666e1c-b5c2-456c-ce42-d5de48c2c72f',
> > 'session_ref': 'OpaqueRef:c5f26499-7a8e-084c-ad7f-6f0a182f5dae',
> > 'device_config': {'SRmaster': 'true', 'serverpath': '/vol/xcp', 'server':
> > '172.16.129.11'}, 'command': 'vdi_create', 'sr_ref':
> > 'OpaqueRef:625b2d6a-de22-047d-8d1d-4217d301c7fd', 'vdi_sm_config':
> > {'vmhint': '2a743cb5-9896-f76d-25ad-b779a0f7cee6'}}
> > [21897] 2011-05-02 11:11:31.275393 ['/usr/sbin/td-util', 'create', 'vhd',
> > '8192',
> >
> '/var/run/sr-mount/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/cc22b2ac-519e-4548-bbaf-69d6a1e778ac.vhd']
> > [21897] 2011-05-02 11:11:31.290521 SUCCESS
> > [21897] 2011-05-02 11:11:31.290643 ['/usr/sbin/td-util', 'query', 'vhd',
> > '-v',
> >
> '/var/run/sr-mount/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/cc22b2ac-519e-4548-bbaf-69d6a1e778ac.vhd']
> > [21897] 2011-05-02 11:11:31.302298 SUCCESS
> > [21897] 2011-05-02 11:11:31.358807 lock: released
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> > [21897] 2011-05-02 11:11:31.362893 lock: closed
> > /var/lock/sm/f6c3ee92-1ee8-9250-6288-bfd82b18eaa2/sr
> >
> > Python stack trace from nova-compute :
> >
> >
> > 2011-05-04 16:34:41,333 DEBUG nova.rpc [-] received
> {u'_context_request_id':
> > u'C-FY1X8L8OPHHY4TPZE6', u'_context_read_deleted': False, u'args':
> > {u'instance_id': 8, u'injected_files': None, u'availability_zone': None},
> > u'_context_is_admin': True, u'_context_timestamp':
> u'2011-05-04T20:34:36Z',
> > u'_context_user': u'admin', u'method': u'run_instance',
> u'_context_project':
> > u'melicloud', u'_context_remote_address': u'172.16.133.241'} from
> (pid=3796)
> > _receive /usr/lib/pymodules/python2.6/nova/rpc.py:177
> > 2011-05-04 16:34:41,334 DEBUG nova.rpc [-] unpacked context:
> {'timestamp':
> > u'2011-05-04T20:34:36Z', 'remote_address': u'172.16.133.241', 'project':
> > u'melicloud', 'is_admin': True, 'user': u'admin', 'request_id':
> > u'C-FY1X8L8OPHHY4TPZE6', 'read_deleted': False} from (pid=3796)
> > _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:350
> > 2011-05-04 16:34:44,182 AUDIT nova.compute.manager [C-FY1X8L8OPHHY4TPZE6
> > admin melicloud] instance 8: starting...
> > 2011-05-04 16:34:44,411 DEBUG nova.rpc [-] Making asynchronous call on
> > network.novacontroller ... from (pid=3796) call
> > /usr/lib/pymodules/python2.6/nova/rpc.py:370
> > 2011-05-04 16:34:44,411 DEBUG nova.rpc [-] MSG_ID is
> > f11a7286824542449e0f9c9a790c418d from (pid=3796) call
> > /usr/lib/pymodules/python2.6/nova/rpc.py:373
> > 2011-05-04 16:34:44,973 DEBUG nova.virt.xenapi.vm_utils [-] Detected
> > DISK_RAW format for image 4, instance 8 from (pid=3796) log_disk_format
> > /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:494
>
>
> This is the first line there is an error:
> > 2011-05-04 16:34:45,566 ERROR nova.compute.manager [C-FY1X8L8OPHHY4TPZE6
> > admin melicloud] Instance '8' failed to spawn. Is virtualization enabled
> in
> > the BIOS?
>
> I wonder if that ^ is the problem?
>
> If not, maybe the stack trace below will be better read by the openstack
> devs.
>
> > (nova.compute.manager): TRACE: Traceback (most recent call last):
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 240, in
> > run_instance
> > (nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi_conn.py", line 188, in
> spawn
> > (nova.compute.manager): TRACE:     self._vmops.spawn(instance)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 117, in
> spawn
> > (nova.compute.manager): TRACE:     vdi_uuid = self._create_disk(instance)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 113, in
> > _create_disk
> > (nova.compute.manager): TRACE:     instance.image_id, user, project,
> > disk_image_type)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 382, in
> > fetch_image
> > (nova.compute.manager): TRACE:     access, image_type)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 535, in
> > _fetch_image_glance
> > (nova.compute.manager): TRACE:     session, instance_id, image, access,
> > image_type)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 436, in
> > _fetch_image_glance_disk
> > (nova.compute.manager): TRACE:     sr_ref = safe_find_sr(session)
> > (nova.compute.manager): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 853, in
> > safe_find_sr
> > (nova.compute.manager): TRACE:     raise
> > exception.StorageRepositoryNotFound()
> > (nova.compute.manager): TRACE: StorageRepositoryNotFound: Cannot find SR
> to
> > read/write VDI.
> > (nova.compute.manager): TRACE:
> > 2011-05-04 16:34:45,851 ERROR nova.exception [-] Uncaught exception
> > (nova.exception): TRACE: Traceback (most recent call last):
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/exception.py", line 79, in _wrap
> > (nova.exception): TRACE:     return f(*args, **kw)
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 265, in
> > run_instance
> > (nova.exception): TRACE:     self._update_state(context, instance_id)
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 145, in
> > _update_state
> > (nova.exception): TRACE:     info =
> > self.driver.get_info(instance_ref['name'])
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi_conn.py", line 263, in
> > get_info
> > (nova.exception): TRACE:     return self._vmops.get_info(instance_id)
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 771, in
> > get_info
> > (nova.exception): TRACE:     vm_ref = self._get_vm_opaque_ref(instance)
> > (nova.exception): TRACE:   File
> > "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 262, in
> > _get_vm_opaque_ref
> > (nova.exception): TRACE:     raise
> > exception.InstanceNotFound(instance_id=instance_obj.id)
> > (nova.exception): TRACE: UnboundLocalError: local variable 'instance_obj'
> > referenced before assignment
> > (nova.exception): TRACE:
> > 2011-05-04 16:34:45,852 ERROR nova [-] Exception during message handling
> > (nova): TRACE: Traceback (most recent call last):
> > (nova): TRACE:   File "/usr/lib/pymodules/python2.6/nova/rpc.py", line
> 198,
> > in _receive
> > (nova): TRACE:     rval = node_func(context=ctxt, **node_args)
> > (nova): TRACE:   File "/usr/lib/pymodules/python2.6/nova/exception.py",
> line
> > 85, in _wrap
> > (nova): TRACE:     raise Error(str(e))
> > (nova): TRACE: Error: local variable 'instance_obj' referenced before
> > assignment
> > (nova): TRACE:
> > 2011-05-04 16:35:26,348 INFO nova.compute.manager [-] Found instance
> > 'instance-00000008' in DB but no VM. State=4, so setting state to
> shutoff.
> > 2011-05-04 16:35:26,348 INFO nova.compute.manager [-] DB/VM state
> mismatch.
> > Changing state from '4' to '5'
> >
> > Best regards !
> > On Wed, May 4, 2011 at 5:57 PM, Todd Deshane <todd.deshane at xen.org>
> wrote:
> >>
> >> On Wed, May 4, 2011 at 4:47 PM, Leandro Reox <leandro.reox at gmail.com>
> >> wrote:
> >> > List, i almost made it, but when i launch an instance, in the logs i
> see
> >> > "StorageRepositoryNotFound: Cannot find SR to read/write VDI" this
> >> > message
> >> > is from the xenapi. (I have xcp as hypervisor).
> >> > I got an SR created via NFS, thats showns in the xe sr-list
> >> > Anybody has a clue of why is failing ?
> >> >
> >> On your XCP server, what is the contents of:
> >> /var/log/SMlog
> >>
> >> Maybe we can find a problem there.
> >>
> >> Thanks,
> >> Todd
> >>
> >> > Best Regards
> >> > Lele
> >> > _______________________________________________
> >> > Openstack-operators mailing list
> >> > Openstack-operators at lists.openstack.org
> >> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Todd Deshane
> >> http://www.linkedin.com/in/deshantm
> >> http://www.xen.org/products/cloudxen.html
> >> http://runningxen.com/
> >
> >
> > _______________________________________________
> > Openstack-operators mailing list
> > Openstack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
>
>
>
> --
> Todd Deshane
> http://www.linkedin.com/in/deshantm
> http://www.xen.org/products/cloudxen.html
> http://runningxen.com/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20110505/e6b20525/attachment-0002.html>


More information about the Openstack-operators mailing list