[Openstack] [Cinder] Multi backend config issue

Jérôme Gallard jeronimo974 at gmail.com
Wed Apr 17 13:50:10 UTC 2013


Hi,

Yes, it's very surprising. I managed to reproduce your error by doing the
operations manually (compute and guest are Ubuntu 12.04, DevStack
deployment).

Another interesting thing is that, in my case, with multi-backend enabled,
Tempest tells me everything is fine:

/opt/stack/tempest# nosetests -sv tempest.tests.volume.test_volumes_actions.py
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[smoke]
... ok
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_get_volume_attachment
... ok

----------------------------------------------------------------------
Ran 2 tests in 122.465s

OK


I don't think that error is linked to the distribution. With my
configuration, if I remove the multi-backend option, attaching is possible.
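For what it's worth, the traceback quoted below bottoms out in oslo.config's
_get_opt_info(). Here is a toy model of that lookup (a simplified sketch, NOT
the real oslo.config code; the class and method names are stand-ins shaped
after the traceback) which reproduces the error: a value that merely appears
under [storage1] in the file is not enough if the option was never registered
for that group:

```python
# Toy model of the lookup that fails in the traceback quoted below.
# NOTE: a simplified sketch, NOT the real oslo.config code; names are
# stand-ins shaped after the traceback.

class NoSuchOptError(Exception):
    def __init__(self, opt_name, group):
        super(NoSuchOptError, self).__init__(
            "no such option in group %s: %s" % (group, opt_name))

class GroupConf:
    """Per-group view: registered options define the schema; the file
    only supplies values for options that were registered."""

    def __init__(self, name):
        self.name = name
        self._opt_info = {}      # registered options: name -> default
        self._file_values = {}   # raw values parsed from cinder.conf

    def register_opt(self, opt_name, default=None):
        self._opt_info[opt_name] = default

    def _get_opt_info(self, opt_name):
        # mirrors the traceback: unknown-in-this-group -> NoSuchOptError
        if opt_name not in self._opt_info:
            raise NoSuchOptError(opt_name, self.name)

    def get(self, opt_name):
        self._get_opt_info(opt_name)
        return self._file_values.get(opt_name, self._opt_info[opt_name])

storage1 = GroupConf("storage1")
storage1._file_values["iscsi_helper"] = "tgtadm"       # present in the file...
storage1.register_opt("volume_group", "nova-volumes")  # ...but only this opt
                                                       # got registered

try:
    storage1.get("iscsi_helper")
except NoSuchOptError as exc:
    print(exc)  # no such option in group storage1: iscsi_helper
```

If that is what's happening here, the driver's iscsi_helper option simply
isn't being registered on the per-backend config group when multi-backend is
enabled, even though the value is set in the file.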

Regards,
Jérôme


On Wed, Apr 17, 2013 at 3:22 PM, Steve Heistand <steve.heistand at nasa.gov> wrote:

> In my case (as near as I can tell) it's something to do with the inability
> of Ubuntu 12.04 (as a VM) to do hot-plug PCI. The node itself is 12.04;
> it's just the VM part that doesn't work as Ubuntu. I haven't tried 12.10
> or Raring as a VM.
>
> steve
>
> On 04/17/2013 05:42 AM, Heiko Krämer wrote:
> > Hi Steve,
> >
> > Yeah, it's running Ubuntu 12.04 on the nodes and in the VM.
> >
> > But a configuration parsing error should normally have nothing to do
> > with the distribution?! Maybe it's the oslo version or something like
> > that.
> >
> > But thanks for your hint.
> >
> > Greetings Heiko
> >
> > On 17.04.2013 14:36, Steve Heistand wrote:
> >> What OS are you running in the VM? I had similar issues with Ubuntu
> >> 12.04, but things worked great with CentOS 6.4.
> >>
> >>
> >> On 04/17/2013 01:15 AM, Heiko Krämer wrote:
> >>> Hi Guys,
> >>>
> >>> I'm running into a strange config issue with the cinder-volume
> >>> service. I'm trying to use the multi-backend feature in Grizzly; the
> >>> scheduler works fine, but the volume service is not running correctly.
> >>> I can create/delete volumes but not attach them.
> >>>
> >>> My cinder.conf (abstract):
> >>>
> >>>     #### Backend Configuration
> >>>     scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
> >>>     scheduler_host_manager=cinder.scheduler.host_manager.HostManager
> >>>     enabled_backends=storage1,storage2
> >>>
> >>>     [storage1]
> >>>     volume_group=nova-volumes
> >>>     volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
> >>>     volume_backend_name=LVM_ISCSI
> >>>     iscsi_helper=tgtadm
> >>>
> >>>     [storage2]
> >>>     volume_group=nova-volumes
> >>>     volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
> >>>     volume_backend_name=LVM_ISCSI
> >>>     iscsi_helper=tgtadm
> >>>
> >>> This section is the same on each host. If I try to attach an existing
> >>> volume to an instance, I get the following error on cinder-volume:
> >>> 2013-04-16 17:18:13    AUDIT [cinder.service] Starting cinder-volume node (version 2013.1)
> >>> 2013-04-16 17:18:13     INFO [cinder.volume.manager] Updating volume status
> >>> 2013-04-16 17:18:13     INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d
> >>> 2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:14     INFO [cinder.volume.manager] Updating volume status
> >>> 2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:26    ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
> >>> Traceback (most recent call last):
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
> >>>     rval = self.proxy.dispatch(ctxt, version, method, **args)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
> >>>     return getattr(proxyobj, method)(ctxt, **kwargs)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 665, in initialize_connection
> >>>     return self.driver.initialize_connection(volume_ref, connector)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 336, in initialize_connection
> >>>     if self.configuration.iscsi_helper == 'lioadm':
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/configuration.py", line 83, in __getattr__
> >>>     return getattr(self.local_conf, value)
> >>>   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1708, in __getattr__
> >>>     return self._conf._get(name, self._group)
> >>>   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1513, in _get
> >>>     value = self._substitute(self._do_get(name, group))
> >>>   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1529, in _do_get
> >>>     info = self._get_opt_info(name, group)
> >>>   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1629, in _get_opt_info
> >>>     raise NoSuchOptError(opt_name, group)
> >>> NoSuchOptError: no such option in group storage1: iscsi_helper
> >>>
> >>>
> >>> It's very strange; the
> >>> 'volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver' option should
> >>> set iscsi_helper=tgtadm by default.
> >>>
> >>>
> >>> Does anyone have an idea or see the same issue? Otherwise I'll create
> >>> a bug report.
> >>>
> >>> Greetings from Berlin, Heiko
> >>>
> >
>
> --
> ************************************************************************
>  Steve Heistand                          NASA Ames Research Center
>  email: steve.heistand at nasa.gov          Steve Heistand/Mail Stop 258-6
>  ph: (650) 604-4369                      Bldg. 258, Rm. 232-5
>  Scientific & HPC Application            P.O. Box 1
>  Development/Optimization                Moffett Field, CA 94035-0001
> ************************************************************************
>  "Any opinions expressed are those of our alien overlords, not my own."
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

