Help needed to Support Multi-attach feature

Gorka Eguileor geguileo at redhat.com
Mon May 13 14:48:13 UTC 2019


On 13/05, RAI, SNEHA wrote:
> Thanks Gorka for your response. The main reason given is "AUTO: Connection to libvirt lost: 1".
>
> Not sure why the connection is being lost. I tried restarting all the nova services too, but no luck.
>

Hi,

I would confirm that libvirtd.service, virtlockd.socket, and
virtlogd.socket are loaded and active.
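
For example, something along these lines (standard systemd unit names assumed,
adjust if your distro packages them differently):

  systemctl is-active libvirtd.service virtlockd.socket virtlogd.socket
  systemctl status libvirtd.service

If libvirtd keeps dying, "journalctl -u libvirtd" should show why the
connection is being dropped.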

Cheers,
Gorka.


>
>
> Regards,
>
> Sneha Rai
>
>
>
> -----Original Message-----
> From: Gorka Eguileor [mailto:geguileo at redhat.com]
> Sent: Monday, May 13, 2019 2:21 PM
> To: RAI, SNEHA <sneha.rai at hpe.com>
> Cc: openstack-dev at lists.openstack.org
> Subject: Re: Help needed to Support Multi-attach feature
>
>
>
> On 10/05, RAI, SNEHA wrote:
>
> > Thanks Gorka for your response.
> >
> > I have changed the version of libvirt and qemu on my host and I am able to move past the previous error mentioned in my last email.
> >
> > Current versions of libvirt and qemu:
> > root at CSSOSBE04-B09:/etc# libvirtd --version
> > libvirtd (libvirt) 1.3.1
> > root at CSSOSBE04-B09:/etc# kvm --version
> > QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice Bellard
>
> >
> > Also, I made a change in /etc/nova/nova.conf and set virt_type=qemu; earlier it was set to kvm.
> > I restarted all nova services after the change, but I can see one nova service was disabled and its state was down.
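> >
> > For reference, the option I changed lives in the [libvirt] section of nova.conf (section shown as I understand the default layout):
> >
> > /etc/nova/nova.conf:
> > [libvirt]
> > virt_type = qemu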
>
> > root at CSSOSBE04-B09:/etc# nova service-list
> > +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
> > | Id                                   | Binary           | Host          | Zone     | Status   | State | Updated_at                 | Disabled Reason                     | Forced down |
> > +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
> > | 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:59.000000 | -                                   | False       |
> > | ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:49.000000 | -                                   | False       |
> > | bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:59.000000 | -                                   | False       |
> > | 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:54.000000 | -                                   | False       |
> > | 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute     | CSSOSBE04-B09 | nova     | disabled | down  | 2019-05-07T22:11:06.000000 | AUTO: Connection to libvirt lost: 1 | False       |
> > +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
>
> >
>
> > So, I manually enabled the service, but the state was still down.
> > root at CSSOSBE04-B09:/etc# nova service-enable 9c700ee1-1694-479b-afc0-1fd37c1a5561
> > +--------------------------------------+---------------+--------------+---------+
> > | ID                                   | Host          | Binary       | Status  |
> > +--------------------------------------+---------------+--------------+---------+
> > | 9c700ee1-1694-479b-afc0-1fd37c1a5561 | CSSOSBE04-B09 | nova-compute | enabled |
> > +--------------------------------------+---------------+--------------+---------+
>
> >
>
> > root at CSSOSBE04-B09:/etc# nova service-list
> > +--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
> > | Id                                   | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
> > +--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
> > | 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
> > | ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
> > | bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
> > | 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:14.000000 | -               | False       |
> > | 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute     | CSSOSBE04-B09 | nova     | enabled | down  | 2019-05-10T05:49:14.000000 | -               | False       |
> > +--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
>
> >
>
>
>
> Hi,
>
> If it appears as down, it's probably because there is an issue during the service's start procedure.
>
> You can look in the logs to see what messages appeared during startup, or tail the logs while restarting the service to see what error appears.
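>
> On a devstack system using systemd that would be something like (unit name assumed, adjust to however nova-compute runs on your host):
>
>   sudo journalctl -u devstack@n-cpu.service -f
>   sudo systemctl restart devstack@n-cpu.service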
>
>
>
> Cheers,
>
> Gorka.
>
>
>
>
>
> > So, now when I try to attach a volume to a nova instance, I get the below error. As one of the services is down, it fails filter validation for nova-compute and gives a "No host" error.
>
> >
>
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter RetryFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter AvailabilityZoneFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filters.compute_filter [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] (CSSOSBE04-B09, CSSOSBE04-B09) ram: 30810MB disk: 1737728MB io_ops: 0 instances: 1 is disabled, reason: AUTO: Connection to libvirt lost: 1 {{(pid=21775) host_passes /opt/stack/nova/nova/scheduler/filters/compute_filter.py:42}}
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter ComputeFilter returned 0 hosts
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: [('RetryFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('AvailabilityZoneFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('ComputeFilter', None)] {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:129}}
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtered [] {{(pid=21775) _get_sorted_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:404}}
> > May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] There are 0 hosts available but 1 instances requested to build. {{(pid=21775) _ensure_sufficient_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:279}}
>
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: ERROR nova.conductor.manager [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Failed to schedule instances: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: Traceback (most recent call last):
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 226, in inner
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     return func(*args, **kwargs)
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/manager.py", line 154, in select_destinations
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     allocation_request_version, return_alternates)
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     allocation_request_version, return_alternates)
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     claimed_instance_uuids)
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in _ensure_sufficient_hosts
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     raise exception.NoValidHost(reason=reason)
> > May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: NoValidHost: No valid host was found. There are not enough hosts available.
>
> >
>
> > I need help understanding how to fix this error. For detailed logs, please refer to the attached syslog.
>
> >
>
> >
>
> > Thanks & Regards,
>
> > Sneha Rai
>
> >
>
> >
>
> >
>
> >
>
> >
>
> > -----Original Message-----
> > From: Gorka Eguileor [mailto:geguileo at redhat.com]
> > Sent: Friday, May 10, 2019 2:56 PM
> > To: RAI, SNEHA <sneha.rai at hpe.com>
> > Cc: openstack-dev at lists.openstack.org
> > Subject: Re: Help needed to Support Multi-attach feature
> >
> > On 02/05, RAI, SNEHA wrote:
>
> >
>
> > > Hi Team,
> > >
> > > I am currently working on the multiattach feature for the HPE 3PAR cinder driver.
> > >
> > > For this, while setting up devstack (on stable/queens) I made the below change in local.conf:
> > > [[local|localrc]]
> > > ENABLE_VOLUME_MULTIATTACH=True
> > > ENABLE_UBUNTU_CLOUD_ARCHIVE=False
>
> >
>
> > >
>
> >
>
> > > /etc/cinder/cinder.conf:
> > > [3pariscsi_1]
> > > hpe3par_api_url = https://192.168.1.7:8080/api_v1
> > > hpe3par_username = user
> > > hpe3par_password = password
> > > san_ip = 192.168.1.7
> > > san_login = user
> > > san_password = password
> > > volume_backend_name = 3pariscsi_1
> > > hpe3par_cpg = my_cpg
> > > hpe3par_iscsi_ips = 192.168.11.2,192.168.11.3
> > > volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
> > > hpe3par_iscsi_chap_enabled = True
> > > hpe3par_debug = True
> > > image_volume_cache_enabled = True
> > >
> > > /etc/cinder/policy.json:
> > > 'volume:multiattach': 'rule:admin_or_owner'
>
> >
>
> > >
>
> >
>
> > > Added the https://review.opendev.org/#/c/560067/2/cinder/volume/drivers/hpe/hpe_3par_common.py change in the code.
>
> >
>
> > >
>
> >
>
> > > But I am getting the below error in the nova log:
>
> >
>
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [None req-2cda6e90-fd45-4bfe-960a-7fca9ba4abab demo admin] [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Instance failed block device setup: MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Traceback (most recent call last):
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/compute/manager.py", line 1615, in _prep_block_device
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     wait_func=self._await_block_device_map_created)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 840, in attach_block_devices
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     _log_and_attach(device)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 837, in _log_and_attach
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     bdm.attach(*attach_args, **attach_kwargs)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     ret_val = method(obj, context, *args, **kwargs)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 620, in attach
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     virt_driver, do_driver_attach)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     return f(*args, **kwargs)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 617, in _do_locked_attach
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     self._do_attach(*args, **_kwargs)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 602, in _do_attach
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     do_driver_attach)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]   File "/opt/stack/nova/nova/virt/block_device.py", line 509, in _volume_attach
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]     volume_id=volume_id)
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
> > > Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]
>
> >
>
> > >
>
> >
>
> > >
>
> >
>
> > > Apr 29 05:41:20 CSSOSBE04-B09 nova-compute[20455]: DEBUG nova.virt.libvirt.driver [-] Volume multiattach is not supported based on current versions of QEMU and libvirt. QEMU must be less than 2.10 or libvirt must be greater than or equal to 3.10. {{(pid=20455) _set_multiattach_support /opt/stack/nova/nova/virt/libvirt/driver.py:619}}
>
> >
>
> > >
>
> >
>
> > >
>
> >
>
> > > stack at CSSOSBE04-B09:/tmp$ virsh --version
> > > 3.6.0
> > > stack at CSSOSBE04-B09:/tmp$ kvm --version
> > > QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.8~cloud1) Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
>
> >
>
> > >
>
> >
>
> >
>
> >
>
> > Hi Sneha,
> >
> > I don't know much about this side of Nova, but reading the log error I would say that you either need to update your libvirt version from 3.6.0 to 3.10, or you need to downgrade your QEMU version to something prior to 2.10.
> >
> > The latter is probably easier.
>
> >
> > I don't use Ubuntu, but according to the Internet you can list the available versions with "apt-cache policy qemu" and then install or downgrade to a specific version with "sudo apt-get install qemu=2.5\*" if you wanted to install version 2.5.
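> >
> > Roughly something like this (the exact package name and version string will depend on what apt-cache reports on your system; on some Ubuntu releases the emulator actually comes from qemu-kvm/qemu-system-x86 rather than the qemu metapackage):
> >
> >   apt-cache policy qemu
> >   sudo apt-get install qemu=2.5\*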
>
> >
> > I hope this helps.
> >
> > Cheers,
> > Gorka.
>
> >
>
> >
>
> >
>
> > >
>
> >
>
> > > openstack volume show -c multiattach -c status sneha1
> > > +-------------+-----------+
> > > | Field       | Value     |
> > > +-------------+-----------+
> > > | multiattach | True      |
> > > | status      | available |
> > > +-------------+-----------+
>
> >
>
> > >
>
> >
>
> > > cinder extra-specs-list
> > > +--------------------------------------+-------------+--------------------------------------------------------------------+
> > > | ID                                   | Name        | extra_specs                                                        |
> > > +--------------------------------------+-------------+--------------------------------------------------------------------+
> > > | bd077fde-51c3-4581-80d5-5855e8ab2f6b | 3pariscsi_1 | {'volume_backend_name': '3pariscsi_1', 'multiattach': '<is> True'} |
> > > +--------------------------------------+-------------+--------------------------------------------------------------------+
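> > >
> > > (For reference, an extra spec like that is typically set on the volume type with something along these lines; the type name here just matches the backend above:
> > > cinder type-key 3pariscsi_1 set multiattach="<is> True")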
>
> >
>
> > >
>
> >
>
> > >
>
> >
>
> > > echo $OS_COMPUTE_API_VERSION
> > > 2.60
> > >
> > > pip list | grep python-novaclient
> > > DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
> > > python-novaclient            13.0.0
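> > >
> > > (I am using microversion 2.60 because, as far as I understand, that is the compute API microversion that added multiattach support. To rule out client defaults, the attach can also be attempted with the microversion forced explicitly, e.g.:
> > > openstack --os-compute-api-version 2.60 server add volume <server> <volume>)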
>
> >
>
> > >
>
> >
>
> > > How do I fix this version issue on my setup so I can proceed? Please help.
>
> >
>
> > >
>
> >
>
> > > Thanks & Regards,
>
> >
>
> > > Sneha Rai
>
>
>
>


