Hi Team,

I am currently working on the multiattach feature for the HPE 3PAR Cinder driver. While setting up devstack (on stable/queens) for this, I made the following change in local.conf:

[[local|localrc]]
ENABLE_VOLUME_MULTIATTACH=True
ENABLE_UBUNTU_CLOUD_ARCHIVE=False

/etc/cinder/cinder.conf:

[3pariscsi_1]
hpe3par_api_url = https://192.168.1.7:8080/api/v1
hpe3par_username = user
hpe3par_password = password
san_ip = 192.168.1.7
san_login = user
san_password = password
volume_backend_name = 3pariscsi_1
hpe3par_cpg = my_cpg
hpe3par_iscsi_ips = 192.168.11.2,192.168.11.3
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_iscsi_chap_enabled = True
hpe3par_debug = True
image_volume_cache_enabled = True

/etc/cinder/policy.json:

'volume:multiattach': 'rule:admin_or_owner'

I also applied the change from https://review.opendev.org/#/c/560067/2/cinder/volume/drivers/hpe/hpe_3par_c... in the code.

But I am getting the below error in the nova log:

Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [None req-2cda6e90-fd45-4bfe-960a-7fca9ba4abab demo admin] [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Instance failed block device setup: MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Traceback (most recent call last):
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/compute/manager.py", line 1615, in _prep_block_device
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] wait_func=self._await_block_device_map_created)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 840, in attach_block_devices
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] _log_and_attach(device)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 837, in _log_and_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] bdm.attach(*attach_args, **attach_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] ret_val = method(obj, context, *args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 620, in attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] virt_driver, do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] return f(*args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 617, in _do_locked_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] self._do_attach(*args, **_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 602, in _do_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 509, in _volume_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] volume_id=volume_id)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]

Apr 29 05:41:20 CSSOSBE04-B09 nova-compute[20455]: DEBUG nova.virt.libvirt.driver [-] Volume multiattach is not supported based on current versions of QEMU and libvirt. QEMU must be less than 2.10 or libvirt must be greater than or equal to 3.10. {{(pid=20455) _set_multiattach_support /opt/stack/nova/nova/virt/libvirt/driver.py:619}}

stack@CSSOSBE04-B09:/tmp$ virsh --version
3.6.0
stack@CSSOSBE04-B09:/tmp$ kvm --version
QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.8~cloud1)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers

openstack volume show -c multiattach -c status sneha1
+-------------+-----------+
| Field       | Value     |
+-------------+-----------+
| multiattach | True      |
| status      | available |
+-------------+-----------+

cinder extra-specs-list
+--------------------------------------+-------------+--------------------------------------------------------------------+
| ID                                   | Name        | extra_specs                                                        |
+--------------------------------------+-------------+--------------------------------------------------------------------+
| bd077fde-51c3-4581-80d5-5855e8ab2f6b | 3pariscsi_1 | {'volume_backend_name': '3pariscsi_1', 'multiattach': '<is> True'} |
+--------------------------------------+-------------+--------------------------------------------------------------------+

echo $OS_COMPUTE_API_VERSION
2.60

pip list | grep python-novaclient
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
python-novaclient 13.0.0

How do I fix this version issue on my setup to proceed? Please help.

Thanks & Regards,
Sneha Rai
On 02/05, RAI, SNEHA wrote:
Hi Sneha,

I don't know much about this side of Nova, but reading the log error I would say that you either need to upgrade your libvirt version from 3.6.0 to 3.10 or later, or downgrade your QEMU version to something prior to 2.10. The latter is probably easier.

I don't use Ubuntu, but according to the Internet you can list the available versions with "apt-cache policy qemu" and then install or downgrade to a specific version with, for example, "sudo apt-get install qemu=2.5\*" if you wanted version 2.5.

I hope this helps.

Cheers,
Gorka.
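For what it's worth, the gate described by that nova-compute log message ("QEMU must be less than 2.10 or libvirt must be greater than or equal to 3.10") can be sketched as below. This is an illustrative reconstruction inferred from the message text, not nova's actual `_set_multiattach_support` code:

```python
# Sketch of the multiattach version gate described in the nova-compute log.
# Thresholds are taken from the log message; this is not nova's real code.

MIN_LIBVIRT = (3, 10, 0)   # libvirt >= 3.10 supports multiattach itself
MAX_QEMU = (2, 10, 0)      # otherwise QEMU must predate 2.10

def parse_version(text):
    """Turn a dotted version string like '3.6.0' into a comparable int tuple."""
    return tuple(int(part) for part in text.split('.'))

def multiattach_supported(libvirt_version, qemu_version):
    return (parse_version(libvirt_version) >= MIN_LIBVIRT
            or parse_version(qemu_version) < MAX_QEMU)

# The combination from the thread (libvirt 3.6.0, QEMU 2.10.1) fails both arms:
print(multiattach_supported('3.6.0', '2.10.1'))  # False
# Downgrading QEMU below 2.10, as suggested above, satisfies the second arm:
print(multiattach_supported('3.6.0', '2.5.0'))   # True
```

This also shows why either fix works independently: upgrading libvirt to 3.10+ satisfies the first arm without touching QEMU.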
Thanks Gorka for your response.

I have changed the versions of libvirt and qemu on my host, and I am able to move past the error mentioned in my last email.

Current versions of libvirt and qemu:

root@CSSOSBE04-B09:/etc# libvirtd --version
libvirtd (libvirt) 1.3.1
root@CSSOSBE04-B09:/etc# kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice Bellard

I also made a change in /etc/nova/nova.conf and set virt_type=qemu; earlier it was set to kvm. I restarted all nova services after the changes, but one nova service (nova-compute) was disabled and its state was down.

root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| Id                                   | Binary           | Host          | Zone     | Status   | State | Updated_at                 | Disabled Reason                     | Forced down |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:59.000000 | -                                   | False       |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:49.000000 | -                                   | False       |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:59.000000 | -                                   | False       |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor   | CSSOSBE04-B09 | internal | enabled  | up    | 2019-05-10T05:48:54.000000 | -                                   | False       |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute     | CSSOSBE04-B09 | nova     | disabled | down  | 2019-05-07T22:11:06.000000 | AUTO: Connection to libvirt lost: 1 | False       |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+

So, I manually enabled the service, but the state was still down.

root@CSSOSBE04-B09:/etc# nova service-enable 9c700ee1-1694-479b-afc0-1fd37c1a5561
+--------------------------------------+---------------+--------------+---------+
| ID                                   | Host          | Binary       | Status  |
+--------------------------------------+---------------+--------------+---------+
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | CSSOSBE04-B09 | nova-compute | enabled |
+--------------------------------------+---------------+--------------+---------+

root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:19.000000 | -               | False       |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor   | CSSOSBE04-B09 | internal | enabled | up    | 2019-05-10T05:49:14.000000 | -               | False       |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute     | CSSOSBE04-B09 | nova     | enabled | down  | 2019-05-10T05:49:14.000000 | -               | False       |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+

So, now when I try to attach a volume to a nova instance, I get the below error. Because the nova-compute service is down, the request fails ComputeFilter validation and the scheduler reports "No valid host".

May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter RetryFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter AvailabilityZoneFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filters.compute_filter [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] (CSSOSBE04-B09, CSSOSBE04-B09) ram: 30810MB disk: 1737728MB io_ops: 0 instances: 1 is disabled, reason: AUTO: Connection to libvirt lost: 1 {{(pid=21775) host_passes /opt/stack/nova/nova/scheduler/filters/compute_filter.py:42}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter ComputeFilter returned 0 hosts
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: [('RetryFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('AvailabilityZoneFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('ComputeFilter', None)] {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:129}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtered [] {{(pid=21775) _get_sorted_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:404}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] There are 0 hosts available but 1 instances requested to build. {{(pid=21775) _ensure_sufficient_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:279}}
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: ERROR nova.conductor.manager [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Failed to schedule instances: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: Traceback (most recent call last):
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 226, in inner
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: return func(*args, **kwargs)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/manager.py", line 154, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: claimed_instance_uuids)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in _ensure_sufficient_hosts
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: raise exception.NoValidHost(reason=reason)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: NoValidHost: No valid host was found. There are not enough hosts available.

I need help understanding how to fix this error. For detailed logs, please refer to the attached syslog.

Thanks & Regards,
Sneha Rai

-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Friday, May 10, 2019 2:56 PM
To: RAI, SNEHA <sneha.rai@hpe.com>
Cc: openstack-dev@lists.openstack.org
Subject: Re: Help needed to Support Multi-attach feature

On 02/05, RAI, SNEHA wrote:
On 10/05, RAI, SNEHA wrote:
Thanks Gorka for your response.
I have changed the version of libvirt and qemu on my host and I am able to move past the previous error mentioned in my last email.
Current versions of libvirt and qemu: root@CSSOSBE04-B09:/etc# libvirtd --version libvirtd (libvirt) 1.3.1 root@CSSOSBE04-B09:/etc# kvm --version QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice Bellard
Also, I made a change in /etc/nova/nova.conf and set virt_type=qemu. Earlier it was set to kvm. I restarted all nova services post the changes but I can see one nova service was disabled and state was down.
root@CSSOSBE04-B09:/etc# nova service-list +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down | +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+ | 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False | | ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:49.000000 | - | False | | bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False | | 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:54.000000 | - | False | | 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | disabled | down | 2019-05-07T22:11:06.000000 | AUTO: Connection to libvirt lost: 1 | False | +--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
So, I manually enabled the service, but the state was still down.
root@CSSOSBE04-B09:/etc# nova service-enable 9c700ee1-1694-479b-afc0-1fd37c1a5561
+--------------------------------------+---------------+--------------+---------+
| ID | Host | Binary | Status |
+--------------------------------------+---------------+--------------+---------+
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | CSSOSBE04-B09 | nova-compute | enabled |
+--------------------------------------+---------------+--------------+---------+
root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:14.000000 | - | False |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | enabled | down | 2019-05-10T05:49:14.000000 | - | False |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
Hi,

If it appears as down it's probably because there is an issue during the service's start procedure. You can look in the logs to see what messages appeared during the start, or tail the logs and restart the service to see what error appears there.

Cheers,
Gorka.
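On a devstack host that runs the services under systemd (n-cpu is devstack's default unit name for nova-compute; adjust if your layout differs), tailing the logs across a restart might look like:

```shell
# Show current status and the reason the service last exited
sudo systemctl status devstack@n-cpu.service

# Review the most recent nova-compute journal entries
sudo journalctl -u devstack@n-cpu.service -n 100 --no-pager

# Restart, then follow the journal to catch the startup error live
sudo systemctl restart devstack@n-cpu.service
sudo journalctl -u devstack@n-cpu.service -f
```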
So now, when I try to attach a volume to a nova instance, I get the below error. As one of the services is down, it fails filter validation for nova-compute and gives a "No valid host" error.
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter RetryFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter AvailabilityZoneFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filters.compute_filter [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] (CSSOSBE04-B09, CSSOSBE04-B09) ram: 30810MB disk: 1737728MB io_ops: 0 instances: 1 is disabled, reason: AUTO: Connection to libvirt lost: 1 {{(pid=21775) host_passes /opt/stack/nova/nova/scheduler/filters/compute_filter.py:42}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter ComputeFilter returned 0 hosts
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: [('RetryFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('AvailabilityZoneFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('ComputeFilter', None)] {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:129}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtered [] {{(pid=21775) _get_sorted_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:404}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] There are 0 hosts available but 1 instances requested to build. {{(pid=21775) _ensure_sufficient_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:279}}
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: ERROR nova.conductor.manager [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Failed to schedule instances: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: Traceback (most recent call last):
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 226, in inner
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     return func(*args, **kwargs)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/manager.py", line 154, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     claimed_instance_uuids)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in _ensure_sufficient_hosts
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]:     raise exception.NoValidHost(reason=reason)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: NoValidHost: No valid host was found. There are not enough hosts available.
I need help understanding how to fix this error. For detailed logs, please refer to the attached syslog.
Thanks & Regards, Sneha Rai
-----Original Message----- From: Gorka Eguileor [mailto:geguileo@redhat.com] Sent: Friday, May 10, 2019 2:56 PM To: RAI, SNEHA <sneha.rai@hpe.com> Cc: openstack-dev@lists.openstack.org Subject: Re: Help needed to Support Multi-attach feature
On 02/05, RAI, SNEHA wrote:
Hi Team,
I am currently working on multiattach feature for HPE 3PAR cinder driver.
For this, while setting up devstack(on stable/queens) I made below
change in the local.conf [[local|localrc]]
ENABLE_VOLUME_MULTIATTACH=True ENABLE_UBUNTU_CLOUD_ARCHIVE=False
/etc/cinder/cinder.conf:
[3pariscsi_1]
hpe3par_api_url = https://192.168.1.7:8080/api/v1
hpe3par_username = user
hpe3par_password = password
san_ip = 192.168.1.7
san_login = user
san_password = password
volume_backend_name = 3pariscsi_1
hpe3par_cpg = my_cpg
hpe3par_iscsi_ips = 192.168.11.2,192.168.11.3
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_iscsi_chap_enabled = True
hpe3par_debug = True
image_volume_cache_enabled = True
/etc/cinder/policy.json:
'volume:multiattach': 'rule:admin_or_owner'
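A side note on the policy file: /etc/cinder/policy.json is parsed as JSON, so the entry needs double quotes; the single-quoted form shown above would not parse. A minimal valid fragment might be:

```json
{
    "volume:multiattach": "rule:admin_or_owner"
}
```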
But I am getting below error in the nova log:
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [None req-2cda6e90-fd45-4bfe-960a-7fca9ba4abab demo admin] [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Instance failed block device setup: MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Traceback (most recent call last):
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/compute/manager.py", line 1615, in _prep_block_device
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] wait_func=self._await_block_device_map_created)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 840, in attach_block_devices
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] _log_and_attach(device)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 837, in _log_and_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] bdm.attach(*attach_args, **attach_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] ret_val = method(obj, context, *args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 620, in attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] virt_driver, do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] return f(*args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 617, in _do_locked_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] self._do_attach(*args, **_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 602, in _do_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 509, in _volume_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] volume_id=volume_id)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa]
Apr 29 05:41:20 CSSOSBE04-B09 nova-compute[20455]: DEBUG nova.virt.libvirt.driver [-] Volume multiattach is not supported based on current versions of QEMU and libvirt. QEMU must be less than 2.10 or libvirt must be greater than or equal to 3.10. {{(pid=20455) _set_multiattach_support /opt/stack/nova/nova/virt/libvirt/driver.py:619}}
stack@CSSOSBE04-B09:/tmp$ virsh --version
3.6.0
stack@CSSOSBE04-B09:/tmp$ kvm --version
QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.8~cloud1) Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
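The gate in that DEBUG message is a plain version comparison: supported when QEMU < 2.10 or libvirt >= 3.10. It can be sketched in shell with `sort -V` (the helper names here are mine, not Nova's):

```shell
# version_ge A B: succeeds if version A >= version B (GNU sort -V ordering)
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# multiattach_supported QEMU_VER LIBVIRT_VER: prints yes/no per the rule
# in the log message (QEMU < 2.10, or libvirt >= 3.10)
multiattach_supported() {
  if ! version_ge "$1" "2.10" || version_ge "$2" "3.10"; then
    echo yes
  else
    echo no
  fi
}

multiattach_supported 2.10.1 3.6.0   # the versions above: prints no
multiattach_supported 2.5.0 1.3.1    # after the downgrade: prints yes
```

This also shows why the downgrade path works: QEMU 2.5.0 satisfies the "less than 2.10" branch even with an old libvirt.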
Hi Sneha,
I don't know much about this side of Nova, but reading the log error I would say that you either need to update your libvirt version from 3.6.0 to 3.10, or you need to downgrade your QEMU version to something prior to 2.10.
The latter is probably easier.
I don't use Ubuntu, but according to the Internet you can list available versions with "apt-cache policy qemu" and then install or downgrade to a specific version with "sudo apt-get install qemu=2.5\*" if you wanted to install version 2.5.
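Spelled out as commands (the `2.5\*` pattern must match a version string that `apt-cache` actually lists on your system):

```shell
# List the qemu versions available from the configured repositories
apt-cache policy qemu

# Downgrade/install a specific 2.5.x build
sudo apt-get install qemu=2.5\*
```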
I hope this helps.
Cheers,
Gorka.
openstack volume show -c multiattach -c status sneha1
+-------------+-----------+
| Field | Value |
+-------------+-----------+
| multiattach | True |
| status | available |
+-------------+-----------+
cinder extra-specs-list
+--------------------------------------+-------------+--------------------------------------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+-------------+--------------------------------------------------------------------+
| bd077fde-51c3-4581-80d5-5855e8ab2f6b | 3pariscsi_1 | {'volume_backend_name': '3pariscsi_1', 'multiattach': '<is> True'} |
+--------------------------------------+-------------+--------------------------------------------------------------------+
echo $OS_COMPUTE_API_VERSION
2.60
pip list | grep python-novaclient
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
python-novaclient 13.0.0
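One thing worth noting here: multiattach is only honoured from compute API microversion 2.60 onwards, so the client has to request that version explicitly (the server name below is a placeholder; `sneha1` is the volume shown above):

```shell
export OS_COMPUTE_API_VERSION=2.60
openstack --os-compute-api-version 2.60 server add volume <server> sneha1
```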
How do I fix this version issue on my setup to proceed? Please help.
Thanks & Regards,
Sneha Rai
Thanks Gorka for your response. The main reason is "AUTO: Connection to libvirt lost: 1". Not sure why the connection is being lost. I tried restarting all the nova services too, but no luck.

Regards,
Sneha Rai

-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Monday, May 13, 2019 2:21 PM
To: RAI, SNEHA <sneha.rai@hpe.com>
Cc: openstack-dev@lists.openstack.org
Subject: Re: Help needed to Support Multi-attach feature
Thanks Gorka for your response.
I have changed the version of libvirt and qemu on my host and I am able to move past the previous error mentioned in my last email.
Current versions of libvirt and qemu:
root@CSSOSBE04-B09:/etc# libvirtd --version libvirtd (libvirt) 1.3.1
root@CSSOSBE04-B09:/etc# kvm --version QEMU emulator version 2.5.0
(Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice
Bellard
Also, I made a change in /etc/nova/nova.conf and set virt_type=qemu. Earlier it was set to kvm.
I restarted all nova services post the changes but I can see one nova service was disabled and state was down.
root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:49.000000 | - | False |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:54.000000 | - | False |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | disabled | down | 2019-05-07T22:11:06.000000 | AUTO: Connection to libvirt lost: 1 | False |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
So, I manually enabled the service, but the state was still down.
root@CSSOSBE04-B09:/etc# nova service-enable
9c700ee1-1694-479b-afc0-1fd37c1a5561
+--------------------------------------+---------------+--------------+---------+
| ID | Host | Binary | Status |
+--------------------------------------+---------------+--------------+---------+
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | CSSOSBE04-B09 | nova-compute
| | enabled |
+--------------------------------------+---------------+--------------+---------+
root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:14.000000 | - | False |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | enabled | down | 2019-05-10T05:49:14.000000 | - | False |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
Hi, If it appears as down it's probably because there is an issue during the service's start procedure. You can look in the logs to see what messages appeared during the start or tail the logs and restart the service to see what error appears there. Cheers, Gorka.
So, now when I try to attach a volume to nova instance, I get the below error. As one of the service is down it fails in filter validation for nova-compute and gives us "No host" error.
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;32mDEBUG
nova.filters [#033[01;36mNone req-b0ca81b3-a2b6-492e-9036-249644b94349
#033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mFilter
RetryFilter returned 1 host(s)#033[00m #033[00;33m{{(pid=21775)
get_filtered_objects /opt/stack/nova/nova/filters.py:104}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;32mDEBUG
nova.filters [#033[01;36mNone req-b0ca81b3-a2b6-492e-9036-249644b94349
#033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mFilter
AvailabilityZoneFilter returned 1 host(s)#033[00m
#033[00;33m{{(pid=21775) get_filtered_objects
/opt/stack/nova/nova/filters.py:104}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;32mDEBUG
nova.scheduler.filters.compute_filter [#033[01;36mNone
req-b0ca81b3-a2b6-492e-9036-249644b94349 #033[00;36mdemo
admin#033[00;32m] #033[01;35m#033[00;32m(CSSOSBE04-B09, CSSOSBE04-B09)
ram: 30810MB disk: 1737728MB io_ops: 0 instances: 1 is disabled,
reason: AUTO: Connection to libvirt lost: 1#033[00m
#033[00;33m{{(pid=21775) host_passes
/opt/stack/nova/nova/scheduler/filters/compute_filter.py:42}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;36mINFO
nova.filters [#033[01;36mNone req-b0ca81b3-a2b6-492e-9036-249644b94349
#033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mFilter
ComputeFilter returned 0 hosts#033[00m May 10 10:43:00 CSSOSBE04-B09
nova-scheduler[21775]: #033[00;32mDEBUG nova.filters [#033[01;36mNone
req-b0ca81b3-a2b6-492e-9036-249644b94349 #033[00;36mdemo
admin#033[00;32m] #033[01;35m#033[00;32mFiltering removed all hosts
for the request with instance ID
'1735ece5-d187-454a-aab1-12650646a2ec'. Filter results:
[('RetryFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]),
('AvailabilityZoneFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]),
('ComputeFilter', None)]#033[00m #033[00;33m{{(pid=21775)
get_filtered_objects /opt/stack/nova/nova/filters.py:129}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;36mINFO
nova.filters [#033[01;36mNone req-b0ca81b3-a2b6-492e-9036-249644b94349
#033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mFiltering
removed all hosts for the request with instance ID
'1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: ['RetryFilter:
(start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)',
'ComputeFilter: (start: 1, end: 0)']#033[00m May 10 10:43:00
CSSOSBE04-B09 nova-scheduler[21775]: #033[00;32mDEBUG
nova.scheduler.filter_scheduler [#033[01;36mNone
req-b0ca81b3-a2b6-492e-9036-249644b94349 #033[00;36mdemo
admin#033[00;32m] #033[01;35m#033[00;32mFiltered []#033[00m
#033[00;33m{{(pid=21775) _get_sorted_hosts
/opt/stack/nova/nova/scheduler/filter_scheduler.py:404}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: #033[00;32mDEBUG
nova.scheduler.filter_scheduler [#033[01;36mNone
req-b0ca81b3-a2b6-492e-9036-249644b94349 #033[00;36mdemo
admin#033[00;32m] #033[01;35m#033[00;32mThere are 0 hosts available
but 1 instances requested to build.#033[00m #033[00;33m{{(pid=21775)
_ensure_sufficient_hosts
/opt/stack/nova/nova/scheduler/filter_scheduler.py:279}}#033[00m
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: #033[01;31mERROR nova.conductor.manager [#033[01;36mNone req-b0ca81b3-a2b6-492e-9036-249644b94349 #033[00;36mdemo admin#033[01;31m] #033[01;35m#033[01;31mFailed to schedule instances#033[00m: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: Traceback (most recent call last):
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 226, in inner
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: return func(*args, **kwargs)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/manager.py", line 154, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: claimed_instance_uuids)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in _ensure_sufficient_hosts
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: raise exception.NoValidHost(reason=reason)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: NoValidHost: No valid host was found. There are not enough hosts available.
Need help in understanding on how to fix this error. For detailed logs, please refer the attached syslog.
Thanks & Regards,
Sneha Rai
-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Friday, May 10, 2019 2:56 PM
To: RAI, SNEHA <sneha.rai@hpe.com<mailto:sneha.rai@hpe.com>>
Cc: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: Help needed to Support Multi-attach feature
On 02/05, RAI, SNEHA wrote:
Hi Team,
I am currently working on multiattach feature for HPE 3PAR cinder driver.
For this, while setting up devstack(on stable/queens) I made below
change in the local.conf [[local|localrc]]
ENABLE_VOLUME_MULTIATTACH=True ENABLE_UBUNTU_CLOUD_ARCHIVE=False
/etc/cinder/cinder.conf:
[3pariscsi_1]
hpe3par_api_url =
https://urldefense.proofpoint.com/v2/url?u=https-3A__192.168.1.7-3A8
08
0_api_v1&d=DwIBAg&c=C5b8zRQO1miGmBeVZ2LFWg&r=8drU3i56Z5sQ_Ltpya89LTN
n3
xDSwtigjYbGrSY1lM8&m=zTRvI4nj8MoP0_z5MmxTYwKiNNW6addwP4L5VFG4wkg&s=a
2D
HbzzRtbbBPz0_kfodZv5X1HxbN_hFxte5rEZabAg&e=
hpe3par_username = user
hpe3par_password = password
san_ip = 192.168.1.7
san_login = user
san_password = password
volume_backend_name = 3pariscsi_1
hpe3par_cpg = my_cpg
hpe3par_iscsi_ips = 192.168.11.2,192.168.11.3 volume_driver =
cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_iscsi_chap_enabled = True
hpe3par_debug = True
image_volume_cache_enabled = True
/etc/cinder/policy.json:
'volume:multiattach': 'rule:admin_or_owner'
But I am getting below error in the nova log:
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [None req-2cda6e90-fd45-4bfe-960a-7fca9ba4abab demo admin] [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Instance failed block device setup: MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] Traceback (most recent call last):
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/compute/manager.py", line 1615, in _prep_block_device
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] wait_func=self._await_block_device_map_created)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 840, in attach_block_devices
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] _log_and_attach(device)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 837, in _log_and_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] bdm.attach(*attach_args, **attach_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] ret_val = method(obj, context, *args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 620, in attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] virt_driver, do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] return f(*args, **kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 617, in _do_locked_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] self._do_attach(*args, **_kwargs)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 602, in _do_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] do_driver_attach)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] File "/opt/stack/nova/nova/virt/block_device.py", line 509, in _volume_attach
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] volume_id=volume_id)
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR nova.compute.manager [instance: fcaa5a47-fc48-489d-9827-6533bfd1a9fa] MultiattachNotSupportedByVirtDriver: Volume dc25f09a-6ae1-4b06-a814-73a8afaba62f has 'multiattach' set, which is not supported for this instance.
Apr 29 04:23:04 CSSOSBE04-B09 nova-compute[31396]: ERROR
nova.compute.manager [instance:
fcaa5a47-fc48-489d-9827-6533bfd1a9fa]
Apr 29 05:41:20 CSSOSBE04-B09 nova-compute[20455]: DEBUG
nova.virt.libvirt.driver [-] Volume multiattach is not supported
based
on current versions of QEMU and libvirt. QEMU must be less than 2.10
or libvirt must be greater than or equal to 3.10. {{(pid=20455)
_set_multiattach_support
/opt/stack/nova/nova/virt/libvirt/driver.py:619}}
stack@CSSOSBE04-B09:/tmp$ virsh --version
3.6.0
stack@CSSOSBE04-B09:/tmp$ kvm --version QEMU emulator version
2.10.1(Debian 1:2.10+dfsg-0ubuntu3.8~cloud1) Copyright (c) 2003-2017
Fabrice Bellard and the QEMU Project developers
Hi Sneha,
I don't know much about this side of Nova, but reading the log error I would say that you either need to update your libvirt version from 3.6.0 to 3.10, or you need to downgrade your QEMU version to something prior to 2.10.
The later is probably easier.
I don't use Ubuntu, but according to the Internet you can list
available versions with "apt-cache policy qemu" and then install or
downgrade to the specific version with "sudo apt-get install
qemu=2.5\*" if you wanted to install version 2.5
I hope this helps.
Cheers,
Gorka.
openstack volume show -c multiattach -c status sneha1
+-------------+-----------+
| Field | Value |
+-------------+-----------+
| multiattach | True |
| status | available |
+-------------+-----------+
cinder extra-specs-list
+--------------------------------------+-------------+--------------------------------------------------------------------+
| ID                                   | Name        | extra_specs                                                        |
+--------------------------------------+-------------+--------------------------------------------------------------------+
| bd077fde-51c3-4581-80d5-5855e8ab2f6b | 3pariscsi_1 | {'volume_backend_name': '3pariscsi_1', 'multiattach': '<is> True'} |
+--------------------------------------+-------------+--------------------------------------------------------------------+
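As an aside, the '<is> True' value in that extra spec uses the scheduler's extra-spec operator syntax rather than a plain string compare. A minimal illustrative matcher (hypothetical, not the actual Cinder/Nova implementation) for how such a value is interpreted:

```python
def match_extra_spec(value, requested):
    # "<is> True" compares booleans case-insensitively; a plain string
    # value falls back to exact string equality.
    if value.startswith("<is>"):
        return value.split(None, 1)[1].lower() == str(requested).lower()
    return value == str(requested)

# A multiattach-capable backend reports multiattach=True, which matches:
print(match_extra_spec("<is> True", True))  # True
```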
echo $OS_COMPUTE_API_VERSION
2.60
pip list | grep python-novaclient
python-novaclient 13.0.0
How do I fix this version issue on my setup to proceed? Please help.
Thanks & Regards,
Sneha Rai
On 13/05, RAI, SNEHA wrote:
Thanks Gorka for your response. The main reason is "AUTO: Connection to libvirt lost: 1".
Not sure, why the connection is being lost. I tried restarting all the nova services too, but no luck.
Hi,
I would confirm that libvirtd.service, virtlockd.socket, and virtlogd.socket are loaded and active.
Cheers,
Gorka.
Regards,
Sneha Rai
-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Monday, May 13, 2019 2:21 PM
To: RAI, SNEHA <sneha.rai@hpe.com>
Cc: openstack-dev@lists.openstack.org
Subject: Re: Help needed to Support Multi-attach feature
On 10/05, RAI, SNEHA wrote:
Thanks Gorka for your response.
I have changed the version of libvirt and qemu on my host and I am able to move past the previous error mentioned in my last email.
Current versions of libvirt and qemu:
root@CSSOSBE04-B09:/etc# libvirtd --version
libvirtd (libvirt) 1.3.1
root@CSSOSBE04-B09:/etc# kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice Bellard
Also, I made a change in /etc/nova/nova.conf and set virt_type=qemu. Earlier it was set to kvm.
I restarted all nova services post the changes but I can see one nova service was disabled and state was down.
root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:49.000000 | - | False |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:59.000000 | - | False |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:48:54.000000 | - | False |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | disabled | down | 2019-05-07T22:11:06.000000 | AUTO: Connection to libvirt lost: 1 | False |
+--------------------------------------+------------------+---------------+----------+----------+-------+----------------------------+-------------------------------------+-------------+
So, I manually enabled the service, but the state was still down.
root@CSSOSBE04-B09:/etc# nova service-enable 9c700ee1-1694-479b-afc0-1fd37c1a5561
+--------------------------------------+---------------+--------------+---------+
| ID                                   | Host          | Binary       | Status  |
+--------------------------------------+---------------+--------------+---------+
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | CSSOSBE04-B09 | nova-compute | enabled |
+--------------------------------------+---------------+--------------+---------+
root@CSSOSBE04-B09:/etc# nova service-list
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 1ebcd1f6-b7dc-40ce-8d7b-95d60503c0ff | nova-scheduler | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| ed82277c-d2e0-4a1a-adf6-9bcdcc50ba29 | nova-consoleauth | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| bc2b6703-7a1e-4f07-96b9-35cbb14398d5 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:19.000000 | - | False |
| 72ecbc1d-1b47-4f55-a18d-de2fbf1771e9 | nova-conductor | CSSOSBE04-B09 | internal | enabled | up | 2019-05-10T05:49:14.000000 | - | False |
| 9c700ee1-1694-479b-afc0-1fd37c1a5561 | nova-compute | CSSOSBE04-B09 | nova | enabled | down | 2019-05-10T05:49:14.000000 | - | False |
+--------------------------------------+------------------+---------------+----------+---------+-------+----------------------------+-----------------+-------------+
Hi,
If it appears as down it's probably because there is an issue during the service's start procedure.
You can look in the logs to see what messages appeared during the start or tail the logs and restart the service to see what error appears there.
Cheers,
Gorka.
So, now when I try to attach a volume to a nova instance, I get the below error. As one of the services is down, the request fails filter validation for nova-compute and gives a "No valid host" error.
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter RetryFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter AvailabilityZoneFilter returned 1 host(s) {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:104}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filters.compute_filter [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] (CSSOSBE04-B09, CSSOSBE04-B09) ram: 30810MB disk: 1737728MB io_ops: 0 instances: 1 is disabled, reason: AUTO: Connection to libvirt lost: 1 {{(pid=21775) host_passes /opt/stack/nova/nova/scheduler/filters/compute_filter.py:42}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filter ComputeFilter returned 0 hosts
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: [('RetryFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('AvailabilityZoneFilter', [(u'CSSOSBE04-B09', u'CSSOSBE04-B09')]), ('ComputeFilter', None)] {{(pid=21775) get_filtered_objects /opt/stack/nova/nova/filters.py:129}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: INFO nova.filters [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtering removed all hosts for the request with instance ID '1735ece5-d187-454a-aab1-12650646a2ec'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Filtered [] {{(pid=21775) _get_sorted_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:404}}
May 10 10:43:00 CSSOSBE04-B09 nova-scheduler[21775]: DEBUG nova.scheduler.filter_scheduler [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] There are 0 hosts available but 1 instances requested to build. {{(pid=21775) _ensure_sufficient_hosts /opt/stack/nova/nova/scheduler/filter_scheduler.py:279}}
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: ERROR nova.conductor.manager [None req-b0ca81b3-a2b6-492e-9036-249644b94349 demo admin] Failed to schedule instances: NoValidHost_Remote: No valid host was found. There are not enough hosts available.
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: Traceback (most recent call last):
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 226, in inner
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: return func(*args, **kwargs)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/manager.py", line 154, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: allocation_request_version, return_alternates)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: claimed_instance_uuids)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in _ensure_sufficient_hosts
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: raise exception.NoValidHost(reason=reason)
May 10 10:43:00 CSSOSBE04-B09 nova-conductor[21789]: NoValidHost: No valid host was found. There are not enough hosts available.
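The scheduler log above shows each filter narrowing the candidate host list until ComputeFilter drops the disabled compute, which is what produces NoValidHost. The progression can be sketched like this (illustrative only, not Nova's scheduler code):

```python
def run_filters(hosts, filters):
    # Apply each (name, predicate) filter in turn, recording start/end
    # counts the way nova.filters logs them.
    results = []
    for name, passes in filters:
        start = len(hosts)
        hosts = [h for h in hosts if passes(h)]
        results.append("%s: (start: %d, end: %d)" % (name, start, len(hosts)))
    return hosts, results

hosts = [{"name": "CSSOSBE04-B09", "disabled": True}]  # libvirt connection lost
filters = [
    ("RetryFilter", lambda h: True),
    ("AvailabilityZoneFilter", lambda h: True),
    ("ComputeFilter", lambda h: not h["disabled"]),   # rejects disabled services
]
remaining, log = run_filters(hosts, filters)
print(log)        # same progression the INFO log line reports
print(remaining)  # empty list -> scheduler raises NoValidHost
```

In other words, the NoValidHost is a symptom; the root cause is the single compute service being down.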
Need help in understanding how to fix this error. For detailed logs, please refer to the attached syslog.
Thanks & Regards,
Sneha Rai
On Fri, May 10, 2019 at 04:51:07PM +0000, RAI, SNEHA wrote:
Thanks Gorka for your response.
I have changed the version of libvirt and qemu on my host and I am able to move past the previous error mentioned in my last email.
Current versions of libvirt and qemu:
root@CSSOSBE04-B09:/etc# libvirtd --version
libvirtd (libvirt) 1.3.1
root@CSSOSBE04-B09:/etc# kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.36), Copyright (c) 2003-2008 Fabrice Bellard
Also, I made a change in /etc/nova/nova.conf and set virt_type=qemu. Earlier it was set to kvm. I restarted all nova services post the changes but I can see one nova service was disabled and state was down.
Not sure if it is related or not, but I don't believe you want to change virt_type to "qemu". That should stay "kvm".
On 14/05, RAI, SNEHA wrote:
Thanks Sean for your response.
Setting virt_type to kvm doesn’t help. n-cpu service is failing to come up.
Journalctl logs of n-cpu service:
May 14 02:07:05 CSSOSBE04-B09 systemd[1]: Started Devstack devstack@n-cpu.service.
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' {{(pid=15989) initialize /usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:46}}
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' {{(pid=15989) initialize /usr/local/lib/python2.7/dist-
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: WARNING oslo_config.cfg [None req-9dc9d20c-b002-4b34-a123-81612cdc47fc None None] Option "use_neutron" from group "DEFAULT" is deprecated for removal (
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: nova-network is deprecated, as are any related configuration options.
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ). Its value may be silently ignored in the future.
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: DEBUG oslo_policy.policy [None req-9dc9d20c-b002-4b34-a123-81612cdc47fc None None] The policy file policy.json could not be found. {{(pid=15989) load_rules /usr/local/lib/python2.7/dist-
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: INFO nova.virt.driver [None req-9dc9d20c-b002-4b34-a123-81612cdc47fc None None] Loading compute driver 'libvirt.LibvirtDriver'
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver [None req-9dc9d20c-b002-4b34-a123-81612cdc47fc None None] Unable to load the virtualization driver: ImportError: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `L
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver Traceback (most recent call last):
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/opt/stack/nova/nova/virt/driver.py", line 1700, in load_compute_driver
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver virtapi)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver return import_class(import_str)(*args, **kwargs)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 346, in __init__
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver libvirt = importutils.import_module('libvirt')
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 73, in import_module
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver __import__(import_str)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/home/stack/.local/lib/python2.7/site-packages/libvirt.py", line 28, in <module>
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver raise lib_e
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver ImportError: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.2.0' not found (required by /home/stack/.local/lib/python2.7/site-packages/libvirtmod.so)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Main process exited, code=exited, status=1/FAILURE
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Unit entered failed state.
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Failed with result 'exit-code'.
root@CSSOSBE04-B09:/etc# sudo systemctl status devstack@n-cpu.service
● devstack@n-cpu.service - Devstack devstack@n-cpu.service
Loaded: loaded (/etc/systemd/system/devstack@n-cpu.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2019-05-14 02:07:08 IST; 7min ago
Process: 15989 ExecStart=/usr/local/bin/nova-compute --config-file /etc/nova/nova-cpu.conf (code=exited, status=1/FAILURE)
Main PID: 15989 (code=exited, status=1/FAILURE)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver libvirt = importutils.import_module('libvirt')
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 73, in import_module
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver __import__(import_str)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver File "/home/stack/.local/lib/python2.7/site-packages/libvirt.py", line 28, in <module>
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver raise lib_e
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver ImportError: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.2.0' not found (required by /home/stack/.local/lib/python2.7/site-packages/libvirtmod.so)
May 14 02:07:08 CSSOSBE04-B09 nova-compute[15989]: ERROR nova.virt.driver
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Main process exited, code=exited, status=1/FAILURE
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Unit entered failed state.
May 14 02:07:08 CSSOSBE04-B09 systemd[1]: devstack@n-cpu.service: Failed with result 'exit-code'.
Hi,
This looks like a compatibility issue between the libvirt-python package that's installed in /home/stack/.local/lib/python2.7/site-packages/ and the system's libvirt version in /usr/lib/x86_64-linux-gnu/.
If the libvirt-python package was installed from PyPI, maybe uninstalling it and reinstalling it, or installing a different version, will fix it...
Sorry for not being more helpful, but Sean and I are from the Cinder team, and all this is specific to the Nova side, so we are basically guessing here...
Cheers,
Gorka.
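One way to spot this class of mismatch up front is to compare the binding's version against the system daemon before starting the service. A hypothetical helper, assuming the usual rule of thumb that the libvirt-python binding must not be newer than the libvirtd it links against:

```python
def binding_compatible(libvirtd_version, binding_version):
    # Rule of thumb (assumption, not an official check): libvirt-python
    # links against the libvirt symbols of its own release, so a binding
    # newer than libvirtd will fail to import, as in the error above.
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(binding_version) <= to_tuple(libvirtd_version)

# libvirtd 1.3.1 with a binding that needs LIBVIRT_2.2.0 symbols -> broken
print(binding_compatible("1.3.1", "2.2.0"))  # False
```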
participants (3)
- Gorka Eguileor
- RAI, SNEHA
- Sean McGinnis