[Cinder] Problem configuring multipath on NVMe-oF
Hello guys,

OpenStack: 2025.1
OS: Ubuntu 24.04

We are having trouble configuring multipath on a NetApp backend using NVMe over TCP. After reading numerous articles on this issue, we concluded that it is necessary to operate in native multipath mode, which is enabled by default on Ubuntu 24.04, and that it is no longer necessary to keep the multipathd service active for this to work.

We ran inconclusive tests by setting the "use_multipath_for_image_xfer" and "enforce_multipath_for_image_xfer" options to true in cinder.conf. We also set the "volume_use_multipath" option to true in the [libvirt] section of nova.conf, but without success.

After creating an instance and a volume on a server with OpenStack, querying the subsystem with the nvme command shows only a single path. But when we query the backend from the same server with "nvme discover", we do get four paths.

In the os-brick code, we see that it checks a multipath variable that must be set to true...

I think this is purely a configuration issue, but the examples given by the various vendors (NetApp, Pure Storage, etc.) don't indicate anything in particular.

So, does anyone know the configuration to apply for multipath to work (cinder/nova)?

Thank you.
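For reference, a minimal sketch of the settings described above; the "netapp-backend" section name is taken from later in this thread, and file locations depend on the deployment:

# cinder.conf, in the backend section
[netapp-backend]
use_multipath_for_image_xfer = True
enforce_multipath_for_image_xfer = True

# nova.conf, on each compute node
[libvirt]
volume_use_multipath = True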
https://github.com/openstack/os-brick/commit/8d919696a9f1b1361f00aac7032647b... might be relevant.

From a Nova point of view, you should set https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... to true if you want to enable multipath for volumes in general, and https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... if you want to enforce its usage. There are some other multipath parameters, like https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.ise... for iSER volumes, but I don't believe Nova has any options specific to NVMe-oF backends.

That os-brick change seems to be addressing the issue you reported by removing the dependency on multipathd, but it depends on this Cinder change too:
https://review.opendev.org/c/openstack/cinder/+/934011
https://bugs.launchpad.net/os-brick/+bug/2085013

All of this happened in the last 7 months or so, so it should be in 2025.1. Unfortunately, I do not see any documentation change related to this that explains how to properly configure it end to end.
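One quick sanity check, as a sketch, is to confirm that the os-brick and cinder packages actually deployed on your nodes are recent enough to contain those changes (run inside whatever Python environment or container your deployment uses):

pip show os-brick | grep -i version
pip show cinder | grep -i version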
Hi Vincent,

To set the context right: currently os-brick only supports NVMe native multipathing (ANA), not dm-multipath/multipathd (as iSCSI and FC do). You can use this command to check whether your host supports NVMe native multipathing (which you mentioned your OS already does):

cat /sys/module/nvme_core/parameters/multipath
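If that parameter were ever reported as N, native NVMe multipathing can usually be enabled through a module option; a sketch for Ubuntu (the file name nvme-multipath.conf is arbitrary):

echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme-multipath.conf
sudo update-initramfs -u
# reboot, then verify:
cat /sys/module/nvme_core/parameters/multipath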
On 25/07/2025 13:00, Vincent Godin wrote:
Hello guys,
Openstack: 2025.1 OS: Ubuntu 24.04
We are having trouble configuring multipath on a NetApp backend using NVMe over TCP. After reading numerous articles on this issue, we concluded that it was necessary to operate in native multipath mode, which is enabled by default on Ubuntu 24.04, and that it is no longer necessary to keep the multipathd service active for this to work. We ran inconclusive tests by setting the "use_multipath_for_image_xfer" and "enforce_multipath_for_image_xfer" variables to true in cinder.conf.
These config options only come into play during image transfer operations, e.g. when creating a volume from an image (or an image from a volume).
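In other words, assuming standard OpenStack CLI usage, they would matter for an operation like the following (the image ID is a placeholder), not for attaching a volume to a running VM:

openstack volume create --image <image-uuid> --size 10 test-volume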
We also set the "volume_use_multipath variable" to true in the libvirt section of nova.conf, but without success.
This is the config option if you want to use multipathing while attaching volumes to nova VMs.
After creating an instance and a volume on a server with Openstack, when querying the subsystem with the nvme command, we only get a single path.
Which command did you use for this query? *nvme list-subsys* tells accurately how many paths we have associated with a subsystem.

This is a sample output from my devstack environment using an LVM + nvme_tcp configuration.

*nvme list-subsys*
nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:122a7426-007d-944a-9431-bb221a8410f9
               iopolicy=numa
\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live

You can see that the two controllers (paths), nvme1 and nvme0, reach the subsystem via 10.0.2.15 and 127.0.0.1 respectively.

*nvme list*
Node          Generic     SN                    Model  Namespace  Usage              Format       FW Rev
------------- ----------- --------------------- ------ ---------- ------------------ ------------ --------
/dev/nvme0n1  /dev/ng0n1  ee02a5d5c380ac70c8e1  Linux  0xa        1.07 GB / 1.07 GB  512 B + 0 B  6.8.0-47

You can also see that the nvme kernel driver combined the two paths into a single multipath namespace device, *nvme0n1*.

Hope that helps.

Thanks
Rajat Dhasmana
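A small aside on the native-multipath side: the I/O policy shown in that output (iopolicy=numa here, round-robin on the NetApp subsystem later in this thread) can be inspected and changed through sysfs; a sketch, assuming the subsystem is nvme-subsys0:

cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
echo round-robin | sudo tee /sys/class/nvme-subsystem/nvme-subsys0/iopolicy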
Hello Rajat,

This parameter, /sys/module/nvme_core/parameters/multipath, is set to Y. We can manually mount volumes with multipath from the same host. The real question is how to configure it the right way in OpenStack Cinder and/or Nova.

When I request a volume after an instance deployment, the command nvme list-subsys shows only one path (one namespace). When I do an nvme discover, I get four.

Thanks for your help
On Sun, Jul 27, 2025 at 9:25 PM Vincent Godin <vince.mlist@gmail.com> wrote:
Hello Rajat,
This parameter, */sys/module/nvme_core/parameters/multipath *is set to yes. We can manualy mount volumes with multipath from the same host. The real question is how to configure it the right way in Openstack Cinder and/or Nova.
Good to know all the core pieces are in place. To configure it in OpenStack (I'm referring to a devstack environment here): you mentioned setting ``volume_use_multipath`` in nova.conf, but in a devstack environment there is a separate nova-cpu.conf file specifically for the compute service. Did you try setting the config option there?
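For completeness, a sketch of where the option goes and how the compute service is typically restarted afterwards; the devstack unit name and the Kolla container name below are assumptions based on common defaults:

# devstack: /etc/nova/nova-cpu.conf ; other deployments: the compute node's nova.conf
[libvirt]
volume_use_multipath = True

# devstack:
sudo systemctl restart devstack@n-cpu
# Kolla Ansible:
docker restart nova_compute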
Also, to verify that the right value is being passed, we can check for the following log entries in the nova-compute logs:

==> get_connector_properties
<== get_connector_properties

The returned dict should contain 'multipath': True if the value is configured correctly.
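A hedged example of pulling those entries out of the logs (the Kolla log path matches what is posted later in this thread; adjust it for other deployments):

grep -E "(==>|<==) get_connector_properties" /var/log/kolla/nova/nova-compute.log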
Hello,

Here are some of the results on the host.

An instance is launched by OpenStack on the compute node:

nvme list
Node          Generic     SN                    Model                    Namespace  Usage                Format       FW Rev
------------- ----------- --------------------- ------------------------ ---------- -------------------- ------------ --------
/dev/nvme0n1  /dev/ng0n1  81O3QJXiLzBDAAAAAAAH  NetApp ONTAP Controller  0x2        16.11 GB / 16.11 GB  4 KiB + 0 B  FFFFFFFF

If we have a look at the subsystem:

nvme list-subsys
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme0 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live

I have only one path. I disconnect the subsystem manually:

nvme disconnect -n nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
NQN:nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5 disconnected 1 controller(s)

I reconnect to the subsystem with a manual command:

nvme connect-all -t tcp -a 10.10.186.3

nvme list
Node          Generic     SN                    Model                    Namespace  Usage                Format       FW Rev
------------- ----------- --------------------- ------------------------ ---------- -------------------- ------------ --------
/dev/nvme0n2  /dev/ng0n2  81O3QJXiLzBDAAAAAAAH  NetApp ONTAP Controller  0x2        16.11 GB / 16.11 GB  4 KiB + 0 B  FFFFFFFF

And if we look at the subsystem:

nvme list-subsys
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme5 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme4 tcp traddr=10.10.186.3,trsvcid=4420,src_addr=10.10.186.33 live
 +- nvme3 tcp traddr=10.10.184.4,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme2 tcp traddr=10.10.186.4,trsvcid=4420,src_addr=10.10.186.33 live

As you can see, I have four paths.

Configuration details about multipath:

- in nova.conf:

[libvirt]
volume_use_multipath = True

- in cinder.conf:

[DEFAULT]
target_protocol = nvmet_tcp
...
[netapp-backend]
use_multipath_for_image_xfer = True
netapp_storage_protocol = nvme
...

- /sys/module/nvme_core/parameters/multipath:

cat /sys/module/nvme_core/parameters/multipath
Y

- nova-compute.log:

grep -i get_connector_properties /var/log/kolla/nova/nova-compute.log
2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 dsc: get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:115
2025-07-29 14:09:51.553 7 DEBUG os_brick.utils [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== get_connector_properties: return (30ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.10.52.161', 'host': 'pkc-dcp-cpt-03', *'multipath': True, 'enforce_multipath': True*, 'initiator': 'iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1', 'do_local_attach': False, 'nvme_hostid': '5ca8b6d2-aa7d-42d8-bf74-c18484fab68c', 'system uuid': '31343550-3939-5a43-4a44-305930304c48', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', *'nvme_native_multipath': True*, 'found_dsc': '', 'host_ips': ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204

- multipathd:

systemctl status multipathd.service
○ multipathd.service
     Loaded: masked (Reason: Unit multipathd.service is masked.)
     Active: inactive (dead)

If you can see any reason why OpenStack connects to the subsystem with only one path, please let me know!

Thanks
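As a side note, one way to double-check outside of OpenStack that all four portals answer discovery independently (a sketch using the portal addresses shown above; nvme-cli picks up the host NQN from /etc/nvme/hostnqn automatically):

for portal in 10.10.184.3 10.10.186.3 10.10.184.4 10.10.186.4; do
    nvme discover -t tcp -a "$portal" -s 4420
done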
Hello guys,

Some more information found in the nova-compute.log:

- it tries iSCSI:

2025-07-29 14:09:51.523 1222 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
2025-07-29 14:09:51.528 1222 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
2025-07-29 14:09:51.528 1222 DEBUG oslo.privsep.daemon [-] privsep: reply[90a51cdb-5701-4339-b059-fefb0b79b7a5]: (4, ('## DO NOT EDIT OR REMOVE THIS FILE!\n## If you remove this file, the iSCSI daemon will not start.\n## If you change the InitiatorName, existing access control lists\n## may reject this initiator. The InitiatorName must be unique\n## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.\nInitiatorName=iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1\n', '')) _call_back /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_privsep/daemon.py:503

- it tries lightos (???):

2025-07-29 14:09:51.552 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:135
2025-07-29 14:09:51.553 7 INFO os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Current host hostNQN nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 and IP(s) are ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']
2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:112
2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 dsc: get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:115

- then:

2025-07-29 14:09:51.553 7 DEBUG os_brick.utils [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== get_connector_properties: return (30ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.10.52.161', 'host': 'pkc-dcp-cpt-03', 'multipath': True, 'enforce_multipath': True, 'initiator': 'iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1', 'do_local_attach': False, 'nvme_hostid': '5ca8b6d2-aa7d-42d8-bf74-c18484fab68c', 'system uuid': '31343550-3939-5a43-4a44-305930304c48', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', 'nvme_native_multipath': True, 'found_dsc': '', 'host_ips': ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204
2025-07-29 14:09:51.554 7 DEBUG nova.virt.block_device [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] [instance: 3fcb3e36-1890-44f7-9c3c-283c05e91910] Updating existing volume attachment record: b81aea6e-f2ae-4781-8c2e-3b7f1606ba0d _volume_attach /var/lib/kolla/venv/lib/python3.12/site-packages/nova/virt/block_device.py:666
2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] ==> connect_volume: call "{'self': <os_brick.initiator.connectors.nvmeof.NVMeOFConnector object at 0x7cf65c576090>, 'connection_properties': {'target_nqn': 'nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5', 'host_nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', 'portals': [['10.10.184.3', 4420, 'tcp']], 'vol_uuid': '69da9918-7e84-4ee4-b7bb-9b50e3e6d739', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False, 'enforce_multipath': True}}" trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:177
2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Acquiring lock "connect_volume" by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:68
2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:73
2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Search controllers for *portals [('10.10.184.3', '4420', 'tcp', 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9')]* set_portals_controllers /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/nvmeof.py:410

Here we can see the portals variable, which is a list, but with only one record. Maybe the 4 paths should be listed here...

After this, os-brick gets the information and mounts the namespace:

2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Device found at /sys/class/nvme-fabrics/ctl/nvme5/nvme0c5n3, using /dev/nvme0n3 get_device_by_property /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/nvmeof.py:257
2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Lock "connect_volume" "released" by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" :: held 0.010s inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:87
2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== connect_volume: return (9ms) {'type': 'block', 'path': '/dev/nvme0n3'} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204
2025-07-29 14:09:53.690 7 DEBUG nova.virt.libvirt.volume.nvme [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] [instance: 3fcb3e36-1890-44f7-9c3c-283c05e91910] Connecting NVMe volume with device_info {'type': 'block', 'path': '/dev/nvme0n3'} connect_volume /var/lib/kolla/venv/lib/python3.12/site-packages/nova/virt/libvirt/volume/nvme.py:44

Thanks
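A hedged one-liner to pull the portals list straight out of those connect_volume entries (same Kolla log path as above):

grep "connect_volume: call" /var/log/kolla/nova/nova-compute.log | grep -o "'portals': \[\[[^]]*\]\]"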
On Wed, Jul 30, 2025 at 3:15 PM Vincent Godin <vince.mlist@gmail.com> wrote:
Hello guys,
Some more informations found in the nova-compute.log :
-try iscsi
2025-07-29 14:09:51.523 1222 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349 2025-07-29 14:09:51.528 1222 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372 2025-07-29 14:09:51.528 1222 DEBUG oslo.privsep.daemon [-] privsep: reply[90a51cdb-5701-4339-b059-fefb0b79b7a5]: (4, ('## DO NOT EDIT OR REMOVE THIS FILE!\n## If you remove this file, the iSCSI daemon will not start.\n## If you change the InitiatorName, existing access control lists\n## may reject this initiator. The InitiatorName must be unique\n## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.\nInitiatorName=iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1\n', '')) _call_back /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_privsep/daemon.py:503
-try lightos ???
2025-07-29 14:09:51.552 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:135 2025-07-29 14:09:51.553 7 INFO os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Current host hostNQN nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 and IP(s) are ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660'] 2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:112 2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 dsc: get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:115
-then
2025-07-29 14:09:51.553 7 DEBUG os_brick.utils [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== get_connector_properties: return (30ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.10.52.161', 'host': 'pkc-dcp-cpt-03', 'multipath': True, 'enforce_multipath': True, 'initiator': 'iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1', 'do_local_attach': False, 'nvme_hostid': '5ca8b6d2-aa7d-42d8-bf74-c18484fab68c', 'system uuid': '31343550-3939-5a43-4a44-305930304c48', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', 'nvme_native_multipath': True, 'found_dsc': '', 'host_ips': ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204
'multipath': True, 'enforce_multipath': True

This shows that the multipath configuration is set correctly. It would be good to search for this log entry[1] in the cinder-volume logs and check the *portals* field to see how many portals the NetApp NVMe driver returns:

*Initialize connection info:*

[1] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/netapp...
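A sketch of that search; the cinder-volume log path below is an assumption based on the Kolla layout used for the Nova logs earlier in the thread:

grep -i "Initialize connection info" /var/log/kolla/cinder/cinder-volume.log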
2025-07-29 14:09:51.554 7 DEBUG nova.virt.block_device [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] [instance: 3fcb3e36-1890-44f7-9c3c-283c05e91910] Updating existing volume attachment record: b81aea6e-f2ae-4781-8c2e-3b7f1606ba0d _volume_attach /var/lib/kolla/venv/lib/python3.12/site-packages/nova/virt/block_device.py:666 2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] ==> connect_volume: call "{'self': <os_brick.initiator.connectors.nvmeof.NVMeOFConnector object at 0x7cf65c576090>, 'connection_properties': {'target_nqn': 'nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5', 'host_nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', 'portals': [['10.10.184.3', 4420, 'tcp']], 'vol_uuid': '69da9918-7e84-4ee4-b7bb-9b50e3e6d739', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False, 'enforce_multipath': True}}" trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:177
'portals': [['10.10.184.3', 4420, 'tcp']]

Here we can see that only one portal is returned by the NetApp driver.
2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Acquiring lock "connect_volume" by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:68 2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:73 2025-07-29 14:09:53.680 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Search controllers for *portals [('10.10.184.3', '4420', 'tcp', 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9')]* set_portals_controllers /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/nvmeof.py:410
Here we can see that the *portals* variable is a list, but it contains only one entry. Maybe all four paths should appear here ...
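For comparison, if the driver returned every path, the portals list would presumably look something like this (the addresses are taken from the manual nvme list-subsys output further down; this is only an illustration, not what the driver actually logged):

'portals': [['10.10.184.3', 4420, 'tcp'], ['10.10.186.3', 4420, 'tcp'], ['10.10.184.4', 4420, 'tcp'], ['10.10.186.4', 4420, 'tcp']]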
After this, os-brick gets the information and mounts the namespace.
2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Device found at /sys/class/nvme-fabrics/ctl/nvme5/nvme0c5n3, using /dev/nvme0n3 get_device_by_property /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/nvmeof.py:257 2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.base [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] Lock "connect_volume" "released" by "os_brick.initiator.connectors.nvmeof.NVMeOFConnector.connect_volume" :: held 0.010s inner /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/base.py:87 2025-07-29 14:09:53.690 7 DEBUG os_brick.initiator.connectors.nvmeof [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== connect_volume: return (9ms) {'type': 'block', 'path': '/dev/nvme0n3'} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204 2025-07-29 14:09:53.690 7 DEBUG nova.virt.libvirt.volume.nvme [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] [instance: 3fcb3e36-1890-44f7-9c3c-283c05e91910] Connecting NVMe volume with device_info {'type': 'block', 'path': '/dev/nvme0n3'} connect_volume /var/lib/kolla/venv/lib/python3.12/site-packages/nova/virt/libvirt/volume/nvme.py:44
Thanks
On Tue, Jul 29, 2025 at 3:53 PM, Vincent Godin <vince.mlist@gmail.com> wrote:
Hello,
Here are some of the results on the host.
An instance is launched by OpenStack on the compute node.
nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            81O3QJXiLzBDAAAAAAAH NetApp ONTAP Controller                  0x2        16.11 GB / 16.11 GB        4 KiB + 0 B      FFFFFFFF
If we have a look at the subsystem:
nvme list-subsys
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme0 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live
I have only one path. I disconnect the subsystem manually:
nvme disconnect -n nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
NQN:nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5 disconnected 1 controller(s)
I reconnect to the subsystem with a manual command
nvme connect-all -t tcp -a 10.10.186.3
nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n2          /dev/ng0n2            81O3QJXiLzBDAAAAAAAH NetApp ONTAP Controller                  0x2        16.11 GB / 16.11 GB        4 KiB + 0 B      FFFFFFFF
And if we look at the subsystem
nvme list-subsys
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme5 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme4 tcp traddr=10.10.186.3,trsvcid=4420,src_addr=10.10.186.33 live
 +- nvme3 tcp traddr=10.10.184.4,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme2 tcp traddr=10.10.186.4,trsvcid=4420,src_addr=10.10.186.33 live
As you can see, I have four paths.
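If it helps to confirm that these four controllers expose the same namespace merged into a single block device, a verbose listing can also be used (assuming the installed nvme-cli supports the -v flag):

nvme list -v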
Configuration details about multipath:
- in nova.conf
  [libvirt]
  volume_use_multipath = True

- in cinder.conf
  [DEFAULT]
  target_protocol = nvmet_tcp
  ...
  [netapp-backend]
  use_multipath_for_image_xfer = True
  netapp_storage_protocol = nvme
  ...
/sys/module/nvme_core/parameters/multipath
cat /sys/module/nvme_core/parameters/multipath
Y
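For reference, if this parameter had been N, native NVMe multipath can usually be enabled with a module option and an initramfs rebuild (a generic sketch for Ubuntu, not something that was needed here; the file name is arbitrary):

echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme-multipath.conf
sudo update-initramfs -u
sudo reboot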
nova-compute.log
grep -i get_connector_properties /var/log/kolla/nova/nova-compute.log
2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 dsc: get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:115
2025-07-29 14:09:51.553 7 DEBUG os_brick.utils [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== get_connector_properties: return (30ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.10.52.161', 'host': 'pkc-dcp-cpt-03', *'multipath': True, 'enforce_multipath': True*, 'initiator': 'iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1', 'do_local_attach': False, 'nvme_hostid': '5ca8b6d2-aa7d-42d8-bf74-c18484fab68c', 'system uuid': '31343550-3939-5a43-4a44-305930304c48', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', *'nvme_native_multipath': True*, 'found_dsc': '', 'host_ips': ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204
multipathd
systemctl status multipathd.service
○ multipathd.service
     Loaded: masked (Reason: Unit multipathd.service is masked.)
     Active: inactive (dead)
If you can see any reason why OpenStack connects to the subsystem with only one path, please let me know!
Thanks
On Mon, Jul 28, 2025 at 10:35 AM, Rajat Dhasmana <rdhasman@redhat.com> wrote:
Also, to verify that the right value is being passed, we can check for the following log entries in the nova-compute logs:
<== get_connector_properties
==> get_connector_properties
The dict should contain 'multipath': True if the value is configured correctly.
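For example, something like this should surface the returned dict (the log path below assumes a kolla-ansible layout; adjust it to your install):

grep '<== get_connector_properties' /var/log/kolla/nova/nova-compute.log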
On Mon, Jul 28, 2025 at 2:03 PM Rajat Dhasmana <rdhasman@redhat.com> wrote:
On Sun, Jul 27, 2025 at 9:25 PM Vincent Godin <vince.mlist@gmail.com> wrote:
Hello Rajat,
This parameter, */sys/module/nvme_core/parameters/multipath*, is set to Y. We can manually mount volumes with multipath from the same host. The real question is how to configure it the right way in OpenStack Cinder and/or Nova.
Good to know all the core things are in place. To configure it in OpenStack (I'm referring to a devstack environment): you mentioned setting ``volume_use_multipath`` in nova.conf; however, in a devstack environment there is a separate nova-cpu.conf file specifically for the compute service. Did you try setting the config option there?
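As a minimal sketch, assuming the default devstack layout (the exact path of nova-cpu.conf may vary), the option would go under the libvirt section of the compute service's config file, followed by a restart of the nova-compute (n-cpu) service:

# /etc/nova/nova-cpu.conf (devstack compute service config; path is an assumption)
[libvirt]
volume_use_multipath = True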
When I request a volume after an instance deployment, the command *nvme list-subsys* shows only one path (one namespace). When I do an nvme discover, I get four.
Thanks for your help
On Fri, Jul 25, 2025 at 10:40 PM, Rajat Dhasmana <rdhasman@redhat.com> wrote:
Hi Vincent,
To set the context right: currently os-brick only supports NVMe native multipathing (ANA), not dm-multipath/multipathd (as iSCSI and FC do). You can use this command to see whether your host supports NVMe native multipathing (which you mentioned your OS already does): *cat /sys/module/nvme_core/parameters/multipath*
On Fri, Jul 25, 2025 at 6:42 PM Sean Mooney <smooney@redhat.com> wrote:
> > https://github.com/openstack/os-brick/commit/8d919696a9f1b1361f00aac7032647b... > might be relevent > > although from a nova point of view you shoudl set > > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... > > to true if you want to enable multipath for volumes in general > > and > > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... > if you want to enforece its usage. > > > there are some other multip path partemr like > > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.ise... > > for iSER volume. but i dont belive nova has any options related to > for > NVMEoF backends. > > > that os-brick change seam to be adressign the issue you reported > below > by removign the dep on multipathd > > but it depend on this cinder change too > https://review.opendev.org/c/openstack/cinder/+/934011 > > https://bugs.launchpad.net/os-brick/+bug/2085013 > > all fo this happend in the last 7 months or so so it should be in > 2025.1 > > > unfortunetly i do not see any documeation change related to this to > explain how to properly configure this end to end > > > > On 25/07/2025 13:00, Vincent Godin wrote: > > Hello guys, > > > > Openstack: 2025.1 > > OS: Ubuntu 24.04 > > > > We are having trouble configuring multipath on a NetApp backend > using > > NVMe over TCP. > > After reading numerous articles on this issue, we concluded that > it > > was necessary to operate in native multipath mode, which is > enabled by > > default on Ubuntu 24.04, and that it is no longer necessary to > keep > > the multipathd service active for this to work. > > We ran inconclusive tests by setting the > > "use_multipath_for_image_xfer" and > "enforce_multipath_for_image_xfer" > > variables to true in cinder.conf. >
These config options only come into use during image transfer operations, for example when creating a volume from an image.
> > We also set the "volume_use_multipath variable" to true in the > libvirt > > section of nova.conf, but without success. > > >
This is the config option to set if you want to use multipathing while attaching volumes to Nova VMs.
> > After creating an instance and a volume on a server with > Openstack, > > when querying the subsystem with the nvme command, we only get a > > single path. >
Which command did you use for this query? *nvme list-subsys* accurately tells how many paths are associated with a subsystem.
This is a sample output from my devstack environment using LVM+nvme_tcp configurations.
*nvme list-subsys*
nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:122a7426-007d-944a-9431-bb221a8410f9
               iopolicy=numa
\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live
You can see that the two controllers, nvme0 and nvme1, are reachable via the paths 127.0.0.1 and 10.0.2.15 respectively.
*nvme list*
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            ee02a5d5c380ac70c8e1 Linux                                    0xa        1.07 GB / 1.07 GB          512 B + 0 B      6.8.0-47
You can also see that the NVMe kernel driver has combined the two paths into a single multipath device, *nvme0n1*.
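If you also want to check which I/O policy the native multipath layer is using for the subsystem, it is exposed through sysfs (the subsystem name nvme-subsys0 matches the output above and may differ on other hosts):

cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
numa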
Hope that helps.
Thanks Rajat Dhasmana
> But when we query the backend from the same server with "nvme > > discover," we do get four paths. > > > > In the os-brick code, we see that it checks a Multipath variable > that > > must be set to true... > > > > I think this is purely a configuration issue, but the examples > given > > by various manufacturers (NetApp, PureStorage, etc.) don't > indicate > > anything in particular. > > > > So, does anyone know the configuration to apply for multipath to > work > > (cinder/nova)? > > > > Thank you. > >
participants (3)
- Rajat Dhasmana
- Sean Mooney
- Vincent Godin