Hello,

Here are some of the results on the host. An instance is launched by OpenStack on the compute.

nvme list

Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            81O3QJXiLzBDAAAAAAAH NetApp ONTAP Controller                  0x2        16.11 GB / 16.11 GB        4 KiB + 0 B      FFFFFFFF

If we have a look at the subsystem:

nvme list-subsys

nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme0 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live

I have only one path.

I disconnect the subsystem manually:

nvme disconnect -n nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
NQN:nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5 disconnected 1 controller(s)

I reconnect to the subsystem with a manual command:

nvme connect-all -t tcp -a 10.10.186.3

nvme list

Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n2          /dev/ng0n2            81O3QJXiLzBDAAAAAAAH NetApp ONTAP Controller                  0x2        16.11 GB / 16.11 GB        4 KiB + 0 B      FFFFFFFF

And if we look at the subsystem:

nvme list-subsys

nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.ec2c63655c3d11f0a40ad039eaba99f2:subsystem.openstack-79f1de4a-6645-4b47-9377-f06db6c2e0b5
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9
               iopolicy=round-robin
\
 +- nvme5 tcp traddr=10.10.184.3,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme4 tcp traddr=10.10.186.3,trsvcid=4420,src_addr=10.10.186.33 live
 +- nvme3 tcp traddr=10.10.184.4,trsvcid=4420,src_addr=10.10.184.33 live
 +- nvme2 tcp traddr=10.10.186.4,trsvcid=4420,src_addr=10.10.186.33 live

As you can see, I have four paths.

Configuration details about multipath:

- in nova.conf:

[libvirt]
volume_use_multipath = True

- in cinder.conf:

[DEFAULT]
target_protocol = nvmet_tcp
...

[netapp-backend]
use_multipath_for_image_xfer = True
netapp_storage_protocol = nvme
...
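For comparison, the discovery log fetched from a single portal should already advertise all four portals, which makes the mismatch with the attached path count easy to see (a minimal sketch; addresses are taken from the outputs above):

# Ask one portal for the full discovery log (expects 4 portal entries here)
nvme discover -t tcp -a 10.10.184.3 -s 4420

# Count the controllers actually connected to the subsystem
nvme list-subsys | grep -c '+- nvme'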
/sys/module/nvme_core/parameters/multipath:

cat /sys/module/nvme_core/parameters/multipath
Y

nova-compute.log:

grep -i get_connector_properties /var/log/kolla/nova/nova-compute.log

2025-07-29 14:09:51.553 7 DEBUG os_brick.initiator.connectors.lightos [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9 dsc: get_connector_properties /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:115

2025-07-29 14:09:51.553 7 DEBUG os_brick.utils [None req-faf2b0ca-0709-4a70-8302-fa90ad293fd3 4e2ddaf17ee747f2a1f03a392943f80a cb513debb0834ec5b6588356a960bad9 - - default default] <== get_connector_properties: return (30ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.10.52.161', 'host': 'pkc-dcp-cpt-03', *'multipath': True, 'enforce_multipath': True*, 'initiator': 'iqn.2004-10.com.ubuntu:01:d0bb7aa9bcf1', 'do_local_attach': False, 'nvme_hostid': '5ca8b6d2-aa7d-42d8-bf74-c18484fab68c', 'system uuid': '31343550-3939-5a43-4a44-305930304c48', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:629788a4-04c6-547c-9121-8d7a39c17fe9', *'nvme_native_multipath': True*, 'found_dsc': '', 'host_ips': ['10.20.128.33', '10.10.184.33', '10.10.186.33', '10.10.52.161', '10.10.22.161', '10.234.2.161', '10.10.50.161', '172.17.0.1', 'fe80::7864:3eff:fe13:5e1f', 'fe80::fc16:3eff:fe7f:3430', 'fe80::4c20:48ff:fe0f:2660']} trace_logging_wrapper /var/lib/kolla/venv/lib/python3.12/site-packages/os_brick/utils.py:204

multipathd:

systemctl status multipathd.service
○ multipathd.service
     Loaded: masked (Reason: Unit multipathd.service is masked.)
     Active: inactive (dead)

If you can see any reason why OpenStack connects to the subsystem with only one path, please let me know!

Thanks

On Mon, Jul 28, 2025 at 10:35 AM, Rajat Dhasmana <rdhasman@redhat.com> wrote:
Also, to verify that the right value is being passed, we can check for the following log entries in the nova-compute logs:
==> get_connector_properties
<== get_connector_properties
The returned dict should contain 'multipath': True if the value is configured correctly.
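For example, a quick way to pull those entries out of the log (a sketch, assuming a kolla deployment where nova-compute logs to /var/log/kolla/nova/ as shown earlier in this thread):

# Show the most recent connector properties entry/return pair
grep -E '(==>|<==) get_connector_properties' /var/log/kolla/nova/nova-compute.log | tail -n 2

# The "<==" return line should include 'multipath': True (and, for NVMe-oF,
# 'nvme_native_multipath': True) if the value is configured correctly.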
On Mon, Jul 28, 2025 at 2:03 PM Rajat Dhasmana <rdhasman@redhat.com> wrote:
On Sun, Jul 27, 2025 at 9:25 PM Vincent Godin <vince.mlist@gmail.com> wrote:
Hello Rajat,
This parameter, */sys/module/nvme_core/parameters/multipath*, is set to Y. We can manually mount volumes with multipath from the same host. The real question is how to configure it the right way in OpenStack Cinder and/or Nova.
Good to know all the core things are in place. As for configuring it in OpenStack (I'm referring to a devstack environment): you mentioned setting ``volume_use_multipath`` in nova.conf; however, in a devstack environment there is a separate nova-cpu.conf file specifically for the compute service. Did you try setting the config option there?
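A minimal sketch of that change (assuming a default devstack layout where the compute service reads /etc/nova/nova-cpu.conf):

# /etc/nova/nova-cpu.conf
[libvirt]
volume_use_multipath = True

Then restart the compute service (e.g. systemctl restart devstack@n-cpu) so the option is picked up.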
When I request a volume after an instance deployment, the command *nvme list-subsys* shows only one path (one namespace). When I do an nvme discover, I have four.
Thanks for your help
On Fri, Jul 25, 2025 at 10:40 PM, Rajat Dhasmana <rdhasman@redhat.com> wrote:
Hi Vincent,
To set the context right: currently os-brick only supports NVMe native multipathing (ANA), not dm-multipath/multipathd (as iSCSI and FC do). You can use this command to see whether your host supports NVMe native multipathing (which you mentioned your OS already does): *cat /sys/module/nvme_core/parameters/multipath*
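For reference, here is the quick check plus one way to turn it on if it reports N (a sketch for Ubuntu; nvme_core is the in-tree driver's module name):

# Should print "Y" when native NVMe multipathing (ANA) is enabled
cat /sys/module/nvme_core/parameters/multipath

# If it prints "N": set the module option, rebuild the initramfs, and reboot
# (equivalently, pass nvme_core.multipath=Y on the kernel command line)
echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme-multipath.conf
sudo update-initramfs -u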
On Fri, Jul 25, 2025 at 6:42 PM Sean Mooney <smooney@redhat.com> wrote:
https://github.com/openstack/os-brick/commit/8d919696a9f1b1361f00aac7032647b... might be relevant
Although from a Nova point of view, you should set
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol...
to true if you want to enable multipath for volumes in general
and
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... if you want to enforce its usage.
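Put together, that would look like the following sketch on the compute host (the links above are truncated, so the second option name is assumed to be [libvirt]/volume_enforce_multipath; volume_use_multipath is confirmed by the thread):

# nova.conf, [libvirt] section
[libvirt]
volume_use_multipath = True
# assumed option name: fail the attach rather than fall back to a single path
volume_enforce_multipath = True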
There are some other multipath parameters, like
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.ise...
for iSER volumes, but I don't believe Nova has any options specific to NVMe-oF backends.
That os-brick change seems to be addressing the issue you reported below by removing the dependency on multipathd,
but it depends on this Cinder change too: https://review.opendev.org/c/openstack/cinder/+/934011
https://bugs.launchpad.net/os-brick/+bug/2085013
All of this happened in the last 7 months or so, so it should be in 2025.1.
Unfortunately, I do not see any documentation change related to this that explains how to properly configure it end to end.
On 25/07/2025 13:00, Vincent Godin wrote:
Hello guys,
OpenStack: 2025.1
OS: Ubuntu 24.04
We are having trouble configuring multipath on a NetApp backend using NVMe over TCP. After reading numerous articles on this issue, we concluded that it was necessary to operate in native multipath mode, which is enabled by default on Ubuntu 24.04, and that it is no longer necessary to keep the multipathd service active for this to work. We ran inconclusive tests by setting the "use_multipath_for_image_xfer" and "enforce_multipath_for_image_xfer" variables to true in cinder.conf.
These config options come into use when Cinder itself attaches a volume for image transfer, i.e., when creating a volume from an image or an image from a volume.
We also set the "volume_use_multipath" variable to true in the libvirt section of nova.conf, but without success.
This is the config option if you want to use multipathing while attaching volumes to nova VMs.
After creating an instance and a volume on a server with OpenStack, when querying the subsystem with the nvme command, we only get a single path.
Which command did you use for this query? *nvme list-subsys* accurately tells how many paths are associated with a subsystem.
This is a sample output from my devstack environment using LVM+nvme_tcp configurations.
*nvme list-subsys*

nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:122a7426-007d-944a-9431-bb221a8410f9
               iopolicy=numa
\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live
You can see that the namespace is reachable through two controllers (paths), nvme0 and nvme1, via 127.0.0.1 and 10.0.2.15 respectively.
*nvme list*

Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            ee02a5d5c380ac70c8e1 Linux                                    0xa        1.07 GB / 1.07 GB          512 B + 0 B      6.8.0-47
You can also see that the nvme kernel driver combined the two paths into a single multipath device, *nvme0n1*.
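To confirm which controllers back a given multipath node, you can pass the device to list-subsys and inspect the subsystem's I/O policy in sysfs (a sketch; device and subsystem names follow the output above):

# List only the subsystem and paths serving this namespace
nvme list-subsys /dev/nvme0n1

# The I/O policy applied across those paths (numa, round-robin, ...)
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy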
Hope that helps.
Thanks
Rajat Dhasmana
discover," we do get four paths.
In the os-brick code, we see that it checks a Multipath variable
But when we query the backend from the same server with "nvme that
must be set to true...
I think this is purely a configuration issue, but the examples given by various manufacturers (NetApp, PureStorage, etc.) don't indicate anything in particular.
So, does anyone know the configuration to apply for multipath to work (cinder/nova)?
Thank you.