Hi Vincent,

To set the context right: os-brick currently only supports NVMe native multipathing (ANA), not dm-multipath/multipathd (as iSCSI and FC do). You can use this command to check whether your host supports NVMe native multipathing (which you mentioned your OS already does):

*cat /sys/module/nvme_core/parameters/multipath*

On Fri, Jul 25, 2025 at 6:42 PM Sean Mooney <smooney@redhat.com> wrote:
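If that parameter reports N, native NVMe multipathing can usually be turned on via a kernel module option; a sketch for Ubuntu (the parameter name comes from the sysfs path above, and the initramfs step is my assumption for getting it applied at boot, so verify for your kernel):

```
# /etc/modprobe.d/nvme_core.conf
# Enable ANA/native multipathing in the nvme_core driver.
options nvme_core multipath=Y
```

followed by regenerating the initramfs (sudo update-initramfs -u) and rebooting.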
https://github.com/openstack/os-brick/commit/8d919696a9f1b1361f00aac7032647b... might be relevant
although from a nova point of view you should set
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol...
to true if you want to enable multipath for volumes in general
and
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.vol... if you want to enforce its usage.
there are some other multipath parameters like
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.ise...
for iSER volumes, but I don't believe nova has any options specific to NVMe-oF backends.
that os-brick change seems to be addressing the issue you reported below by removing the dependency on multipathd
but it depends on this cinder change too: https://review.opendev.org/c/openstack/cinder/+/934011
https://bugs.launchpad.net/os-brick/+bug/2085013
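Putting Sean's nova pointers into a config sketch: *volume_use_multipath* is named elsewhere in this thread, while *volume_enforce_multipath* is my reading of the truncated second link, so double-check both against the nova configuration reference:

```ini
[libvirt]
# Use multipath when attaching volumes to instances.
volume_use_multipath = true
# Fail the attachment instead of silently falling back to a single
# path (assumed option name -- verify in the nova config reference).
volume_enforce_multipath = true
```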
all of this happened in the last 7 months or so, so it should be in 2025.1
unfortunately I do not see any documentation change related to this explaining how to properly configure it end to end
On 25/07/2025 13:00, Vincent Godin wrote:
Hello guys,
OpenStack: 2025.1
OS: Ubuntu 24.04
We are having trouble configuring multipath on a NetApp backend using NVMe over TCP. After reading numerous articles on this issue, we concluded that it was necessary to operate in native multipath mode, which is enabled by default on Ubuntu 24.04, and that it is no longer necessary to keep the multipathd service active for this to work. We ran inconclusive tests by setting the "use_multipath_for_image_xfer" and "enforce_multipath_for_image_xfer" variables to true in cinder.conf.
These config options come into use when you are creating a volume from an image (or an image from a volume); they control whether cinder attaches volumes via multipath during image transfer operations.
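To illustrate, a minimal cinder.conf sketch for those two options (the backend section name is just an example; these go in your configured backend section):

```ini
[netapp-nvme-tcp]
# Attach volumes via multipath during image <-> volume transfers.
use_multipath_for_image_xfer = true
# Fail the transfer rather than fall back to a single path.
enforce_multipath_for_image_xfer = true
```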
We also set the "volume_use_multipath variable" to true in the libvirt section of nova.conf, but without success.
This is the config option if you want to use multipathing while attaching volumes to nova VMs.
After creating an instance and a volume on a server with Openstack, when querying the subsystem with the nvme command, we only get a single path.
Which command did you use for this query? *nvme list-subsys* reports accurately how many paths are associated with a subsystem. This is sample output from my devstack environment using an LVM + nvme_tcp configuration:

*nvme list-subsys*

nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:122a7426-007d-944a-9431-bb221a8410f9
               iopolicy=numa
\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live

You can see that the two controllers, nvme1 and nvme0, reach the subsystem via the paths 10.0.2.15 and 127.0.0.1 respectively.

*nvme list*

Node          Generic     SN                    Model  Namespace  Usage              Format       FW Rev
------------- ----------- --------------------- ------ ---------- ------------------ ------------ --------
/dev/nvme0n1  /dev/ng0n1  ee02a5d5c380ac70c8e1  Linux  0xa        1.07 GB / 1.07 GB  512 B + 0 B  6.8.0-47

You can also see that the nvme kernel driver combined the two paths into a single multipath device, *nvme0n1*.

Hope that helps.

Thanks
Rajat Dhasmana
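As a quick sanity check, that path counting can be scripted; a minimal sketch (my own helper, not part of os-brick or nvme-cli) that counts controller paths reported as "live" in saved *nvme list-subsys* output:

```python
import re


def count_live_paths(list_subsys_output: str) -> int:
    """Count controller paths marked 'live' in `nvme list-subsys` output."""
    count = 0
    for line in list_subsys_output.splitlines():
        # Path lines look like:
        #   +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
        if re.search(r'\+-\s+nvme\d+\s+\S+.*\blive\b', line):
            count += 1
    return count


sample = """\
nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
\\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live
"""

print(count_live_paths(sample))  # prints 2 -> multipathing is working
```

Two or more live paths per subsystem means native multipathing is doing its job; one path for a multi-portal backend points at a configuration problem.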
But when we query the backend from the same server with "nvme discover", we do get four paths.
In the os-brick code, we see that it checks a Multipath variable that must be set to true...
I think this is purely a configuration issue, but the examples given by various manufacturers (NetApp, PureStorage, etc.) don't indicate anything in particular.
So, does anyone know the configuration to apply for multipath to work (cinder/nova)?
Thank you.