Also, to verify that the right value is being passed, we can check for the following log entries in the nova-compute logs:

<== get_connector_properties
==> get_connector_properties

The returned dict should contain 'multipath': True if the value is configured correctly.
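As a quick illustration, a sketch of grepping for that entry. The sample log line and the log path are assumptions; adjust them for your deployment (devstack, for example, logs to the journal, so journalctl -u devstack@n-cpu may be what you need instead).

```shell
# Demo with a fabricated sample line; in practice point LOG at your
# real nova-compute log file (the format below is an assumption).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
DEBUG os_brick.utils [req-1] <== get_connector_properties: return (5ms) {'multipath': True, 'ip': '10.0.2.15'}
EOF
# Extract just the multipath flag from the connector properties dict.
result=$(grep -o "'multipath': [A-Za-z]*" "$LOG")
echo "$result"
rm -f "$LOG"
```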

On Mon, Jul 28, 2025 at 2:03 PM Rajat Dhasmana <rdhasman@redhat.com> wrote:


On Sun, Jul 27, 2025 at 9:25 PM Vincent Godin <vince.mlist@gmail.com> wrote:
Hello Rajat,

This parameter, /sys/module/nvme_core/parameters/multipath, is set to yes.
We can manually mount volumes with multipath from the same host. The real question is how to configure it correctly in OpenStack Cinder and/or Nova.

Good to know all the core things are in place.
To configure it in OpenStack (I'm referring to a devstack environment): you mentioned setting ``volume_use_multipath`` in nova.conf; however,
in a devstack environment there is a separate nova-cpu.conf file specifically for the compute service. Did you try setting the config option there?
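For reference, a minimal sketch of what that would look like (the path is typical for devstack but may differ, and the compute service needs a restart to pick the change up):

```ini
# /etc/nova/nova-cpu.conf -- devstack's compute-service config
[libvirt]
volume_use_multipath = true
```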
 
When I query a volume after an instance deployment with the command nvme list-subsys, I have only one path (one namespace).
When I do an nvme discover, I have four.

Thanks for your help

Le ven. 25 juil. 2025 à 22:40, Rajat Dhasmana <rdhasman@redhat.com> a écrit :
Hi Vincent,

To set the context right: currently os-brick only supports NVMe native multipathing (ANA), not dm-multipath/multipathd (as iSCSI and FC do).
You can use this command to see if your host supports NVMe native multipathing (which you mentioned your OS already supports):
cat /sys/module/nvme_core/parameters/multipath
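For instance, a small sketch that reads the parameter and reports its state; the kernel prints Y when native multipathing is enabled and N when it is not:

```shell
# Check whether the kernel's NVMe native multipathing (ANA) is enabled.
param=/sys/module/nvme_core/parameters/multipath
if [ -r "$param" ]; then
    state=$(cat "$param")   # Y = enabled, N = disabled
else
    state=unknown           # nvme_core module not loaded on this host
fi
echo "nvme_core native multipath: $state"
```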

On Fri, Jul 25, 2025 at 6:42 PM Sean Mooney <smooney@redhat.com> wrote:
https://github.com/openstack/os-brick/commit/8d919696a9f1b1361f00aac7032647b5e1656082
might be relevant.

Although from a nova point of view you should set
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.volume_use_multipath

to true if you want to enable multipath for volumes in general,

and
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.volume_enforce_multipath
if you want to enforce its usage.
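Concretely, a nova.conf sketch with both options (they live in the [libvirt] section; restart nova-compute after changing them):

```ini
[libvirt]
# use multipath for volume attachments when available
volume_use_multipath = true
# fail the attachment rather than silently fall back to a single path
volume_enforce_multipath = true
```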


There are some other multipath parameters like
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.iser_use_multipath

for iSER volumes, but I don't believe nova has any options specifically for
NVMe-oF backends.


That os-brick change seems to be addressing the issue you reported below
by removing the dependency on multipathd,

but it depends on this cinder change too:
https://review.opendev.org/c/openstack/cinder/+/934011

https://bugs.launchpad.net/os-brick/+bug/2085013

All of this happened in the last 7 months or so, so it should be in 2025.1.


Unfortunately I do not see any documentation change related to this that
explains how to properly configure this end to end.



On 25/07/2025 13:00, Vincent Godin wrote:
> Hello guys,
>
> Openstack: 2025.1
> OS: Ubuntu 24.04
>
> We are having trouble configuring multipath on a NetApp backend using
> NVMe over TCP.
> After reading numerous articles on this issue, we concluded that it
> was necessary to operate in native multipath mode, which is enabled by
> default on Ubuntu 24.04, and that it is no longer necessary to keep
> the multipathd service active for this to work.
> We ran inconclusive tests by setting the
> "use_multipath_for_image_xfer" and "enforce_multipath_for_image_xfer"
> variables to true in cinder.conf.

These config options only come into use when Cinder itself copies data between an image and a volume (e.g. creating a volume from an image); they don't affect volumes attached to instances.
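For reference, a minimal cinder.conf sketch (these can also be set per backend section; placing them in [DEFAULT] here is an assumption based on a generic setup):

```ini
# cinder.conf -- only affects image <-> volume data copies,
# e.g. creating a volume from a Glance image, not attachments to VMs
[DEFAULT]
use_multipath_for_image_xfer = true
enforce_multipath_for_image_xfer = true
```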
 
> We also set the "volume_use_multipath variable" to true in the libvirt
> section of nova.conf, but without success.
>

This is the config option if you want to use multipathing while attaching volumes to nova VMs.
 
> After creating an instance and a volume on a server with Openstack,
> when querying the subsystem with the nvme command, we only get a
> single path.

Which command did you use for this query?
nvme list-subsys accurately shows how many paths are associated with a subsystem.

This is a sample output from my devstack environment using LVM+nvme_tcp configurations.

nvme list-subsys
nvme-subsys0 - NQN=nqn.nvme-subsystem-1-465ef04f-a31a-4a18-92b1-33a72e811b91
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:122a7426-007d-944a-9431-bb221a8410f9
               iopolicy=numa
\
 +- nvme1 tcp traddr=10.0.2.15,trsvcid=4420,src_addr=10.0.2.15 live
 +- nvme0 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live

You can see two controllers (paths), nvme0 and nvme1, reaching the subsystem via 127.0.0.1 and 10.0.2.15 respectively.

nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            ee02a5d5c380ac70c8e1 Linux                                    0xa          1.07  GB /   1.07  GB    512   B +  0 B   6.8.0-47

You can also see that the nvme kernel driver combined the two paths into a single multipath device, nvme0n1.
 
Hope that helps.

Thanks
Rajat Dhasmana

> But when we query the backend from the same server with "nvme
> discover," we do get four paths.
>
> In the os-brick code, we see that it checks a Multipath variable that
> must be set to true...
>
> I think this is purely a configuration issue, but the examples given
> by various manufacturers (NetApp, PureStorage, etc.) don't indicate
> anything in particular.
>
> So, does anyone know the configuration to apply for multipath to work
> (cinder/nova)?
>
> Thank you.