Using infiniband for openstack network communications

Moshe Levi moshele at
Mon Jun 6 20:48:56 UTC 2022

If you are using a Mellanox (NVIDIA) NIC with the inbox driver, the default mode is IPoIB Enhanced mode [1], which adds acceleration support for IPoIB.

[1] -
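
In case it is useful, below is a minimal Python sketch (not an NVIDIA tool; it assumes the IPoIB interface is named ib0, adjust as needed) that just reads what the inbox driver exposes in sysfs: the link-layer type, the IPoIB transport mode (datagram or connected), operational state and MTU. It is only a quick inspection helper and does not query the Enhanced IPoIB setting itself.

from pathlib import Path

IFACE = "ib0"  # assumption: adjust to your IPoIB interface name

def read_attr(name):
    path = Path("/sys/class/net") / IFACE / name
    return path.read_text().strip() if path.exists() else "n/a"

print(f"{IFACE} link type : {read_attr('type')}")   # 32 = ARPHRD_INFINIBAND
print(f"{IFACE} ipoib mode: {read_attr('mode')}")   # datagram or connected
print(f"{IFACE} state     : {read_attr('operstate')}")
print(f"{IFACE} mtu       : {read_attr('mtu')}")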

From: Dmitriy Rabotyagov <noonedeadpunk at>
Sent: Monday, June 6, 2022 4:52 PM
Cc: openstack-discuss <openstack-discuss at>
Subject: Re: Using infiniband for openstack network communications

Ah, ok.

Well, you can use InfiniBand for the control plane, but only via IPoIB. There's no RDMA support, if that's what you're asking.
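
To make that concrete, here is a hedged little sketch (the address 10.0.0.11 and port 8080 are just assumed values for an IP configured on the ib0 IPoIB interface): because IPoIB presents an ordinary IP interface, any control-plane TCP service (API endpoints, RabbitMQ, MySQL, ...) can bind to it exactly as it would on Ethernet, and nothing in the application needs to know about InfiniBand or RDMA.

import socket

BIND_ADDR = ("10.0.0.11", 8080)  # assumption: an IP configured on the ib0 IPoIB interface

# Plain TCP listener; nothing InfiniBand-specific is needed, the kernel's
# IPoIB layer handles the encapsulation underneath ordinary sockets.
with socket.create_server(BIND_ADDR) as srv:
    print(f"listening on {BIND_ADDR[0]}:{BIND_ADDR[1]} (plain TCP over IPoIB)")
    conn, peer = srv.accept()
    with conn:
        conn.sendall(b"hello over IPoIB\n")
        print(f"served one connection from {peer[0]}")

Any TCP client on that subnet (for example nc 10.0.0.11 8080) would get the reply; the kernel handles the IPoIB encapsulation transparently.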

On Mon, 6 Jun 2022 at 15:36, A Monster <amonster369 at> wrote:
What I actually want to do is not what is done in this link<>, because there VMs are given access to the InfiniBand port through virtual functions. In my case, I want to use InfiniBand for the OpenStack management, compute and storage networks, to get better performance than with Ethernet over RJ45 cables. So I'm wondering whether that's feasible, and whether it's a good idea.
Thank you.
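
For what it's worth, a rough Python sketch like the one below (nothing OpenStack-specific; it only assumes an InfiniBand HCA driver is loaded so that /sys/class/infiniband exists) prints each HCA port's state and rate, which is one quick way to compare the raw link speed against your Ethernet ports before deciding.

from pathlib import Path

IB_ROOT = Path("/sys/class/infiniband")  # present when an IB HCA driver is loaded

if not IB_ROOT.exists():
    print("no InfiniBand devices found")
else:
    for dev in sorted(IB_ROOT.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
            rate = (port / "rate").read_text().strip()     # e.g. "100 Gb/sec (4X EDR)"
            print(f"{dev.name} port {port.name}: {state}, {rate}")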

On Mon, 6 Jun 2022 at 14:15, A Monster <amonster369 at> wrote:
Thank you for your reply, I'll check the link you've sent to me immediately.