If you are using a Mellanox (NVIDIA) NIC with the inbox driver, the default mode is IPoIB Enhanced mode [1], which adds acceleration support for IPoIB.

[1] - https://www.spinics.net/lists/linux-rdma/msg46802.html
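
A quick way to see which mode your IPoIB interfaces actually ended up in is to read sysfs. Below is a minimal Python sketch; it only uses the standard /sys/class/net attributes, and the note about Enhanced mode keeping the interface in datagram mode is my assumption, not something stated above.

#!/usr/bin/env python3
"""List IPoIB interfaces and the mode they are running in (sketch)."""
import os

SYS_NET = "/sys/class/net"
ARPHRD_INFINIBAND = "32"  # link type that IPoIB interfaces report in sysfs

def ipoib_interfaces():
    """Yield (name, mode) for every IPoIB interface on this host."""
    for name in sorted(os.listdir(SYS_NET)):
        try:
            with open(os.path.join(SYS_NET, name, "type")) as f:
                if f.read().strip() != ARPHRD_INFINIBAND:
                    continue
            with open(os.path.join(SYS_NET, name, "mode")) as f:
                mode = f.read().strip()  # "datagram" or "connected"
        except OSError:
            continue
        yield name, mode

if __name__ == "__main__":
    for name, mode in ipoib_interfaces():
        # Assumption: with the mlx5 inbox driver in Enhanced mode the
        # interface stays in datagram mode; connected mode is unavailable.
        print(f"{name}: IPoIB mode = {mode}")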

 

From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Sent: Monday, June 6, 2022 4:52 PM
Cc: openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: Re: Using infiniband for openstack network communications

 

 

Ah, ok.

 

Well, you can use infiniband for control plane only with IPoIB. There's no RDMA support if that's what you're asking.
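
For the control plane that is enough: the OpenStack services only need plain TCP/IP, which IPoIB already gives you. As a rough sketch (the address and port below are placeholders, not anything from this thread), reaching a service bound on an IPoIB address is just an ordinary socket:

#!/usr/bin/env python3
"""Sketch: control-plane traffic over IPoIB is ordinary TCP/IP."""
import socket
import time

MGMT_IP = "172.29.236.10"  # hypothetical IPoIB address of a controller
PORT = 5672                # e.g. the RabbitMQ port on the management network

start = time.monotonic()
# A plain TCP connect; no RDMA verbs involved, IPoIB simply carries IP.
with socket.create_connection((MGMT_IP, PORT), timeout=5):
    elapsed_ms = (time.monotonic() - start) * 1000
print(f"TCP connect to {MGMT_IP}:{PORT} took {elapsed_ms:.1f} ms")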

 

On Mon, 6 Jun 2022 at 15:36, A Monster <amonster369@gmail.com> wrote:

What I actually want to do is not what is described in this link, https://satishdotpatel.github.io/HPC-on-openstack/, because there the VMs are given access to the InfiniBand port through virtual functions. In my case, I want to use InfiniBand for the OpenStack management, compute and storage networks, to get better performance than with Ethernet over RJ45 cables. So I'm wondering whether that is feasible, and whether it's a good idea or not.

Thank you.

 

On Mon, 6 Jun 2022 at 14:15, A Monster <amonster369@gmail.com> wrote:

Thank you for your reply. I'll check the link you sent right away.