Re: Using infiniband for openstack network communications
Thank you for your reply, I'll check the link you sent right away.
What I actually wanted to do is not what is described at https://satishdotpatel.github.io/HPC-on-openstack/ , because there the VMs are given access to the InfiniBand port through virtual functions. In my case, I want to use InfiniBand for the OpenStack management, compute, and storage networks, to get better performance than with Ethernet over RJ45 cabling. So I'm wondering whether that is feasible, and whether it's a good idea or not. Thank you.
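To illustrate what that setup would look like: with IPoIB, the InfiniBand port shows up to the host as an ordinary network interface (typically ib0), so it can be addressed like any other management or storage NIC. A hypothetical netplan fragment, as a sketch only (the interface name, subnet, and file path are assumptions, not from this thread):

```yaml
# /etc/netplan/60-ipoib.yaml -- hypothetical example
network:
  version: 2
  ethernets:
    ib0:                  # IPoIB interface created by the IB driver
      addresses:
        - 10.10.10.11/24  # e.g. address on the OpenStack management network
      mtu: 2044           # IPoIB datagram-mode default; connected mode allows up to 65520
```

The OpenStack services themselves would then simply be bound to this address, exactly as they would be on an Ethernet interface.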
Ah, ok.
Well, you can use InfiniBand for the control plane, but only via IPoIB. There's no RDMA support there, if that's what you're asking.
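The point above is that IPoIB presents the IB port as a regular network interface, so the plain TCP/IP services the control plane relies on (MySQL, RabbitMQ, the OpenStack APIs) run over it unchanged, just without RDMA. A minimal sketch of that idea, using loopback as a stand-in for an IPoIB address (e.g. 10.10.10.11 on ib0, hypothetical):

```python
import socket
import threading

def echo_roundtrip(payload: bytes) -> bytes:
    """Round-trip a payload through a TCP echo server.

    Loopback stands in for an IPoIB address here: since IPoIB exposes
    the IB port as an ordinary network interface, ordinary TCP sockets
    work over it with no application changes.
    """
    srv = socket.create_server(("127.0.0.1", 0))  # ephemeral port

    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(len(payload)))  # echo back

    threading.Thread(target=serve, daemon=True).start()
    with socket.create_connection(srv.getsockname()) as c:
        c.sendall(payload)
        return c.recv(len(payload))

print(echo_roundtrip(b"ping"))  # b'ping'
```

Nothing in this code is InfiniBand-specific, which is exactly why the control plane needs no changes to run over IPoIB.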
If you are using a Mellanox (NVIDIA) NIC with the inbox driver, the default mode is IPoIB Enhanced mode [1], which adds acceleration support for IPoIB.
[1] https://www.spinics.net/lists/linux-rdma/msg46802.html
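One way to check which IPoIB transport mode an interface came up in is the `mode` attribute the driver exposes under sysfs. A small sketch, with the caveats that the interface name `ib0` is an assumption and that the function returns None on hosts without InfiniBand hardware; note that Enhanced-mode IPoIB runs in datagram mode only:

```python
from pathlib import Path

def ipoib_mode(ifname: str = "ib0"):
    """Read the IPoIB transport mode from sysfs.

    Returns 'datagram' or 'connected', or None when the attribute is
    absent (non-IPoIB interface, or no InfiniBand hardware at all).
    """
    path = Path("/sys/class/net") / ifname / "mode"
    try:
        return path.read_text().strip()
    except OSError:
        return None

print(ipoib_mode())  # 'datagram' or 'connected' with IB hardware; None without
```

This only reports the transport mode; whether the enhanced (accelerated) datapath is active depends on the driver, as described in [1].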
participants (3)
- A Monster
- Dmitriy Rabotyagov
- Moshe Levi