OpenStack HPC InfiniBand question

Satish Patel satish.txt at gmail.com
Wed Feb 16 04:31:00 UTC 2022


Hi all,

I am experimenting with an HPC deployment on an OpenStack cloud, and I
have a Mellanox InfiniBand NIC. I have a couple of deployment questions
regarding the InfiniBand network. I am new to IB, so excuse me if I ask
noob questions.

I have configured the Mellanox NIC for SR-IOV and created a flavor with
the property pci_passthrough:alias='mlx5-sriov-ib:1' to expose a VF to
my instance. So far so good: I can see the IB interface inside my VM
and it is active. (I am running the subnet manager on the InfiniBand
hardware switch.)
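
Roughly, the relevant pieces on the Nova side look like the sketch
below. The vendor_id/product_id and flavor name are placeholders (check
your VF IDs with lspci -nn), and the [pci] alias also has to be defined
in nova.conf on the node running nova-api:

# /etc/nova/nova.conf on the compute node: whitelist the VFs and define the alias
[pci]
passthrough_whitelist = { "vendor_id": "15b3", "product_id": "101c" }
alias = { "vendor_id": "15b3", "product_id": "101c", "device_type": "type-VF", "name": "mlx5-sriov-ib" }

# flavor that requests one VF per instance (flavor name is just an example)
$ openstack flavor create --vcpus 8 --ram 16384 --disk 40 hpc.ib
$ openstack flavor set hpc.ib --property pci_passthrough:alias='mlx5-sriov-ib:1'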

root@ib-vm:~# ethtool -i ibs5
driver: mlx5_core[ib_ipoib]
version: 5.5-1.0.3
firmware-version: 20.28.1002 (MT_0000000222)
expansion-rom-version:
bus-info: 0000:00:05.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

I didn't configure any IP address on the ibs5 interface. For testing
purposes I compiled an MPI hello-world program as a proof of concept
for the InfiniBand network between two instances, and I was able to run
the sample MPI program successfully.
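
For reference, the sanity check looked roughly like this. The
hostnames, hostfile and binary name are just placeholders, and I am
assuming an Open MPI build (the UCX option only applies if it was built
with UCX support):

# inside each VM: confirm the VF sees the fabric (tools from infiniband-diags / rdma-core)
$ ibstat          # port state should show Active / LinkUp
$ ibv_devinfo     # verbs-level view of the mlx5 VF

# two-instance run, reaching the VMs over their regular management network hostnames
$ cat hosts
ib-vm1 slots=1
ib-vm2 slots=1
$ mpirun -np 2 -hostfile hosts ./mpi_hello
$ mpirun -np 2 -hostfile hosts --mca pml ucx ./mpi_hello   # force the UCX path so traffic goes over IB verbs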

Somewhere I read about the Neutron Mellanox agent for setting up IPoIB
with network segmentation, but I am not sure how complex that is or
what the advantages are over simple SR-IOV passthrough.

Is this the correct way to set up an HPC cluster on OpenStack, or is
there a better way to design HPC on OpenStack?


