Using InfiniBand for OpenStack network communications
Is it possible to use the InfiniBand port for OpenStack networks without having to use the InfiniBand port as an Ethernet port?
Yes, why not? As long as your InfiniBand card supports SR-IOV there should be no problems with that. You can also check a blog which describes the experience: https://satishdotpatel.github.io/HPC-on-openstack/

Sun, 5 Jun 2022, 15:20, A Monster <amonster369@gmail.com>:
Is it possible to use the InfiniBand port for OpenStack networks without having to use the InfiniBand port as an Ethernet port?
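For reference, a minimal sketch of how to check whether a card exposes SR-IOV VFs from the host; the interface name ib0 and the VF count are illustrative assumptions, not taken from the thread:

# how many VFs the PF supports and how many are currently enabled
cat /sys/class/net/ib0/device/sriov_totalvfs
cat /sys/class/net/ib0/device/sriov_numvfs

# enable 8 VFs on the PF (assumes the IPoIB interface is named ib0)
echo 8 > /sys/class/net/ib0/device/sriov_numvfs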
On Sun, 2022-06-05 at 15:22 +0200, Dmitriy Rabotyagov wrote:
Yes, why not? As long as your InfiniBand card supports SR-IOV there should be no problems with that.
You can also check a blog which describes the experience: https://satishdotpatel.github.io/HPC-on-openstack/
Well, it could work, but it's not really a tested use case. Security groups, for example (which, yes, I know don't work with normal SR-IOV), more or less assume Ethernet. OVS and other backends do assume Ethernet, so you can't use an InfiniBand interface for OVS, OVN or Linux bridge. Neutron ports also implicitly assume Ethernet via the MAC address field, so you can't really use InfiniBand without Ethernet in OpenStack other than via a direct passthrough to a guest. In that case you are not using InfiniBand with neutron networks; you are using InfiniBand without integration with the neutron networking API, via flavor-based PCI passthrough.

That is what https://satishdotpatel.github.io/HPC-on-openstack/ describes. You will notice that the whitelist entry

passthrough_whitelist = { "vendor_id": "10de", "product_id": "1df6" }

does not contain a physical_network tag, which means this VF is not usable via neutron SR-IOV ports (vnic_type=direct in the case of a VF). They are instead creating a PCI alias

alias = { "vendor_id":"15b3", "product_id":"101c", "device_type":"type-VF", "name":"mlx5-sriov-ib" }

then requesting that in the flavor

openstack flavor create --vcpus 8 --ram 16384 --disk 100 --property hw:mem_page_size='large' --property hw:cpu_policy='dedicated' --property "pci_passthrough:alias"="mlx5-sriov-ib" ib.small

Then, when the VM is created, it has a neutron network for management

openstack server create --flavor ib.small --image ubuntu_20_04 --nic net-id=private1 ib-vm1

and a passed-through InfiniBand VF.

So I would not consider this to be "use the infiniband port for openstack networks", since the InfiniBand port is not associated with any neutron network or modeled as a neutron port. As far as I am aware there is no support for using InfiniBand with a neutron port and vnic_type=direct.

So, can you use InfiniBand with OpenStack? Yes. Can you use InfiniBand with OpenStack networking? No.
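For contrast, a rough sketch of what a neutron-managed (vnic_type=direct) SR-IOV setup looks like for an Ethernet-mode VF; the physical_network tag is exactly what the blog's whitelist entry omits. The physnet, network, flavor and port names below are illustrative placeholders, not taken from the blog:

# nova.conf on the compute node: the whitelist entry carries a
# physical_network tag, so neutron can place ports onto these VFs
passthrough_whitelist = { "vendor_id": "15b3", "product_id": "101c", "physical_network": "physnet2" }

# the VF is then consumed through a neutron port rather than a flavor alias
openstack port create --network sriov-net --vnic-type direct sriov-port1
openstack server create --flavor m1.small --image ubuntu_20_04 --nic port-id=sriov-port1 sriov-vm1

With an InfiniBand-mode VF this path is not available, which is the distinction being drawn above.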
On 6/5/22 06:15, A Monster wrote:
Is it possible to use the InfiniBand port for OpenStack networks without having to use the InfiniBand port as an Ethernet port?
In theory you could use InfiniBand with IPoIB for the API/RPC control plane, and possibly for tenant virtual networks using VXLAN. I think you must use Ethernet mode for provider networks or external provider networks, but I could be mistaken.

As of Stein it was not possible to use InfiniBand for bare metal nodes using Ironic [0] with DHCP and PXE boot. There may have been additional work done since; however, I never saw completion of the dependencies to make this work, but it might be possible using config-drive rather than PXE boot and DHCP.

I have worked with Mellanox to use some InfiniBand-capable NICs (MLX-ConnectX-5) with ML2/OVS, but only in Ethernet mode. These cards can also be used with DPDK using the "mlx5_core" driver.

--
Dan Sneddon | Senior Principal Software Engineer
dsneddon@redhat.com | redhat.com/cloud
dsneddon:irc | @dxs:twitter
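As a rough illustration of the VXLAN-over-IPoIB idea above (a sketch only; the interface name ib0 and the address are assumptions, and the thread does not report this as a tested configuration): since VXLAN endpoints only need IP reachability between hypervisors, the ML2/OVS agent's tunnel endpoint could in principle be an address assigned to an IPoIB interface.

# /etc/neutron/plugins/ml2/openvswitch_agent.ini on each compute node
[ovs]
# IP configured on the IPoIB interface (e.g. ib0) carrying the overlay traffic
local_ip = 192.0.2.10

[agent]
tunnel_types = vxlan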
participants (4)
- A Monster
- Dan Sneddon
- Dmitriy Rabotyagov
- Sean Mooney