[openstack-hpc] Infiniband + Openstack?

John Paul Walters jwalters at isi.edu
Wed Jan 23 21:02:24 UTC 2013


Z,

We're working on integrating IB into OpenStack via SR-IOV.  It's not quite there yet, but we expect to have it working within a month or so.  Right now I'm unaware of anyone who has IB working inside of a VM; that's what we're developing.  To do this, you need a few things: 1) a host that supports SR-IOV, 2) a ConnectX-2 or ConnectX-3 IB card, and 3) alpha SR-IOV-enabled firmware for the cards, along with the corresponding SR-IOV-enabled OFED.  Both of those (the firmware and OFED) currently have to come from Mellanox - they haven't been released yet.  So far we've been successful in getting RDMA working inside of the VM; oddly, though, IPoIB doesn't work (even though the ports are recognized as being connected).  We hope this will get worked out in later firmware/OFED releases.
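For readers unfamiliar with the SR-IOV side of this: once you have SR-IOV-capable firmware, the ConnectX driver is typically asked to create virtual functions via a module option.  A rough sketch (the file path and num_vfs value are illustrative, and exact option names depend on the Mellanox OFED release, not something I can speak to for every version):

```shell
# /etc/modprobe.d/mlx4_core.conf -- example only; confirm option names
# against the OFED release you get from Mellanox.
# Ask mlx4_core to create 8 virtual functions; probe_vf=0 leaves all
# VFs unbound on the host so they can be passed through to guests.
options mlx4_core num_vfs=8 probe_vf=0
```

After reloading the driver (and with SR-IOV enabled in the BIOS and firmware), the VFs should show up as additional Mellanox PCI functions in `lspci`.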

Be aware that this isn't a Quantum integration, at least not yet.  We're going to manage SR-IOV VIFs as resources and schedule accordingly.  I know that there are folks using IB for image distribution and perhaps also for block storage.  Our target is to get IB into the VM.
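For a sense of what "getting IB into the VM" looks like one layer down: outside of any OpenStack integration, a VF is handed to a guest as a PCI passthrough device in the libvirt domain XML.  A minimal fragment (the PCI address here is made up for illustration):

```xml
<!-- Example libvirt domain XML fragment: pass an SR-IOV VF
     (illustrative PCI address) straight through to the guest. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```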

JP


On Jan 23, 2013, at 3:36 PM, zebra z zebra <zebra.x0r at gmail.com> wrote:

> Hi.
> 
> I've been trying to get some simple/straight answers on IB for OpenStack
> for a few days now. It could well be that I've just been posting to the
> wrong list! ;)
> 
> I'm considering a reasonably sized OpenStack deployment shortly (several
> thousand cores). One of the decisions I need to make centres around the
> type of interconnect I will use to connect the compute nodes, Swift
> object storage, shared storage, Swift proxies, et al.
> 
> I have been heavily considering FDR infiniband for this task, but, from
> what I can see, IB has only recently become something one might consider
> stable for use in OpenStack. My peers are suggesting to me that I should
> just stick with 10GbE interconnects and be happy.
> 
> In doing some background reading, it looks like somebody asked similar
> questions a year or two ago, and the general sentiment was that it'd
> work in IPoIB mode as just a very fast Ethernet device, but without any
> of the benefits of RDMA communication - the issue at the time being
> that no Python libraries had any understanding or concept of it.
> 
> I'd like to hear from people who are using IB, or who have experience
> with IB deployments.
> 
> Thank you.
> 
> --z
> 
> 
> 
> _______________________________________________
> OpenStack-HPC mailing list
> OpenStack-HPC at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc



