Coming from the HPC world, I have moderate experience with IB. I have to say that while the performance is awesome when everything works as expected, even in IPoIB mode, in my very personal experience InfiniBand networks are not as reliable as Ethernet. You will have to fight HBA/switch incompatibilities, replace cables because all of a sudden they stop running at full speed, etc. However, I would be glad if someone could share other, opposite experiences.

.a.

On Thu, Jan 24, 2013 at 9:16 AM, zebra z zebra <zebra.x0r@gmail.com> wrote:
Hi.
So, we've had a further idea for our build out process.
We'd like a "sanity check" from the list to see if it makes sense.
Based upon what JP has said below, we're confident that, in time, things will work "end to end" from an OFED + RDMA + SR-IOV perspective (and IPoIB will work too, one day). Currently, though, we aren't there yet.
So, to that end, we've had an idea.
What if, for the time being, we run the ConnectX-3 IB cards in Ethernet personality, reap all the benefits of a 40 Gbit/sec Ethernet plumbing environment, and then, when OpenStack reaches the point where it really hums in terms of full IB integration, simply flash the personality of the cards over to IB mode and run with that?
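For what it's worth, here's a minimal, hypothetical sketch of what that personality flip could look like on a Linux host, assuming the mlx4_core driver is loaded and exposes the per-device mlx4_portN sysfs attributes; for a persistent change you would normally use Mellanox's own tooling (e.g. connectx_port_config) rather than a script like this:

#!/usr/bin/env python
# Hypothetical sketch (run as root): flip ConnectX-3 ports between
# Ethernet and IB personality via the mlx4_core sysfs attributes.
# Assumes /sys/bus/pci/devices/<bdf>/mlx4_portN exists for each HCA.
import glob
import sys

def set_port_type(port, link_type):
    """Write 'ib', 'eth' or 'auto' to every mlx4_port<N> attribute found."""
    if link_type not in ("ib", "eth", "auto"):
        raise ValueError("link_type must be 'ib', 'eth' or 'auto'")
    attrs = glob.glob("/sys/bus/pci/devices/*/mlx4_port%d" % port)
    if not attrs:
        sys.exit("no mlx4 devices found - is mlx4_core loaded?")
    for attr in attrs:
        with open(attr, "w") as f:
            f.write(link_type + "\n")
        print("%s -> %s" % (attr, link_type))

if __name__ == "__main__":
    # e.g. run both ports as 40GbE for now; switch back to "ib" later
    set_port_type(1, "eth")
    set_port_type(2, "eth")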
Seems like a sensible way to wade into InfiniBand: we'd have a functional system from the get-go, then gain all the benefits of IB later.
Comments/thoughts?
Thank you, all.
z
On 24/01/13 7:02 AM, "John Paul Walters" <jwalters@isi.edu> wrote:
Z,
We're working on integrating IB into OpenStack via SR-IOV. It's not quite there yet, but we expect to have it working within a month or so. Right now, I'm unaware of anyone who has IB working inside of a VM; that's what we're developing. To do this, you need a few things:

1) a host that supports SR-IOV,
2) a ConnectX-2 or ConnectX-3 IB card, and
3) alpha SR-IOV-enabled firmware for the cards, as well as the corresponding SR-IOV-enabled OFED.

Both of the latter (the firmware and OFED) need to come from Mellanox at the moment - they're not yet released. We've so far been successful in getting RDMA working inside of the VM; oddly, however, IPoIB doesn't work (even though the ports are recognized as being connected). We hope that this will get worked out in later firmware/OFED releases.
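As an aside, here's a small, hypothetical Python sketch for checking whether the HCAs in a host expose SR-IOV and how many virtual functions are currently enabled. It assumes a kernel recent enough to publish the sriov_totalvfs/sriov_numvfs sysfs attributes, and says nothing about the alpha Mellanox firmware/OFED mentioned above:

#!/usr/bin/env python
# Hypothetical sketch: list InfiniBand devices and report their
# SR-IOV capability via the standard sysfs attributes.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return None

def main():
    for dev in sorted(glob.glob("/sys/class/infiniband/*")):
        name = os.path.basename(dev)
        pci = os.path.realpath(os.path.join(dev, "device"))
        total = read(os.path.join(pci, "sriov_totalvfs"))
        numvfs = read(os.path.join(pci, "sriov_numvfs"))
        if total is None:
            print("%s (%s): no SR-IOV capability visible" % (name, pci))
        else:
            print("%s (%s): %s of %s VFs enabled" % (name, pci, numvfs, total))

if __name__ == "__main__":
    main()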
Be aware that this isn't a Quantum integration, at least not yet. We're going to manage SR-IOV VIFs as resources and schedule accordingly. I know that there are folks using IB for image distribution and perhaps also for block storage. Our target is to get IB into the VM.
JP
On Jan 23, 2013, at 3:36 PM, zebra z zebra <zebra.x0r@gmail.com> wrote:
Hi.
I've been trying to get some simple/straight answers on IB for OpenStack for a few days now. It could well be that I've just been posting to the wrong list! ;)
I'm considering a reasonably sized OpenStack deployment shortly (several thousand cores). One of the decisions I need to make centres on the type of interconnect I will use to connect the compute nodes, Swift object storage, shared storage, Swift proxies, et al.
I have been heavily considering FDR InfiniBand for this task, but, from what I can see, IB has only recently become something one might consider stable for use in OpenStack. My peers are suggesting that I should just stick with 10GbE interconnects and be happy.
In doing some background reading, it looks like somebody asked similar questions a year or two ago, and the general sentiment was that it would work in IPoIB mode as just a very fast Ethernet device, but without any of the benefits of RDMA communication or any such capability - the issue at the time was that no Python libraries had any understanding or concept of it.
I'd like to hear the experiences of people who have or are using IB, or who have experiences with IB deployments.
Thank you.
--z
--
antonio.s.messina@gmail.com
GC3: Grid Computing Competence Center
http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland