[openstack-hpc] Infiniband + Openstack?

zebra z zebra zebra.x0r at gmail.com
Wed Jan 23 20:36:25 UTC 2013


Hi.

I've been trying to get some simple/straight answers on IB for OpenStack
for a few days now. It could well be that I've just been posting to the
wrong list! ;)

I'm planning a reasonably sized OpenStack deployment (several thousand
cores) in the near future. One of the decisions I need to make centres
on the type of interconnect I will use to connect the compute nodes,
Swift object storage, shared storage, Swift proxies, et al.

I have been seriously considering FDR InfiniBand for this task but, from
what I can see, IB has only recently become something one might consider
stable for use with OpenStack. My peers are suggesting that I should just
stick with 10GbE interconnects and be happy.

In doing some background reading, it looks like somebody asked similar
questions a year or two ago, and the general sentiment then was that IB
would work in IPoIB mode as just a very fast Ethernet device, but without
any of the benefits of RDMA communication, or any such capability - the
issue at the time being that no Python libraries had any understanding
or concept of RDMA.
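
To make that point concrete, here is a minimal sketch of what I mean (the
ib0 address and port are hypothetical, just for illustration): with IPoIB,
the kernel presents the HCA as an ordinary IP interface, so unmodified
Python socket code runs over it unchanged - but every byte still traverses
the full TCP/IP stack, and nothing in the application touches RDMA verbs.

    import socket

    # Hypothetical address assigned to the IPoIB interface (e.g. ib0);
    # substitute an address from your own fabric.
    IPOIB_ADDR = "10.10.0.1"
    PORT = 5001

    # A plain TCP echo server bound to the IPoIB address. The kernel's
    # IPoIB driver carries these packets over InfiniBand, but the
    # application sees an ordinary socket - no RDMA is involved.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((IPOIB_ADDR, PORT))
    srv.listen(1)

    conn, peer = srv.accept()
    data = conn.recv(4096)   # goes through TCP/IP, not RDMA
    conn.sendall(data)
    conn.close()
    srv.close()

So as far as OpenStack's Python services are concerned, IPoIB is just
another (fast) network device; the kernel-bypass, zero-copy benefits of
IB stay out of reach.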

I'd like to hear from people who have used or are currently using IB, or
who otherwise have experience with IB deployments.

Thank you.

--z




