[Openstack] HPC with Openstack?

Cole coleton at gmail.com
Sat Dec 3 22:08:55 UTC 2011

First and foremost:

With NUMA awareness and lightweight container technology (LXC / OpenVZ) you
can get very close to bare-metal performance for certain HPC
applications.  The problem with technologies like LXC is that there isn't
much built-in logic for CPU affinity of the kind other hypervisors offer
(and a full hypervisor generally wouldn't be ideal for HPC anyway).
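Since the container shares the host kernel's scheduler, you can script that affinity logic yourself. A minimal sketch, assuming a Linux host (the `pin_to_cpus` helper name is mine, not part of LXC):

```python
import os

def pin_to_cpus(cpus):
    """Pin the calling process to the given CPU set (Linux only).

    Inside an LXC container this behaves the same as on bare metal,
    because the container shares the host kernel's scheduler -- this is
    the affinity control you otherwise have to add by hand.
    """
    os.sched_setaffinity(0, set(cpus))   # pid 0 means the current process
    return os.sched_getaffinity(0)       # effective mask, for verification

if __name__ == "__main__":
    # CPU 0 always exists, so this is safe on any Linux box.
    print("pinned to:", pin_to_cpus({0}))
```

For per-container (rather than per-process) pinning you would normally set the container's cpuset instead, but the kernel interface being exercised is the same.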

On the interconnect side, there are plenty of Open-MX
(http://open-mx.gforge.inria.fr/) HPC applications running on everything
from single-channel 1 GbE to bonded 10 GbE.

This is an area I'm personally interested in; I have done some testing and
will be doing more.  If you are going to try HPC over Ethernet, Arista
makes the lowest-latency switches in the business.
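If you want a first-order feel for your latency floor before worrying about switches, a TCP ping-pong microbenchmark is a crude but easy baseline. A sketch (the `measure_rtt` helper is illustrative and runs over loopback; real interconnect numbers require two hosts across the actual NIC and switch path):

```python
import socket
import threading
import time

def _echo_server(listener):
    # Accept one connection and echo everything back until EOF.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(iterations=1000):
    """Mean round-trip time in seconds for a 64-byte TCP ping-pong."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))      # ephemeral port
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=_echo_server, args=(listener,), daemon=True).start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

    payload = b"x" * 64
    start = time.perf_counter()
    for _ in range(iterations):
        client.sendall(payload)
        received = 0
        while received < len(payload):
            received += len(client.recv(64))
    elapsed = time.perf_counter() - start
    client.close()
    listener.close()
    return elapsed / iterations

if __name__ == "__main__":
    print("mean RTT: %.1f us" % (measure_rtt() * 1e6))
```

The same ping-pong pattern over the real wire (one side as server, the other as client) gives you a number you can compare against the vendor's quoted port-to-port latency.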


On Sat, Dec 3, 2011 at 11:11 AM, Tim Bell <Tim.Bell at cern.ch> wrote:

> At CERN, we are facing similar questions as we look to the cloud: how to
> match VM creation performance (typically O(minutes)) with the rates the
> batch job system requires for a single program (O(sub-second)).  Data
> locality, aiming to run each job close to its source data, makes this
> harder, as does fair share, which aligns job priorities to the agreed
> quotas between competing requests for a limited, shared resource.  The
> classic IaaS model of 'have credit card, will compute' does not apply
> for some private cloud use cases/users.
> We would be interested to discuss further with other sites.  There is
> further background from OpenStack Boston at http://vimeo.com/31678577.
> Tim
> tim.bell at cern.ch