[openstack-hpc] How are you using OpenStack for your HPC workflow? (We're looking at Ironic!)

John-Paul Robinson jpr at uab.edu
Tue Jan 13 21:03:57 UTC 2015


Hi Jonathon,

We are in a similar position at UAB and are investigating a solution with
OpenStack-managed hardware-level provisioning as well.  I'm interested in
being able to instantiate a cluster via OpenStack on select hardware
fabrics using Ironic.

We currently have a traditional ROCKS HPC fabric augmented by Kickstart
and Puppet for additional services (NFS servers, monitoring,
documentation, project tools, etc.) spread out over hardware and VMs.  We
began adding VM services into the cluster to support ancillary tools and
newer use cases like web front ends for HPC job submission in select
science domains.

We added an OpenStack fabric in 2013 to simplify such services and make
them available to end users, allowing greater flexibility and autonomy.
It has been invaluable as a way to instantiate new services and other
test configurations quickly and easily. This page gives a bit of
background and an overview.

https://dev.uabgrid.uab.edu/wiki/OpenStackPlusCeph

My goal is to have a generally available OpenStack fabric that can serve
our research community with Amazon-like functionality, allowing them to
build their own scale-outs if desired and potentially even help us
manage the "HPC engine" (replacing the ROCKS model with more
cloud-oriented provisioning).   There's another thread here that is looking
at Puppet+Foreman to handle the hardware provisioning -- building on our
Puppet effort.  We also have a Chef+Crowbar fabric that provides similar
services for our existing OpenStack deploy.

Right now we are planning our migration from Essex to Juno++.  This is
presenting its own challenges but gives us a chance to build from a
clean foundation.  I haven't played with Ironic yet, but it is definitely
a tool I am interested in.
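From what I've read of the Juno-era docs, enrolling a bare-metal node in
Ironic and booting it through Nova looks roughly like the sketch below.
All of the addresses, credentials, flavor/image names, and the node UUID
are placeholders, not anything from our environment, so treat this as a
rough outline rather than a tested recipe:

```shell
# Enroll a node with the PXE + IPMItool driver (placeholder IPMI details).
ironic node-create -d pxe_ipmitool \
    -i ipmi_address=10.0.0.5 \
    -i ipmi_username=admin \
    -i ipmi_password=secret

# Record the hardware properties the scheduler will match against
# (NODE_UUID comes from the node-create output above).
ironic node-update NODE_UUID add \
    properties/cpus=16 \
    properties/memory_mb=65536 \
    properties/local_gb=500 \
    properties/cpu_arch=x86_64

# Register the node's provisioning NIC by MAC address.
ironic port-create -n NODE_UUID -a aa:bb:cc:dd:ee:ff

# Boot it like any other instance, using a flavor that maps to the
# bare-metal properties above.
nova boot --flavor baremetal --image centos-7 --key-name mykey hpc-node-01
```

The appeal for us is that the same Nova API call then covers both VMs
and raw hardware, which is exactly the split I describe below.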

I'd like to be able to support VM containers for HPC jobs, but I can't
yet determine whether these should run inside the OpenStack-provisioned
cluster as traditional HPC-like jobs, or be allocated resources from the
raw hardware pool or as VMs (depending on requirements).  It's probably
a matter of perspective.

~jpr

On 01/13/2015 01:13 PM, Jonathon A Anderson wrote:
> Hi, everybody!
>
> We at CU-Boulder Research Computing are looking at using OpenStack Ironic
> for our HPC cluster(s). I’ve got a sysadmin background, so my initial goal
> is to simply use OpenStack to deploy our traditional queueing system
> (Slurm), and basically use OpenStack (w/Ironic) as a replacement for
> Cobbler, xcat, perceus, hand-rolled PXE, or whatever else sysadmin-focused
> PXE-based node provisioning solution one might use. That said, once we
> have OpenStack (and have some competency with it), we’d presumably be well
> positioned to offer actual cloud-y virtualization to our users, or even
> offer baremetal instances for direct use for some of our (presumably most
> trusted) users.
>
> But how about the rest of you? Are you using OpenStack in your HPC
> workloads today? Are you doing “traditional” virtualization? Are you using
> the nova-compute baremetal driver? Are you using Ironic? Something even
> more exotic?
>
> And, most importantly, how has it worked for you?
>
> I’ve finally got some nodes to start experimenting with, so I hope to have
> some real hands-on experience with Ironic soon. (So far, I’ve only been
> able to do traditional virtualization on some workstations.)
>
> ~jonathon
>
> _______________________________________________
> OpenStack-HPC mailing list
> OpenStack-HPC at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
