[openstack-hpc] How are you using OpenStack for your HPC workflow? (We're looking at Ironic!)

Gilbert, Chuck cjg23 at psu.edu
Fri Jan 16 14:33:59 UTC 2015


At Penn State we are working on a multi-phased deployment effort.  The first phase deploys a split system: a partial OpenStack cloud alongside a "traditional" HPC cluster.  This gives us time to deploy and tune the OpenStack configuration and then expand to a fully OpenStack-governed infrastructure once Ironic support is released in April.  The traditional HPC piece lets users who need access to new cores now continue working while we finalize the deployment of our research cloud.

The two follow-on hardware/software deployments expand the infrastructure to support bare-metal as well as virtualized workloads, and eventually roll the remaining "traditional" HPC systems under the same OpenStack umbrella, giving our researchers more control and a choice of consumption models to meet the broad needs of our community.
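For concreteness, here is a rough Python sketch of what enrolling a bare-metal node could look like once Ironic lands (this assumes the Kilo-era python-ironicclient and the classic pxe_ipmitool driver; the credentials, addresses, and hardware figures are made-up placeholders, not our actual configuration):

    # Rough sketch: enroll a bare-metal node with python-ironicclient.
    # All credentials and addresses below are hypothetical placeholders.
    from ironicclient import client

    ironic = client.get_client(
        1,  # Ironic API major version
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://keystone.example.edu:5000/v2.0',
    )

    # Register the node with its IPMI credentials and a hardware profile,
    # so the scheduler can match it against a bare-metal flavor.
    node = ironic.node.create(
        driver='pxe_ipmitool',
        driver_info={
            'ipmi_address': '10.0.0.10',
            'ipmi_username': 'ipmi-admin',
            'ipmi_password': 'ipmi-secret',
        },
        properties={'cpus': 20, 'memory_mb': 131072, 'local_gb': 500},
    )
    print(node.uuid)

From there, scheduling bare-metal instances should look like the familiar "nova boot" workflow against a matching flavor.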
________________________________
Chuck Gilbert
Systems Architect and Systems Team Lead for Advanced CyberInfrastructure
Institute for Cyberscience
The Pennsylvania State University
Phone: 814.867.4575
Email: cjg23 at psu.edu


________________________________________
From: openstack-hpc-request at lists.openstack.org
Sent: Friday, January 16, 2015 7:00 AM
To: openstack-hpc at lists.openstack.org
Subject: OpenStack-HPC Digest, Vol 21, Issue 6



Today's Topics:

   1. Re: How are you using OpenStack for your HPC workflow? (We're
      looking at Ironic!) (Jure Pečar)


----------------------------------------------------------------------

Message: 1
Date: Thu, 15 Jan 2015 15:37:17 +0100
From: Jure Pečar <pegasus at nerv.eu.org>
To: Tim Bell <Tim.Bell at cern.ch>
Cc: openstack-hpc at lists.openstack.org
Subject: Re: [openstack-hpc] How are you using OpenStack for your HPC
        workflow? (We're looking at Ironic!)
Message-ID: <20150115153717.09212d3a.pegasus at nerv.eu.org>
Content-Type: text/plain; charset=UTF-8

On Wed, 14 Jan 2015 06:24:31 +0000
Tim Bell <Tim.Bell at cern.ch> wrote:


> We're starting to have a look at bare metal and containers too. Using OpenStack as an overall resource allocation system to ensure all compute usage is accounted for and managed is of great interest. The highlighted items above will remain, but hopefully we can work with others in the community to address them (as we are with nested projects).

Take a look at Mesos (mesos.apache.org, mesosphere.com). I did some thinking on the OpenStack+HPC topic but didn't like the conclusions I arrived at, so I went shopping for alternatives. Mesos+Docker looks like the best one so far. A rough illustration follows the link below.

https://jure.pecar.org/2014-12-22/spirals
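
As a rough illustration (assuming Marathon as the scheduler on top of Mesos; the hostname and app definition below are made-up placeholders), launching a Docker container boils down to one POST against Marathon's REST API:

    # Rough sketch: submit a Docker app to Marathon (on Mesos) via its
    # /v2/apps REST endpoint. Hostname and app values are placeholders.
    import requests

    app = {
        'id': '/hpc-demo',
        'cpus': 2,
        'mem': 4096,        # MB
        'instances': 1,
        'cmd': 'sleep 3600',
        'container': {
            'type': 'DOCKER',
            'docker': {'image': 'centos:7', 'network': 'HOST'},
        },
    }

    resp = requests.post('http://marathon.example.org:8080/v2/apps', json=app)
    resp.raise_for_status()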


--

Jure Pečar
http://jure.pecar.org



------------------------------

End of OpenStack-HPC Digest, Vol 21, Issue 6
********************************************


