[openstack-hpc] What's the state of openstack-hpc now?

me,apporc appleorchard2000 at gmail.com
Wed Mar 16 04:16:57 UTC 2016


Thank you all. From what you have posted, I now have a picture of the current
state of HPC on OpenStack.

As I understand it, thanks to
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking and
https://wiki.openstack.org/wiki/Pci_passthrough it is possible to create
instances that form an HPC cluster, and the performance is very good too.
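
For example, for the SR-IOV networking part the basic workflow seems to be
something like this (a rough sketch; the network, image and flavor names are
just placeholders, not from a real deployment):

    # create a direct (SR-IOV) port on the provider network, then boot
    # an instance attached to it
    neutron port-create sriov-net --name hpc-port0 --binding:vnic_type direct
    nova boot --flavor m1.hpc --image centos7 \
        --nic port-id=<uuid of hpc-port0> hpc-node0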

As for managing the HPC clusters (if we end up with many of them), we can
use Heat now, or Senlin later.
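
For Heat, a minimal template for a fixed-size cluster could look roughly like
this (a sketch only; image, flavor, key and network names are placeholders):

    # cluster.yaml
    heat_template_version: 2015-10-15
    resources:
      compute_nodes:
        type: OS::Heat::ResourceGroup
        properties:
          count: 4
          resource_def:
            type: OS::Nova::Server
            properties:
              name: hpc-node-%index%
              image: centos7
              flavor: m1.hpc
              key_name: hpc-key
              networks: [{network: hpc-net}]

Then "heat stack-create hpc-cluster -f cluster.yaml" (or the equivalent
openstack CLI call) brings the whole group up, and "heat stack-delete" tears
it down again.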

On Tue, Mar 15, 2016 at 9:54 PM, Blair Bethwaite <blair.bethwaite at gmail.com>
wrote:

> Hi,
>
> Apologies for top-posting but I don't intend to answer all the
> historical project points you've raised. Regarding old things floating
> around on github, your mileage may vary, but I doubt at this point you
> want to be looking at any of that in great detail. You haven't really
> explained what you mean by or want from HPC in this context, so I'm
> guessing a little based on your other questions...
>
> OpenStack is many things to different people and organisations, but at
> the software core is a very flexible infrastructure provisioning
> framework. HPC requires infrastructure (compute, network, storage),
> and OpenStack can certainly deliver it - make your deployment choices
> to suit your use-cases. A major choice would be whether you will use
> full system virtualisation or bare-metal or containers or <insert next
> trend> - that choice largely depends on your typical workloads and
> what style of cluster you want. Beyond that, compared to "typical"
> cloud hardware - faster CPUs, faster memory, faster network (probably
> with much greater east-west capacity), integration of a suitable
> parallel file-system.
>
> However, OpenStack is not an HPC management / scheduling / queuing /
> middleware system - there are lots of those already and you should
> pick one that fits your requirements and then (if it helps) run it
> atop an OpenStack cloud (it might help, e.g., if you want to run
> multiple logical clusters on the same physical infrastructure, if you
> want to mix other more traditional cloud workloads in, if you're just
> doing everything with OpenStack like the other cool kids). There are
> lots of nuances here, e.g., where one scheduler might lend itself
> better to more dynamic infrastructure (adding/removing instances),
> another might be lighter-weight for use with a Cluster-as-a-Service
> deployment model, whilst another suits a multi-user managed service
> style cluster. I'm sure there is good experience and opinion hidden on
> this list if you want to interrogate those sorts of choices more
> specifically.
>
> Most of the relevant choices you need to make with respect to running
> HPC workloads on infrastructure that is provisioned through OpenStack
> will come down to your hypervisor choices. My preference for now is to
> stick with the OpenStack community's most popular free OS and
> hypervisor (Ubuntu and KVM+Libvirt) - when I facilitated the
> hypervisor-tuning ops session at the Vancouver summit (with a bunch of
> folks interested in HPC on OpenStack) there was no-one in the room
> running a different hypervisor, though several were using RHEL. With
> the right tuning KVM can get you to within a hair's breadth of
> bare-metal performance for a wide range of CPU, memory and
> inter-process comms benchmarks, plus you can easily make use of PCI
> passthrough for latency sensitive or "difficult" devices like
> NICs/HCAs and GPGPUs. And the "right tuning" is not really some arcane
> knowledge, it's mainly about exposing host CPU capabilities, pinning
> vCPUs to pCPUs, and tuning or pinning and exposing NUMA topology -
> most of this is supported directly through OpenStack-native features
> now.
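>
> For example (just a sketch, not from any particular deployment; the
> flavor name and the core range are arbitrary), most of that boils down
> to a few flavor extra specs plus two compute-node settings:
>
>     # pin vCPUs to dedicated pCPUs and keep the guest on one NUMA node
>     nova flavor-key m1.hpc set hw:cpu_policy=dedicated hw:numa_nodes=1
>
>     # nova.conf on the compute nodes: reserve some cores for the host
>     # and expose the host CPU model to the guests
>     [DEFAULT]
>     vcpu_pin_set = 2-23
>     [libvirt]
>     cpu_mode = host-passthrough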
>
> To answer the GPU question more explicitly - yes you can do this.
> Mainly you need to ensure you're getting compatible hardware (GPU and
> relevant motherboard components) - most of the typical GPGPU choices
> (e.g. K80, K40, M60) will work, and you should probably be wary of
> PCIe switches unless you know exactly what you're doing (recommend
> trying before buying). At the OpenStack level you just define the PCI
> devices you want OpenStack Nova to provision and you can then define
> custom instance-types/flavors that will get a GPU passed through.
> Similar things go for networking.
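>
> For the GPU case that comes down to a few lines of Nova config (the
> vendor/product IDs below are only an example; check lspci -nn on your
> compute hosts for the real ones):
>
>     # nova.conf on the compute nodes
>     pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "1023"}
>     pci_alias = {"vendor_id": "10de", "product_id": "1023", "name": "gpu"}
>
>     # add PciPassthroughFilter to scheduler_default_filters on the
>     # scheduler, then attach the alias to a flavor:
>     nova flavor-key g1.gpu set "pci_passthrough:alias"="gpu:1"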
>
> Lastly, just because you can do this doesn't make it a good idea...
> OpenStack is complex, HPC systems are complex, layering one
> complicated thing on another is a good way to create tricky problems
> that hide in the interface between the two layers. So make sure you're
> gaining something from having OpenStack in the mix here.
>
> HTH,
> Blair
>
> On 15 March 2016 at 23:00,  <openstack-hpc-request at lists.openstack.org>
> wrote:
> > Message: 1
> > Date: Tue, 15 Mar 2016 19:05:38 +0800
> > From: "me,apporc" <appleorchard2000 at gmail.com>
> > To: openstack-hpc at lists.openstack.org
> > Subject: [openstack-hpc] What's the state of openstack-hpc now?
> >
> > Hi, all
> >
> > I found this etherpad[1], which was created a long time ago and lists
> > some blueprints: support-heterogeneous-archs[2],
> > heterogeneous-instance-types[3] and
> > schedule-instances-on-heterogeneous-architectures[4].
> > But those blueprints have been obsolete since 2014, and some of their
> > patches were abandoned.
> > There is, however, a forked branch on github[5] / launchpad[6], which
> > has diverged far from nova/trunk and has not been updated since 2014
> > either.
> >
> > Does that mean those blueprints were simply abandoned in OpenStack, or
> > is something else going on?
> >
> > Besides, there is a CaaS[7] project called Senlin[8], whose wiki refers
> > to the word "HPC", but it does not seem really related. "Cluster" can
> > mean many things, and HPC is somewhat different.
> >
> > I cannot work out the status of GPU support in Nova. As for networking,
> > SR-IOV[9] seems OK. For storage, I don't know what the word "mi2" means
> > in the etherpad[1].
> >
> > According to what I have gathered above, it seems we cannot do HPC in
> > OpenStack right now. But there are some videos here[10], here[11] and
> > here[12]. Since we cannot get a GPU into a Nova instance, are those
> > deployments just building traditional HPC clusters without GPUs?
> >
> >
> > I need more information, thanks in advance.
> >
> > 1. https://etherpad.openstack.org/p/HVHsTqOQGc
> > 2. https://blueprints.launchpad.net/nova/+spec/support-heterogeneous-archs
> > 3. https://blueprints.launchpad.net/nova/+spec/heterogeneous-instance-types
> > 4. https://blueprints.launchpad.net/nova/+spec/schedule-instances-on-heterogeneous-architectures
> > 5. https://github.com/usc-isi/nova
> > 6. https://code.launchpad.net/~usc-isi/nova/hpc-trunk
> > 7. https://wiki.openstack.org/wiki/CaaS
> > 8. https://wiki.openstack.org/wiki/Senlin
> > 9. https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
> > 10. https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/openstack-in-hpc-operations-a-campus-perspective
> > 11. https://www.openstack.org/summit/tokyo-2015/videos/presentation/hpc-on-openstack-use-cases
> > 12. https://www.openstack.org/summit/tokyo-2015/videos/presentation/canonical-hpc-and-openstack-from-real-experience
> > --
> > Regards,
> > apporc
>
> --
> Cheers,
> ~Blairo
>



-- 
Regards,
apporc