Openstack HPC storage suggestion

Satish Patel satish.txt at gmail.com
Thu Mar 3 14:58:43 UTC 2022


Thank you, folks,

Your inputs are very valuable. Yes, StackHPC is a great resource, and
whenever I search for anything about HPC I end up on their website :)
Thank you, Mark, for the great work. I would certainly like to come and
listen in on the SIG meetup.

Eight years ago I was part of a large HPC project where we had 2000
physical compute nodes coupled with an InfiniBand fabric (no OpenStack),
using the LSF scheduler with Lustre and ZFS for the backend storage.

This is what I am trying to do with HPC on OpenStack. We built a
kolla-ansible deployment with 40 compute nodes: 20 IB nodes attached to
the InfiniBand fabric, 10 GPU nodes, and the remaining high-memory nodes.
My clients are students, so mostly they are looking for virtualized HPC
where they can reserve resources and build their own clusters using
Slurm, HTCondor, Spark, or whatever they want. Everything is virtualized,
so the Mellanox InfiniBand NICs are configured for SR-IOV passthrough and
exposed directly to the VMs, which gives us a VM-to-VM InfiniBand network
for high-speed messaging.
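
In case it is useful to anyone, requesting the SR-IOV port for a VM boils
down to roughly the following. This is an untested sketch with openstacksdk;
the cloud, network, flavor, and image names are made up, so adjust them to
your environment:

    import openstack

    conn = openstack.connect(cloud="hpc")  # cloud name from clouds.yaml

    # A port on the InfiniBand provider network with vnic_type=direct, so
    # nova schedules the VM onto a host with a free SR-IOV virtual function.
    ib_net = conn.network.find_network("ib-net")
    port = conn.network.create_port(
        network_id=ib_net.id,
        name="hpc-vm1-ib0",
        binding_vnic_type="direct",
    )

    # Boot the VM with a regular tenant network plus the SR-IOV port.
    server = conn.compute.create_server(
        name="hpc-vm1",
        flavor_id=conn.compute.find_flavor("hpc.large").id,
        image_id=conn.image.find_image("rocky-9").id,
        networks=[{"uuid": conn.network.find_network("tenant-net").id},
                  {"port": port.id}],
    )
    conn.compute.wait_for_server(server)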

I have written all of this up on my blog here:
https://satishdotpatel.github.io/HPC-on-openstack/

Now I need to choose the storage, and that is hard to do at present
because I don't know which applications the students are going to run.
Let's say I start with CephFS and folks are not happy; then I have to
switch the storage to Lustre or maybe something else. Another question:
what do people use for scratch space in HPC? (I assume it will be local
disk because of the high I/O rates.)
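
For the scratch part, one way to hand local disk to the VMs in a setup like
ours would be an ephemeral disk defined on the flavor. A rough, untested
sketch with openstacksdk; the flavor name and sizes are made up:

    import openstack

    conn = openstack.connect(cloud="hpc")  # placeholder cloud name

    # Flavor with a large ephemeral disk that lands on the hypervisor's
    # local storage (with nova's default file-backed ephemeral disks), so
    # it shows up inside the VM as a second, fast scratch disk.
    flavor = conn.compute.create_flavor(
        name="hpc.scratch.16c64g",
        vcpus=16,
        ram=65536,      # MiB
        disk=40,        # GiB root disk
        ephemeral=800,  # GiB local scratch
    )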



On Thu, Mar 3, 2022 at 4:33 AM Manuel Holtgrewe <zyklenfrei at gmail.com>
wrote:

> Hi,
>
> I guess it really depends on what HPC means to you ;-)
>
> Do your users schedule nova instances? Can you actually do Infiniband
> RDMA between nova VMs? Or are your users scheduling ironic instances
> via nova?
>
> We have an openstack setup based on kayobe/kolla where we:
>
> - Deploy 8 hyperconverged servers running OpenStack nova/libvirt
> compute plus Ceph storage (mostly for block storage; CephFS comes in
> handy for shared file systems, e.g., for shared state in Slurm).
> - Deploy 300+ bare metal compute nodes via ironic.
> - Use Slurm for the job scheduler.
> - Set up everything on top of the bare OS with Ansible.
> - Have login nodes, the Slurm scheduler, etc. run in nova VMs.
> - Only have Ethernet interconnect (100/25GbE for central switches and
> recent servers and 40/10GbE for older servers).
>
> So we are using OpenStack nova+libvirt as a unified way of deploying
> virtual and physical machines and then install them as we would normal
> servers. That's a bit non-cloudy (as far as I have learned) but works
> really well for us.
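>
> As a very rough, untested sketch (the flavor, image, and network names
> below are made up), booting a VM and a bare metal node looks the same
> through the API; only the flavor and image differ:
>
> import openstack
>
> conn = openstack.connect(cloud="mycloud")  # cloud name from clouds.yaml
>
> def boot(name, flavor_name, image_name, network_name):
>     # The same call provisions a libvirt VM or an ironic bare metal
>     # node, depending on which flavor is requested.
>     flavor = conn.compute.find_flavor(flavor_name)
>     image = conn.image.find_image(image_name)
>     network = conn.network.find_network(network_name)
>     server = conn.compute.create_server(
>         name=name,
>         flavor_id=flavor.id,
>         image_id=image.id,
>         networks=[{"uuid": network.id}],
>     )
>     return conn.compute.wait_for_server(server)
>
> boot("login-1", "vm.4c16g", "rocky-9", "cluster-net")       # libvirt VM
> boot("node-042", "bm.compute", "rocky-9-bm", "cluster-net") # bare metal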
>
> Now to your HPC storage question... which I hope to answer a bit indirectly.
>
> The main advantage of using manila with CephFS (that I can see) is
> that you get the openstack goodies of API and Horizon clients for
> managing shares. I guess this is mostly useful if you want to have
> cloud features for your HPC, such as users allocating storage in
> self-service for their nova/ironic machines. We come from a classic
> HPC setting where the partitioning of the system is not done by users
> creating multiple nodes/clusters in self-service; rather, the
> administrators provide a bare metal cluster with a Slurm scheduler.
> Users log in to head nodes and submit jobs to Slurm, which runs them
> on the compute nodes. Thus Slurm does the managing of resources, and
> users can allocate anything from a single core up to the whole
> cluster. So in our use case there would not be a major advantage to
> using manila for our storage, as we would primarily have one export
> that gives access to the whole storage ;-).
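>
> That said, if you do want the self-service route, creating a CephFS
> share through manila boils down to roughly the following (an untested
> sketch with python-manilaclient; the credentials and the share type
> name are placeholders):
>
> from keystoneauth1 import loading, session
> from manilaclient import client
>
> # Authenticate against keystone; all credentials here are placeholders.
> loader = loading.get_plugin_loader("password")
> auth = loader.load_from_options(
>     auth_url="https://keystone.example.org:5000/v3",
>     username="student",
>     password="secret",
>     project_name="hpc-lab",
>     user_domain_name="Default",
>     project_domain_name="Default",
> )
> manila = client.Client("2", session=session.Session(auth=auth))
>
> # Create a 100 GiB CephFS share and grant a cephx user read/write access.
> share = manila.shares.create(
>     share_proto="CEPHFS",
>     size=100,
>     name="scratch",
>     share_type="cephfs-type",  # hypothetical share type
> )
> manila.shares.allow(share.id, access_type="cephx", access="student1",
>                     access_level="rw")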
>
> We currently have an old GPFS system that we mount on all nodes via
> Ansible. We are migrating to an additional, dedicated NVMe-based Ceph
> cluster (not hyperconverged with our compute) that we will also mount
> via Ansible. As we essentially only have a single share on this
> storage, managing it with manila would be more trouble than it is
> worth. The much more important part will be setting up Ceph
> appropriately and tuning it to perform well (e.g., using the IO500
> benchmark as the croit people demonstrate here [1]).
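>
> What our Ansible mount task boils down to is roughly the equivalent of
> the following (an untested sketch; the monitor addresses, the cephx
> user, and the paths are placeholders):
>
> import subprocess
>
> # Kernel CephFS mount, i.e., what the Ansible mount module ends up doing.
> subprocess.run(
>     [
>         "mount", "-t", "ceph",
>         "mon1.example.org:6789,mon2.example.org:6789:/",  # MONs + path
>         "/data/cephfs",                                   # mount point
>         "-o", "name=hpc,secretfile=/etc/ceph/ceph.client.hpc.secret",
>     ],
>     check=True,
> )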
>
> I guess that does not really answer your question, but I hope that it
> gives a useful perspective to you and maybe others.
>
> You can look at the work of the wonderful people of StackHPC, who
> provide commercial services around OpenStack/Kayobe/Kolla/Ironic for
> HPC setups. There are a couple of interesting videos involving their
> staff, and they have very useful information on their web site as
> well. And as Mark Goddard just wrote, you can have a look at the
> Scientific SIG, where they are involved as well. (I'm not in any way
> affiliated with StackHPC, I just really like their work.)
>
> Best wishes,
> Manuel
>
> [1] https://croit.io/blog/ceph-performance-test-and-optimization
>
> On Thu, Mar 3, 2022 at 9:25 AM Mahendra Paipuri
> <mahendra.paipuri at cnrs.fr> wrote:
> >
> > Hello,
> >
> > We are quite interested in this too. When we were looking into existing
> > solutions, we found there has been some work on integrating Lustre into
> > OpenStack [1], [2]. I remember coming across some OpenInfra talks on
> > developing a manila backend driver for Lustre. I am not quite sure if
> > that project is still ongoing. Manila already provides a backend driver
> > for GPFS [3] (unfortunately GPFS is not open source) to readily integrate
> > it into OpenStack. Manila supports GlusterFS and CephFS as well, but they
> > do not have RDMA support (if I am not wrong).
> >
> > This is pretty much what we found. We would be glad to explore more
> > solutions if someone knows of any.
> >
> > Cheers
> >
> > -
> >
> > Mahendra
> >
> > [1]
> >
> https://docs.google.com/presentation/d/1kGRzcdVQX95abei1bDVoRzxyC02i89_m5_sOfp8Aq6o/htmlpresent
> >
> > [2]
> >
> https://www.openstack.org/videos/summits/barcelona-2016/lustre-integration-for-hpc-on-openstack-at-cambridge-and-monash
> >
> > [3] https://docs.openstack.org/manila/latest/admin/gpfs_driver.html
> >
> > On 03/03/2022 04:47, Satish Patel wrote:
> > > Folks,
> > >
> > > I built a 30-node HPC environment on OpenStack using Mellanox
> > > InfiniBand NICs for high-speed MPI messaging. So far everything works.
> > > Now I am looking for an HPC PFS (parallel file system) similar to
> > > Lustre which I can mount on all the HPC VMs to run MPI jobs.
> > >
> > > I was reading around on Google and saw some CERN videos where they use
> > > Ceph (CephFS) as the PFS. I also read that CephFS + Manila is a good
> > > choice for an HPC-on-OpenStack design.
> > >
> > > Does anyone have any experience with HPC storage for OpenStack? Please
> > > advise or share your experience :)  Thanks in advance.
> > >
> > > ~S
> >
>
>