OpenStack HPC storage suggestion

Mahendra Paipuri mahendra.paipuri at cnrs.fr
Thu Mar 3 08:23:29 UTC 2022


Hello,

We are quite interested in this too. When we looked into existing 
solutions, we found that there has been some work on integrating 
Lustre into OpenStack [1], [2]. I also remember coming across some 
OpenInfra talks on developing a Manila backend driver for Lustre, 
though I am not sure whether that effort is still ongoing. Manila 
already provides a backend driver for GPFS [3] (unfortunately GPFS is 
not open source), which makes it straightforward to integrate into 
OpenStack. Manila supports GlusterFS and CephFS as well, but, if I am 
not mistaken, those backends do not have RDMA support.
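For anyone who wants to try the GPFS route, the backend is enabled 
through a driver stanza in manila.conf. The snippet below is only a 
rough sketch: the backend section name and all values are made up, 
and the exact option names should be double-checked against the 
driver documentation [3].

    [gpfs_backend]
    # hypothetical backend section; list it in enabled_share_backends
    share_backend_name = GPFS
    driver_handles_share_servers = False
    share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
    gpfs_share_export_ip = 192.0.2.10   # export IP of the GPFS node (example)
    gpfs_mount_point_base = /gpfs0      # where the GPFS file system is mounted
    gpfs_nfs_server_type = CES          # or KNFS, depending on the cluster

A matching share type then has to be created so that the scheduler 
places shares on this backend.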

This is pretty much what we found. We would be glad to explore more 
solutions if someone knows of any.

Cheers

-

Mahendra

[1] 
https://docs.google.com/presentation/d/1kGRzcdVQX95abei1bDVoRzxyC02i89_m5_sOfp8Aq6o/htmlpresent

[2] 
https://www.openstack.org/videos/summits/barcelona-2016/lustre-integration-for-hpc-on-openstack-at-cambridge-and-monash

[3] https://docs.openstack.org/manila/latest/admin/gpfs_driver.html

On 03/03/2022 04:47, Satish Patel wrote:
> Folks,
>
> I built a 30-node HPC environment on OpenStack using Mellanox 
> InfiniBand NICs for high-speed MPI messaging. So far everything 
> works. Now I am looking for an HPC parallel file system (PFS), 
> similar to Lustre, which I can mount on all HPC VMs to run MPI jobs.
>
> I was reading around on Google and saw some CERN videos in which 
> they use Ceph (CephFS) as the PFS. I also read that CephFS + Manila 
> is a good choice for an HPC-on-OpenStack design.
>
> Does anyone have experience with HPC storage for OpenStack? Please 
> advise or share your experience :) Thanks in advance.
>
> ~S
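
For completeness, the CephFS + Manila workflow mentioned above looks 
roughly as follows from the tenant side. This is only a sketch, 
assuming a CephFS (native protocol) Manila backend is already 
configured; the share name, share type and cephx user below are 
made-up examples.

    # create a 100 GB share using the native CephFS protocol
    manila create CephFS 100 --name hpc-scratch --share-type cephfs

    # grant a cephx user access and read back the generated key
    manila access-allow hpc-scratch cephx hpcuser
    manila access-list hpc-scratch

    # find the export path of the share
    manila share-export-location-list hpc-scratch

    # on every HPC VM: mount the export path with the cephx credentials
    mount -t ceph 203.0.113.5:6789:/volumes/_nogroup/<share-uuid> \
        /mnt/scratch -o name=hpcuser,secret=<cephx-key>

Note that this client traffic goes over TCP on the Ceph public 
network, which is related to the RDMA caveat mentioned above.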


