Folks,
I built a 30-node HPC environment on OpenStack using Mellanox InfiniBand NICs for high-speed MPI messaging. So far everything works. Now I am looking for an HPC parallel file system (PFS) similar to Lustre that I can mount on all the HPC VMs to run MPI jobs.
I was searching around and saw some CERN videos where they use Ceph (CephFS) as a PFS. I also read that CephFS + Manila is a good choice for an HPC-on-OpenStack design.
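For what it's worth, here is a rough sketch of what mounting CephFS on each compute VM with the kernel client might look like. The monitor addresses, client name, and secret-file path are all placeholders, not values from any real setup:

```shell
# Sketch: mount CephFS on a compute VM via the kernel client.
# mon1..mon3, "hpcuser", and the secretfile path are placeholders
# you would replace with values from your Ceph/Manila deployment.

# Install the Ceph client tools (Ubuntu/Debian shown; adjust for your distro)
sudo apt-get install -y ceph-common

# Create the mount point and mount the file system
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
    -o name=hpcuser,secretfile=/etc/ceph/hpcuser.secret

# To persist across reboots, add a line like this to /etc/fstab:
# mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=hpcuser,secretfile=/etc/ceph/hpcuser.secret,_netdev  0 0
```

With Manila's CephFS native driver, the share export location gives you the monitor list and path to plug into the mount command.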
Does anyone have experience with HPC storage for OpenStack? Please advise or share your experience :) Thanks in advance.
~S