[cinder] OpenStack lvm and Shared Storage

Sean Mooney smooney at redhat.com
Mon Jun 28 18:14:40 UTC 2021


On Mon, 2021-06-28 at 18:30 +0200, Gorka Eguileor wrote:
> On 26/06, pradyumna borge wrote:
> > Hi,
> > 
> > In a multi-node setup do we need to provide shared storage via Cinder
> > when setting up the second compute node?
> > 
> > In a typical multi-node setup we will have:
> > 1. First node as Controller node acting as a Compute node too. This
> >    will have Cinder *lvm*.
> > 2. Second node as Compute node.
> >    1. Will this node have any storage via lvm? If yes then how will
> >       the first compute node access storage on the second node?
> >    2. Likewise, how can the VMs on this second node access storage on
> >       the first compute node?
> > 
> > My other questions are:
> > 1. So if I spawn a VM on the second Compute node, where will the disks
> >    of the VM reside?
> > 2. Can I attach a disk on the first node to a VM on the second
> >    node?
> > 3. Do I have to configure NFS storage as shared storage for Cinder?
> > 4. Does Cinder take care of sharing the disks? (I don't think so)
> > 5. What are the steps to set up devstack for multi-node and multiple
> >    storage backends (NFS and LVM)?
> > 
> > ~ shree
> > 
> 
> Hi,
> 
> I believe there may be some misunderstandings on how OpenStack operates.
> 
> Some clarifications:
> 
> Nova:
> 
> - Can run VMs without Cinder volumes, using only ephemeral disks that
>   are stored on the compute node's local disk.
> - Can run with local ephemeral boot disks and attach external Cinder
>   volumes.
> - Can run with Cinder boot volumes.
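
to make the difference concrete, a rough sketch with made-up image, flavor
and volume names (the exact client flags can vary by release):

  # root disk is ephemeral, stored on the compute node's local disk
  openstack server create --flavor m1.small --image cirros vm1

  # boot from a new 10G cinder volume created from the image
  openstack server create --flavor m1.small --image cirros --boot-from-volume 10 vm2

  # attach an existing cinder volume to a running instance
  openstack server add volume vm1 my-data-volume

in the first case the root disk lives in the compute node's instances
directory (/var/lib/nova/instances by default); in the other two the data
lives wherever the cinder backend stores it.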
> 
> Cinder:
> 
> Cinder-volume usually connects to an external storage solution that is
> not running on the controller node itself, except when LVM is used.  In
> that case the volume is local to the node where cinder-volume is running,
> and the host exports the volume via iSCSI so that any compute node, or
> the cinder-backup service running on any controller node, can connect.
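
for reference, the LVM backend section in cinder.conf generally looks
something like this (the backend name and target helper are only examples):

  [lvmdriver-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  volume_backend_name = lvmdriver-1
  target_protocol = iscsi
  target_helper = lioadm

the volume_group is a plain LVM VG on that host, which is why the data is
tied to wherever cinder-volume runs.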
> 
> But since the volume data is only stored on that specific node, it means
> that when the node is offline no Cinder volume can be used, so it's
> usually only used for POCs.
Well, POCs or small-scale deployments like a 3-5 node cluster that might be
deployed at the edge or in labs.

What I have seen people suggest in the past was to use a DRBD volume
(https://linbit.com/drbd/) for the LVM PV, or similarly use a SAN/disk shelf
with redundant connections to multiple hosts to provide the storage for LVM,
and use Pacemaker to manage the cinder-volume process in active/backup mode.
But in practice you should really only use it if you are OK with having only
one copy of your data. You can hack around its lack of HA support, but it's
not the right direction.
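
For what it's worth, the active/backup part is typically done by letting
Pacemaker run a single copy of the cinder-volume service, roughly like this
(resource, unit and node names are only illustrative and vary by distro):

  # run exactly one instance of cinder-volume somewhere in the cluster
  pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume
  # keep it on the nodes that can actually see the shared/drbd PV
  pcs constraint location openstack-cinder-volume prefers controller-0

The DRBD device or SAN LUN itself also has to be failed over so the volume
group is only activated on the node where cinder-volume is running.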


Cinder does not explicitly guarantee that if the host running cinder-volume
goes down you will still be able to access your data, but in practice you
often can. As such, it's often implicitly assumed that Cinder storage is
somehow redundant.

e.g. with Ceph, if you have only one instance of cinder-volume and that host
goes down, the VMs will still be able to connect to the Ceph cluster
(assuming the cluster was not also on that host). It's only the
manageability that would be impacted, and that is mitigated by the fact that
you can have multiple instances of cinder-volume running that manage the
same Ceph cluster.
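
The usual way to do that is to give every cinder-volume service the same
identity for the backend so they can all manage the same volumes, roughly
like this (all names here are placeholders):

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  backend_host = rbd-backend

With backend_host set to the same value on every node running cinder-volume,
any of those services can handle requests for volumes on that Ceph cluster.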

Trust Gorka when they advise against the use of LVM in production. While
possible, it won't fulfil the expectation of consumers of Cinder who assume
their data is safe.
> 
> There are multiple ways of deploying devstack with multiple backends,
> but one way to do it is using the CINDER_ENABLED_BACKENDS variable in
> your local.conf file.
> 
> I never use NFS, but for example, to have 2 LVM backends:
> 
>   CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2
> 
> To enable the Ceph plugin and have both lvm and ceph, use:
> 
>   enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph
> 
>   CINDER_ENABLED_BACKENDS=lvm,ceph
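
putting that together, a minimal local.conf along those lines might look
roughly like the following (passwords and backend names are placeholders,
and the plugin repo URL shown is its current opendev.org location):

  [[local|localrc]]
  ADMIN_PASSWORD=secret
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD

  enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
  CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,ceph:ceph

for the multi-node part, the second compute node runs devstack with its own
local.conf that points SERVICE_HOST (and the database/rabbit/glance hosts)
at the controller and only enables the compute-side services; the devstack
multinode guide documents the exact service list.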
> 
> Cheers,
> Gorka.
> 
> 




