[cinder][nova] Local storage in compute node

Sean Mooney smooney at redhat.com
Wed Aug 5 11:40:34 UTC 2020


On Wed, 2020-08-05 at 12:19 +0100, Lee Yarwood wrote:
> On 05-08-20 05:03:29, Eric K. Miller wrote:
> > In case this is the answer, I found that in nova.conf, under the
> > [libvirt] stanza, images_type can be set to "lvm".  This looks like it
> > may do the trick - using the compute node's LVM to provision and mount a
> > logical volume, for either persistent or ephemeral storage defined in
> > the flavor.
> > 
> > Can anyone validate that this is the right approach according to our
> > needs?
> 
> I'm not sure if it is given your initial requirements.
> 
> Do you need full host block devices to be provided to the instance?
> 
> The LVM imagebackend will just provision LVs on top of the provided VG
> so there's no direct mapping to a full host block device with this
> approach.
> 
> That said there's no real alternative available at the moment.
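for reference, the nova.conf side of the lvm imagebackend approach is
just a couple of options under [libvirt]. a rough sketch (the VG name
here is an example, substitute the volume group you created on the
compute node):

    [libvirt]
    images_type = lvm
    # name of an existing LVM volume group on the compute node;
    # nova will carve one LV per instance disk out of it
    images_volume_group = nova-local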
well one alternative to nova providing local LVM storage is to use
the cinder lvm driver but install it on all compute nodes, then
use the cinder InstanceLocalityFilter to ensure the volume is allocated from the host
the vm is on.
https://docs.openstack.org/cinder/latest/configuration/block-storage/scheduler-filters.html#instancelocalityfilter
one drawback to this is that if the vm is moved i think you would need to also migrate the cinder volume
separately afterwards.
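a rough sketch of the cinder side of that, assuming a VG called
cinder-volumes exists on each compute node (backend and VG names are
examples):

    # cinder.conf on each compute node
    [DEFAULT]
    enabled_backends = lvm-local
    # append InstanceLocalityFilter to the default scheduler filters
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

    [lvm-local]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    target_helper = lioadm
    volume_backend_name = lvm-local

you then pass the instance uuid as a scheduler hint when creating the
volume so the filter can place it on the same host, e.g.

    openstack volume create --size 10 \
      --hint local_to_instance=<instance-uuid> my-volume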

> 
> > Also, I have read about the LVM device filters - which is important to
> > avoid the host's LVM from seeing the guest's volumes, in case anyone
> > else finds this message.
> 
>  
> Yeah that's a common pitfall when using LVM based ephemeral disks that
> contain additional LVM PVs/VGs/LVs etc. You need to ensure that the host
> is configured to not scan these LVs in order for their PVs/VGs/LVs etc
> to remain hidden from the host:
> 
> 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_filters
>  
> 
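the usual fix is a global_filter in /etc/lvm/lvm.conf on the host that
accepts only the host's own PVs and rejects everything else. a minimal
sketch (the device path is an example, adjust for your host):

    devices {
        # accept only the host's own PV; reject all other devices,
        # including the LVs that back guest disks
        global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
    }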




More information about the openstack-discuss mailing list