[nova][cinder] Providing ephemeral storage to instances - Cinder or Nova
Christian Rohmann
christian.rohmann at inovex.de
Fri Mar 24 15:28:47 UTC 2023
Hello OpenStack-discuss,
I am currently looking into how one can provide fast ephemeral storage
(backed by local NVMe drives) to instances.
There seem to be two approaches and I would love to double-check my
thoughts and assumptions.
1) *Via Nova* instance storage and the configurable "ephemeral" disk
size of a flavor
a) We currently use Ceph RBD as the image_type
(https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.images_type),
so instance images are stored in Ceph, not locally on disk. I believe
this setting will also cause ephemeral disks (destination_local) to be
placed on an RBD image and not under /var/lib/nova/instances?
Or is there a setting to use a different backend for the local block
devices providing "ephemeral" storage - i.e. RBD for the root disk but a
local LVM VG for ephemeral?
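For reference, this is roughly what we have in nova.conf today (pool and
user names are just examples from our setup):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <secret-uuid>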
b) Will an ephemeral disk also be migrated when a shut-off instance is
cold-migrated, just as with live migration?
Or will a new volume be created on the target host? I am asking because
I want to avoid syncing 500G or 1T of data when it's only "ephemeral"
and the instance will not expect any data on it at the next boot.
c) Is the ephemeral size of a flavor a fixed size or just an upper bound
for users? So if I set this to 1T, will such a flavor always provision a
block device of that size?
I suppose with the LVM backend it will be thin-provisioned anyway?
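Just to make this concrete, I mean a flavor created like this (name and
sizes are only examples):

    openstack flavor create --vcpus 4 --ram 8192 \
        --disk 20 --ephemeral 1024 m1.large.ephemeral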
2) *Via Cinder*, running cinder-volume on each compute node to provide a
volume type "ephemeral", using e.g. the LVM driver
a) While such volumes would not really be "ephemeral", since they are
not bound to the instance lifecycle, this would allow users to provision
ephemeral volumes just as they need them. I suppose I could use
backend-specific quotas
(https://docs.openstack.org/cinder/latest/cli/cli-cinder-quotas.html#view-block-storage-quotas)
to limit the number and size of such volumes?
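If I read those docs correctly, that would be something like the
following (project name and limits invented for illustration):

    openstack quota set --volumes 10 --gigabytes 2048 \
        --volume-type ephemeral my-project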
b) Do I then need to use the instance locality filter
(https://docs.openstack.org/cinder/latest/contributor/api/cinder.scheduler.filters.instance_locality_filter.html)
to make sure a volume ends up on the same host as the instance?
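From what I understand, that would mean enabling the filter in
cinder.conf and having users pass a scheduler hint at volume creation,
roughly like this:

    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

and then (volume name is just an example):

    openstack volume create --type ephemeral --size 500 \
        --hint local_to_instance=<instance-uuid> scratch-vol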
c) Since a volume will always be bound to a certain host, I suppose this
will cause side effects for instance scheduling?
With the volume remaining after an instance has been destroyed
(defeating the purpose of it being "ephemeral"), I suppose any other
instance attaching this volume will be scheduled onto this very machine?
Is there any way around this? Maybe a driver setting to have such
volumes "self-destroy" once they are no longer attached?
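The closest mechanism I am aware of is Nova's delete_on_termination
flag, which (since compute API microversion 2.79, if I am not mistaken)
can also be set when attaching an existing volume:

    openstack --os-compute-api-version 2.79 server add volume \
        --enable-delete-on-termination <server> <volume>

But that only covers instance deletion, not the volume simply being
detached.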
d) Same question as with Nova: What happens when an instance is
live-migrated?
Maybe others here have the same use case and can share their solution(s)?
Thanks and with regards
Christian