Remove config drive of instances
Hi, we are on the OpenStack Queens version. We added the config drive option during the instance's build but never used it. How can we remove it? The "openstack server delete --no-config-drive" option isn't available in the older version we are using. What is the best way to eliminate it, hopefully without rebooting the instances? Thanks, IM
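(For reference, before changing anything it is worth confirming which instances actually carry a config drive; the `config_drive` field is reported by the standard client, though the exact printed value can vary by release:)

```shell
# Show the config_drive flag for one instance; a value of 'True' (or '1')
# means a config drive was requested at boot. <server-uuid> is a placeholder.
openstack server show <server-uuid> -f value -c config_drive
```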
On 27/02/2025 19:18, Pardhiv Karri wrote:
We do not support changing this during the lifetime of an instance. You could hack the DB, but that's the only option. I'm not sure why you want to remove it?
Hi Sean, thank you for the quick response. We recently moved the backend storage from Ceph to PureStorage. Now all the instance disks are on PureStorage, but the config drives are still in Ceph. For just a few single-digit GB we have to maintain the entire Ceph cluster, so we decided to get rid of these drives, and Ceph, completely. Thanks, IM
This is a historical weirdness I think we should consider cleaning up -- ceph is special-cased as a config drive store compared to other backends. IIRC this happened because back then we expected the config drives to mostly be on shared storage, but then ceph turned up and broke that assumption.

Specifically, https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5... onwards calls https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend... which imports the image and then cleans up the temporary local copy. For reference, this code was added in https://review.opendev.org/c/openstack/nova/+/123073 as a response to https://review.opendev.org/c/openstack/nova/+/214773 being blocked from merging and me feeling guilty about that.

The place I think this is especially weird right now is that if your instance has all its disks on cinder and you add a config drive, then suddenly live migration might need to do a streamed block migration just for the config drive, which doesn't seem great.

The obvious alternative is to have config drives in cinder. How would people feel about config drives being very small cinder volumes instead of a special case? Is there a better thing we could be doing here?

If we could agree on a scope for what we think we should do here, I'd be willing to consider if I have the time to take a swing at it.
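(For context on what is actually being stored: a config drive is just a tiny filesystem — ISO9660 or vfat, with the volume label `config-2` — containing metadata files that cloud-init reads. A minimal sketch of the content tree before packing; the directory layout is the real config-drive layout, while the UUID, name, and packing command are illustrative:)

```shell
# Build the content tree a config drive contains; cloud-init looks for the
# 'config-2' labelled filesystem and reads openstack/latest/ from it.
mkdir -p /tmp/cfgdrive/openstack/latest
cat > /tmp/cfgdrive/openstack/latest/meta_data.json <<'EOF'
{"uuid": "11111111-2222-3333-4444-555555555555", "name": "example-vm"}
EOF
# Nova then packs the tree into a small image, roughly:
#   genisoimage -o disk.config -r -J -V config-2 /tmp/cfgdrive
ls /tmp/cfgdrive/openstack/latest
# → meta_data.json
```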
Hi Michael, thank you for the detailed explanation. I don't want to move them; I want to remove them completely from our existing instances. Thanks, IM
-- *Pardhiv Karri* "Rise and Rise again until LAMBS become LIONS"
On 01/03/2025 06:09, Pardhiv Karri wrote:
Hi Michael,
Thank you for the detailed explanation. I don't want to move them but completely remove them from our existing instances.
The only way to do that is with DB surgery, but you could try to move them to local storage using shelve, as I suggested in my previous reply.
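(To make the "DB surgery" concrete — a hedged sketch only. This is unsupported; back up the nova database first, and verify the table/column names against your Queens schema before touching anything:)

```shell
# UNSUPPORTED, illustrative only: clear the config_drive flag in the nova
# database so nova stops attaching (and migrating) the drive.
mysql nova <<'SQL'
UPDATE instances
   SET config_drive = ''
 WHERE uuid = '<instance-uuid>'
   AND deleted = 0;
SQL
# The running domain keeps the device until it is next hard-rebooted or
# migrated, and the now-orphaned config-drive object in ceph must be
# deleted by hand.
```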
Thanks for the very detailed breakdown, Sean. We run an all-Cinder-storage OpenStack and have experienced some of the issues you've described with config drives; they're not insurmountable, just a little annoying. Being able to have some variability around how config drives are built or assigned definitely sounds great; it also feels more cloud-native, allowing for a greater separation between compute and storage. I did have one question:
On the api side i think we would have to tweak the BDMs to allow you to specify that the config drive should be a cinder volume and possibly what volume type and how to attach it (cdrom or block device).
You write about needing an API microversion that would allow config drives to be configured as a specific volume type (which all makes sense), but you also mention mounting a config drive as a "cd-rom" type. One very surprising thing we have not been able to find anywhere in the OpenStack documentation is mounting images or volumes in a way that lets the operating system on the instance treat them like installation media (e.g. mounting an ISO to install SQL Server on a Windows instance). The Cinder API https://docs.openstack.org/api-ref/block-storage/v3/ makes no mention of attaching a volume "as a cd-rom". It might have been a throwaway comment, but I'll be damned if there's been a way to attach cinder volumes (or even glance images) to instances and present them as "CD-ROM" this entire time and I just couldn't find it. Kind regards, Joel McLean
On 01/03/2025 03:58, Michael Still wrote:
This is a historical weirdness.
Yes and no. Even when using a boot-from-volume instance, all disks provided by the flavor other than the root are allocated using the storage backend specified by images_type. That is explicitly supported, because we don't really want swap and ephemeral scratch space to pay the overhead of networked storage. It's actually one of the recommended ways of running OpenShift on OpenStack: you use boot-from-volume VMs with a local ephemeral disk to hold etcd for lower IO latency.
I think we should consider cleaning up -- ceph is special cased as a config drive store compared to other backends.
Yes and no. It is for nova-provisioned storage; you are right that we import it into ceph as you noted below, but that is completely unrelated to boot-from-volume, where the cinder volume happens to come from ceph. The only way that code will run for BFV is if the host also happens to have images_type=rbd, unless I missed something.
IIRC this happened because back then we expected the config drives to mostly be on shared storage, but then ceph turned up and broke that assumption.
Specifically, https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5... onwards calls https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend... which imports the image and then cleans up the temporary local copy. For reference this code was added in https://review.opendev.org/c/openstack/nova/+/123073 as a response to https://review.opendev.org/c/openstack/nova/+/214773 being blocked from merging and me feeling guilty about that.
The place I think this is especially weird right now is that if your instance has all its disks on cinder, and you add a config drive, then suddenly live migration might need to do a streamed block migration just for the config drive, which doesn't seem great.
Nova will auto-detect and do the right thing for the config drive, even in the BFV case, and it will only migrate the local files.
The obvious alternative is to have config drives in cinder. How would people feel about config drives being very small cinder volumes instead of a special case? Is there a better thing we could be doing here?
I guess that's an option. I think cinder has a minimum volume size of 1G; to make this work properly we would need to annotate the volume to be used as a CD-ROM, or use vfat. It would also count against the volume and volume-space quotas, which the current config drive does not. I would be hesitant to make this a config option, and I don't think we could just make that change by default. I think we may need to consider whether this needs an API microversion to opt into when creating new VMs, or whether the existing BDMs are expressive enough. On the API side I think we would have to tweak the BDMs to allow you to specify that the config drive should be a cinder volume, and possibly what volume type and how to attach it (cdrom or block device).
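(On the "cdrom or block device" attach point: today's block_device_mapping_v2 already carries a `device_type` field. A hedged sketch — assuming a recent python-openstackclient, whose `--block-device` key/value spelling may differ by release; the flavor, image, and network names are placeholders:)

```shell
# Boot a server with a second disk built from an ISO image and presented
# to the guest as a CD-ROM via the BDM device_type field.
openstack server create \
  --flavor m1.small --image my-boot-image --network private \
  --block-device uuid=<iso-image-uuid>,source_type=image,destination_type=volume,volume_size=1,device_type=cdrom,boot_index=1 \
  vm-with-cdrom
```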
If we could agree on a scope for what we think we should do here I'd be willing to consider if I have the time to take a swing at it.
On Fri, 28 Feb 2025, 7:03 am Pardhiv Karri, <meher4india@gmail.com> wrote:
Hi Sean, Thank you for the quick response. We recently moved the backend storage from Ceph to PureStorage. Now all the instance disks are in PureStorage but the config drives are in Ceph. For just a few single digit GB we need to maintain the entire Ceph cluster. So decided to completely get rid of these drives and Ceph.
How exactly did you do this? As far as I'm aware, we only ever store the config drive in ceph if you're using images_type=rbd. Nova does not support changing the storage backend or migrating between storage backends, and we don't support PureStorage as an image backend. So either you converted from images_type=rbd to "local storage", i.e. images_type=raw or images_type=qcow2, and you put /var/lib/nova/instances on PureStorage, or you did a cinder volume migration to move the cinder volumes from ceph to PureStorage. The problem with the second path is that those instances should not have been using ceph for the config drive unless you also had images_type=rbd, since BFV uses the locally configured storage backend for storing the config drive.

One thing you could try is shelving the instance and unshelving it. If you have updated images_type to images_type=raw or images_type=qcow2, unshelve should generate a new config drive using a local file, removing the need for ceph. Unshelving across storage backends is technically also not supported, but it's not blocked, and in theory it will work to decouple the VMs from ceph. It will not clean up the ceph volumes, however.
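(The shelve path described above, as a hedged command sketch — unshelving across storage backends is not officially supported, so try it on a disposable instance first:)

```shell
# 1. On the compute nodes, point the local backend away from ceph in
#    nova.conf, then restart nova-compute:
#      [libvirt]
#      images_type = raw        # or qcow2
# 2. Shelve the instance; nova snapshots it to glance and frees the host:
openstack server shelve <server-uuid>
# Poll until the status reaches SHELVED_OFFLOADED:
openstack server show <server-uuid> -f value -c status
# 3. Unshelve; nova regenerates the local files, including a fresh config
#    drive on the new images_type, with no ceph dependency:
openstack server unshelve <server-uuid>
# 4. The old config-drive objects left in ceph are not cleaned up
#    automatically; remove them by hand (e.g. with rbd rm).
```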
participants (4)
- Joel McLean
- Michael Still
- Pardhiv Karri
- Sean Mooney