[nova] workarounds and operator experience around bug 1522307/1908133
Sean Mooney
smooney at redhat.com
Tue Jan 5 18:38:14 UTC 2021
On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote:
> Hi Nova folks and OpenStack operators!
>
> I have had some trouble recently where, while using the "images_type = rbd"
> libvirt option, my ceph cluster filled up without my noticing and froze
> all my nova services and instances.
>
> I started digging and investigating why, and how I could prevent or
> work around this issue, but I didn't find a reliable, clean way.
>
> I documented all my steps and investigation in bug 1908133 [0]. It has been
> marked as a duplicate of 1522307 [1] which has been around for quite some
> time, so I am wondering if any operators have been using nova + ceph in
> production with "images_type = rbd" config set and how you have been
> handling/working around the issue.
This is indeed a known issue, and the long-term plan to fix it was to track shared storage
as a sharing resource provider in placement. That never happened, so there is currently no mechanism
available to prevent this explicitly in nova.
The DiskFilter, which is no longer used, could prevent the boot of a VM that would fill the ceph pool, but
it could not protect against two concurrent requests together filling the pool.
Placement can protect against that, due to the transactional nature of allocations, which serialises
all resource usage; however, since each host reports the total size of the ceph pool as its local storage, that won't work out of the box.
As a quick hack you can set [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes)
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio
in each of your compute agents' configs.
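For example, with 20 compute nodes all reporting the same ceph pool (a made-up number, purely for illustration), each compute node's nova.conf would get:

[DEFAULT]
# 1 / 20 compute nodes sharing the one ceph pool
disk_allocation_ratio = 0.05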
That will prevent oversubscription, but it has other negative side effects:
mainly, you will fail to schedule instances that could otherwise boot whenever a host exceeds its 1/n share of usage,
so unless consumption is perfectly balanced across hosts this is not a good approach.
A better approach, though one that requires external scripting, is a cron job that updates the reserved
value of each compute node's DISK_GB inventory to the actual amount of storage already allocated from the pool.
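Something along these lines could run from cron. It is only a rough sketch under several assumptions that are mine, not part of the original report: the nova pool is called "vms", auth comes from the usual OS_* environment variables, the "ceph df" JSON field names match your ceph release, and every provider with a DISK_GB inventory is backed by the same pool. You would also want to verify that the compute agent's periodic inventory update does not simply overwrite the reserved value again.

#!/usr/bin/env python3
# Sketch: copy the real ceph pool usage into the "reserved" field of every
# DISK_GB inventory in placement so the scheduler stops handing out space
# that is already consumed. Assumptions: pool name, OS_* env vars for auth,
# "ceph df" field names, and that all DISK_GB inventories share the pool.
import json
import os
import subprocess

from keystoneauth1 import session
from keystoneauth1.identity import v3

# placement microversion 1.26 allows reserved to equal total
HEADERS = {'OpenStack-API-Version': 'placement 1.26',
           'Accept': 'application/json'}
PLACEMENT = {'service_type': 'placement'}


def placement_session():
    auth = v3.Password(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
        project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
    return session.Session(auth=auth)


def ceph_pool_used_gb(pool_name='vms'):
    # "ceph df --format json" reports per-pool usage; the exact field names
    # vary between ceph releases, so check yours before trusting this.
    report = json.loads(
        subprocess.check_output(['ceph', 'df', '--format', 'json']))
    for pool in report['pools']:
        if pool['name'] == pool_name:
            return pool['stats']['bytes_used'] // 1024 ** 3
    raise RuntimeError('pool %s not found in ceph df output' % pool_name)


def set_disk_reserved(sess, rp_uuid, reserved_gb):
    # read-modify-write the provider's inventory, keeping its generation
    url = '/resource_providers/%s/inventories' % rp_uuid
    body = sess.get(url, endpoint_filter=PLACEMENT, headers=HEADERS).json()
    disk = body['inventories'].get('DISK_GB')
    if not disk:
        return
    disk['reserved'] = min(reserved_gb, disk['total'])
    sess.put(url, endpoint_filter=PLACEMENT, headers=HEADERS,
             json={'inventories': body['inventories'],
                   'resource_provider_generation':
                       body['resource_provider_generation']})


def main():
    sess = placement_session()
    used_gb = ceph_pool_used_gb()
    rps = sess.get('/resource_providers', endpoint_filter=PLACEMENT,
                   headers=HEADERS).json()['resource_providers']
    for rp in rps:
        set_disk_reserved(sess, rp['uuid'], used_gb)


if __name__ == '__main__':
    main()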
The real fix, however, is for nova to track its shared usage correctly in placement as a sharing resource provider.
It's possible you might be able to do that via the provider.yaml file,
by overriding the local DISK_GB inventory to 0 on all compute nodes
and then creating a single sharing resource provider of DISK_GB that models the ceph pool.
https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html
Currently that file does not support adding providers to placement aggregates, so while it could be used to zero out the compute node
disk inventories and to create a sharing provider with the MISC_SHARES_VIA_AGGREGATE trait, it can't do the final step of mapping
which compute nodes can consume from the sharing provider via the aggregate, but you could do that step manually.
That all assumes that "sharing resource providers" actually work.
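If you do try that route, the sharing provider and the aggregate mapping can be created directly against the placement HTTP API. Below is a rough sketch of those calls, reusing the keystoneauth session helper from the previous script; the provider name, pool size and aggregate uuid are placeholders of mine, and it presumes sharing providers behave as documented. The compute node providers then also need to be put into the same aggregate (the same /aggregates PUT against each compute node uuid) and have their local DISK_GB inventory zeroed.

# Sketch: model the ceph pool as one sharing DISK_GB provider in placement.
# Uses placement_session() from the previous script; the name, size and
# aggregate uuid passed in are placeholders.
HEADERS = {'OpenStack-API-Version': 'placement 1.26',
           'Accept': 'application/json'}
PLACEMENT = {'service_type': 'placement'}


def create_sharing_provider(sess, name, total_gb, aggregate_uuid):
    # 1) create the provider itself
    rp = sess.post('/resource_providers', endpoint_filter=PLACEMENT,
                   headers=HEADERS, json={'name': name}).json()
    rp_uuid, gen = rp['uuid'], rp['generation']

    # 2) give it the whole pool as DISK_GB inventory
    resp = sess.put('/resource_providers/%s/inventories' % rp_uuid,
                    endpoint_filter=PLACEMENT, headers=HEADERS,
                    json={'resource_provider_generation': gen,
                          'inventories': {'DISK_GB': {'total': total_gb}}}).json()
    gen = resp['resource_provider_generation']

    # 3) mark it as a sharing provider
    resp = sess.put('/resource_providers/%s/traits' % rp_uuid,
                    endpoint_filter=PLACEMENT, headers=HEADERS,
                    json={'resource_provider_generation': gen,
                          'traits': ['MISC_SHARES_VIA_AGGREGATE']}).json()
    gen = resp['resource_provider_generation']

    # 4) map it into the aggregate the compute nodes will also be placed in
    sess.put('/resource_providers/%s/aggregates' % rp_uuid,
             endpoint_filter=PLACEMENT, headers=HEADERS,
             json={'resource_provider_generation': gen,
                   'aggregates': [aggregate_uuid]})
    return rp_uuid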
Basically, what it comes down to today is that you need to monitor the available resources yourself externally and ensure you never run out of space.
That sucks, but until we properly track things in placement there is nothing we can really do.
The two approaches I suggested above might work for a subset of use cases, but really this is a feature that needs native support in nova to be addressed properly.
>
> Thanks in advance!
>
> [0] https://bugs.launchpad.net/nova/+bug/1908133
> [1] https://bugs.launchpad.net/nova/+bug/1522307
>