[nova] Clean up "building" instances

Sylvain Bauza sylvain.bauza at gmail.com
Mon Feb 20 12:05:10 UTC 2023


On Mon, Feb 20, 2023 at 11:33, Eugen Block <eblock at nde.ag> wrote:

> Hi,
>
> we had a network issue two weeks ago in an HA Victoria cloud, which
> resulted in a couple of stale resources (in pending state). Most of
> them I could clean up relatively easily, but two instances are left
> in "building" state and not yet fully in the nova database, so I
> can't just remove them via 'openstack server delete <UUID>'. I've
> been looking through the various nova databases where traces have
> been left to get an impression of where I could intervene (although
> I don't like manipulating the database).
> The VMs are two amphora instances:
>
>
> control01:~ # openstack server list --project service | grep -v ACTIVE
>
> +--------------------------------------+----------------------------------------------+--------+----------+---------------------------+---------+
> | ID                                   | Name                                         | Status | Networks | Image                     | Flavor  |
> +--------------------------------------+----------------------------------------------+--------+----------+---------------------------+---------+
> | 0453a7e5-e4f9-419b-ad71-d837a20ef6bb | amphora-0ee32901-0c59-4752-8253-35b66da176ea | BUILD  |          | amphora-x64-haproxy_1.0.0 | amphora |
> | dc8cdc3a-f6b2-469b-af6f-ba2aa130ea9b | amphora-4990a47b-fe8a-431a-90ec-5ac2368a5251 | BUILD  |          | amphora-x64-haproxy_1.0.0 | amphora |
> +--------------------------------------+----------------------------------------------+--------+----------+---------------------------+---------+
>
> The database tables referring to the UUID
> 0453a7e5-e4f9-419b-ad71-d837a20ef6bb are these:
>
> nova_cell0/instance_id_mappings.ibd
> nova_cell0/instance_info_caches.ibd
> nova_cell0/instance_extra.ibd
> nova_cell0/instances.ibd
> nova_cell0/instance_system_metadata.ibd
> octavia/amphora.ibd
> nova_api/instance_mappings.ibd
> nova_api/request_specs.ibd
>
> My first approach would be to update the nova_cell0.instances table
> and edit the fields 'vm_state' and 'task_state', or even remove the
> entire row. But I don't know what implications this would have on the
> other tables, so I'd like to know how you would recommend dealing
> with these orphans. Any comment is appreciated!
>

Just a simple thing: reset their states.
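For reference, a minimal sketch of what "reset their states" could look like through the regular admin CLI, rather than editing the database directly. This assumes the instances are visible to the API (as the server list above suggests) and that you have admin credentials loaded; the UUID is the first one from the listing, and the `command -v` guard is only there to keep the sketch self-contained:

```shell
# UUID of one stuck amphora instance, taken from the listing above.
UUID=0453a7e5-e4f9-419b-ad71-d837a20ef6bb

if command -v openstack >/dev/null 2>&1; then
    # Force the instance out of BUILD into a deletable state
    # (admin-only operation), then delete it via the normal API:
    openstack server set --state error "$UUID"
    openstack server delete "$UUID"
else
    echo "openstack CLI not available; commands shown for reference only"
fi
```

Going through the API this way lets nova clean up the related rows (instance_extra, instance_info_caches, request_specs, etc.) itself, which is exactly what manual edits to nova_cell0.instances would risk leaving inconsistent.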


> Thanks,
> Eugen
>
>
>