[openstack-dev] [nova] Question about fixing missing soft deleted rows

Sean Dague sean at dague.net
Thu Sep 15 12:21:59 UTC 2016


On 09/14/2016 09:21 PM, Matt Riedemann wrote:
> I'm looking for other input on a question I have in this change:
> 
> https://review.openstack.org/#/c/345191/4/nova/db/sqlalchemy/api.py
> 
> We've had a few patches like this where we don't (soft) delete entries
> related to an instance when that instance record is (soft) deleted.
> These then cause the archive command to fail because of the referential
> constraint.
> 
> Then we go in and add a new entry in the instance_destroy method so we
> start (soft) deleting *new* things, but we don't clean up anything old.
> 
> In the change above, this is working around the fact that we might have
> lingering consoles entries for an instance that's being archived.
> 
> One suggestion I made was adding a database migration that soft deletes
> any console entries where the related instance is deleted (deleted !=
> 0). Is that a bad idea? It's not a schema migration, it's data cleanup
> so that archive works. We could do the same thing with a nova-manage
> command, but unlike the DB migrations, we have no way of knowing that
> someone has actually run it.
> 
> Another idea is doing it in the nova-manage db online_data_migrations
> command, which should be run on upgrade. If we landed something like
> that in, say, Ocata, then we could remove the TODO in the archive code
> in Pike.
> 
> Other thoughts?
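
For concreteness, the one-shot cleanup being proposed amounts to
something like the following. This is only a sketch: the connection URL
is a placeholder, the column definitions are pared down to what the
update touches, and it assumes the consoles.instance_uuid ->
instances.uuid relationship plus oslo.db's convention of setting
deleted to the row's id.

from datetime import datetime

from sqlalchemy import (Column, DateTime, Integer, MetaData, String,
                        Table, create_engine, select)

# Placeholder connection URL -- point this at the real nova database.
engine = create_engine('mysql+pymysql://nova:secret@localhost/nova')
meta = MetaData()

# Only the columns the cleanup touches; names follow the nova schema
# as I understand it (consoles.instance_uuid references instances.uuid).
instances = Table('instances', meta,
                  Column('uuid', String(36)),
                  Column('deleted', Integer))
consoles = Table('consoles', meta,
                 Column('id', Integer, primary_key=True),
                 Column('instance_uuid', String(36)),
                 Column('deleted', Integer),
                 Column('deleted_at', DateTime))

# UUIDs of instances that are already soft-deleted.
deleted_instances = select([instances.c.uuid]).where(
    instances.c.deleted != 0)

with engine.begin() as conn:
    conn.execute(
        consoles.update()
        .where(consoles.c.deleted == 0)
        .where(consoles.c.instance_uuid.in_(deleted_instances))
        # oslo.db's soft-delete convention sets deleted to the row's id.
        .values(deleted=consoles.c.id, deleted_at=datetime.utcnow())
    )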

Is there a reason that archive doesn't go hunt for these references
first and delete them? I kind of assumed it would handle all the cleanup
logic itself, including this sort of integrity issue.
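
Put differently, before archive moves a soft-deleted instance it could
first find and move (or drop) the child rows that still reference it.
A rough sketch of that idea, using hypothetical table objects and a
hypothetical shadow_consoles target rather than nova's actual archive
code:

from sqlalchemy import select


def archive_orphaned_consoles(conn, instances, consoles,
                              shadow_consoles, limit=1000):
    # Find consoles rows whose parent instance is already soft-deleted;
    # these are the rows that trip the referential constraint when the
    # instances table is archived.
    deleted_instances = select([instances.c.uuid]).where(
        instances.c.deleted != 0)
    orphans = conn.execute(
        select([consoles]).where(
            consoles.c.instance_uuid.in_(deleted_instances)
        ).limit(limit)
    ).fetchall()

    # Copy each orphan into the shadow table, then remove the original,
    # so the subsequent instances pass no longer hits the constraint.
    for row in orphans:
        conn.execute(shadow_consoles.insert().values(**dict(row)))
        conn.execute(consoles.delete().where(consoles.c.id == row['id']))
    return len(orphans)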

The data migration would still take time, and hold a table lock, even
though it's just deletes, so that feels like something we should avoid.

	-Sean

-- 
Sean Dague
http://dague.net


