[openstack-dev] [nova] Did you know archive_deleted_rows isn't super terrible anymore?
melwittt at gmail.com
Fri Sep 29 19:11:32 UTC 2017
On Fri, 29 Sep 2017 13:49:55 -0500, Matt Riedemann wrote:
> For a while now, actually.
> Someone was asking about when archive_deleted_rows would actually work,
> and the answer is, it should since at least mitaka:
> And starting in Ocata there is the --until-complete option which lets
> you run it continuously until it's done, rather than the weird manual
> batching from before:
> So this shouldn't be news, but it might be. So FYI.
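The "run until it's done" behavior quoted above can be sketched roughly as a loop that archives fixed-size batches until a pass moves no rows. This is only an illustration of the idea, not nova's actual implementation; archive_until_complete and archive_batch are made-up names here.

```python
# Hedged sketch of the concept behind --until-complete: archive in
# batches of max_rows until a pass moves zero rows. archive_batch is a
# stand-in for the real per-batch archiving step.
def archive_until_complete(archive_batch, max_rows=1000):
    total = 0
    while True:
        moved = archive_batch(max_rows)
        if moved == 0:
            return total
        total += moved

# Fake batch function: pretend 2500 soft-deleted rows are waiting.
remaining = [2500]

def fake_batch(max_rows):
    moved = min(remaining[0], max_rows)
    remaining[0] -= moved
    return moved

print(archive_until_complete(fake_batch))  # 2500, moved in batches of 1000
```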
True that. However, I want to give people a heads-up about something I
learned recently (today, actually). Problems with archive can arise if
you restart your database after archiving and then attempt another
archive. The InnoDB engine in MySQL keeps the AUTO_INCREMENT counter
only in memory, so after a restart it selects the maximum value in the
column and adds 1 to use as the next value.
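That "max + 1" re-initialization can be illustrated with SQLite as a stand-in for MySQL: SQLite computes the next INTEGER PRIMARY KEY as max(id) + 1 on every insert, which mirrors what InnoDB does once, at restart. The table here is a toy example, not nova's schema.

```python
import sqlite3

# Illustration only: SQLite reuses primary keys immediately, while
# InnoDB does so only after a server restart.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
for _ in range(10):
    cur.execute("INSERT INTO t (val) VALUES ('x')")   # ids 1..10
cur.execute("DELETE FROM t")                          # hard-delete them all
cur.execute("INSERT INTO t (val) VALUES ('y')")
new_id = cur.execute("SELECT id FROM t").fetchone()[0]
print(new_id)  # 1 -- the old primary keys get reused
```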
So if you had soft-deleted rows with primary keys 1 through 10 in the
main table and ran archive_deleted_rows, those rows would get inserted
into the shadow table and be hard-deleted from the main table. Then, if
you restarted the database, the primary key AUTO_INCREMENT counter would
be initialized to 1 again and the primary keys you had archived would be
reused. If those new rows with primary keys 1 through 10 were eventually
soft-deleted and then you ran archive_deleted_rows, the archive would
fail with something like, "DBDuplicateEntry:
(pymysql.err.IntegrityError) (1062, u"Duplicate entry '1' for key
'PRIMARY'")". The workaround would be to delete or otherwise move the
archived rows containing duplicate keys out of the shadow table.
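The whole failure mode can be reproduced in miniature, again with SQLite standing in for MySQL and toy instances/shadow_instances tables standing in for nova's schema (in SQLite the key counter resets as soon as the highest rows are deleted; in InnoDB the same reset happens at the next restart):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER)")
cur.execute("CREATE TABLE shadow_instances (id INTEGER PRIMARY KEY, deleted INTEGER)")

def soft_delete_ten_rows():
    for _ in range(10):
        cur.execute("INSERT INTO instances (deleted) VALUES (1)")

def archive_deleted_rows():
    # Copy soft-deleted rows into the shadow table, then hard-delete them.
    cur.execute(
        "INSERT INTO shadow_instances SELECT * FROM instances WHERE deleted = 1")
    cur.execute("DELETE FROM instances WHERE deleted = 1")

soft_delete_ten_rows()
archive_deleted_rows()       # shadow table now holds ids 1..10

soft_delete_ten_rows()       # main table was empty, so keys 1..10 are reused
error = None
try:
    archive_deleted_rows()   # collides with the keys already archived
except sqlite3.IntegrityError as exc:
    error = exc
print(error)                 # UNIQUE constraint failed: shadow_instances.id
```

Deleting or moving the conflicting rows out of shadow_instances, as described above, is what lets the next archive pass succeed.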