[nova][ops] 'Duplicate entry for primary key' problem running nova-manage db archive_deleted_rows

Massimo Sgaravatto massimo.sgaravatto at gmail.com
Wed May 8 10:35:30 UTC 2019


Hi

From time to time I move the entries related to deleted instances to the
shadow tables, using the command:

nova-manage db archive_deleted_rows

This is now failing [*] for the instance_metadata table because of a
'duplicate entry for the primary key' problem:

DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry '6' for key 'PRIMARY'") [SQL: u'INSERT INTO shadow_instance_metadata (created_at, updated_at, deleted_at, deleted, id, `key`, value, instance_uuid) SELECT instance_metadata.created_at, instance_metadata.updated_at, instance_metadata.deleted_at, instance_metadata.deleted, instance_metadata.id, instance_metadata.`key`, instance_metadata.value, instance_metadata.instance_uuid \nFROM instance_metadata \nWHERE instance_metadata.deleted != %(deleted_1)s ORDER BY instance_metadata.id \n LIMIT %(param_1)s'] [parameters: {u'param_1': 1, u'deleted_1': 0}]


Indeed:

mysql> SELECT instance_metadata.created_at, instance_metadata.updated_at,
instance_metadata.deleted_at, instance_metadata.deleted,
instance_metadata.id, instance_metadata.`key`, instance_metadata.value,
instance_metadata.instance_uuid FROM instance_metadata WHERE
instance_metadata.deleted != 0 ORDER BY instance_metadata.id limit 1;
+---------------------+------------+---------------------+---------+----+-------+-------+--------------------------------------+
| created_at          | updated_at | deleted_at          | deleted | id | key   | value | instance_uuid                        |
+---------------------+------------+---------------------+---------+----+-------+-------+--------------------------------------+
| 2018-09-20 07:40:56 | NULL       | 2018-09-20 07:54:26 |       6 |  6 | group | node  | a9000ff7-2298-454c-bf71-9e3c62ec0f3c |
+---------------------+------------+---------------------+---------+----+-------+-------+--------------------------------------+
1 row in set (0.00 sec)


But there is a 5-year-old entry (if I am not wrong, we were running Havana
at that time) in the shadow table with that id:

mysql> select * from shadow_instance_metadata where id='6';
+---------------------+------------+---------------------+----+----------+---------------------+--------------------------------------+---------+
| created_at          | updated_at | deleted_at          | id | key      | value               | instance_uuid                        | deleted |
+---------------------+------------+---------------------+----+----------+---------------------+--------------------------------------+---------+
| 2014-11-04 12:57:10 | NULL       | 2014-11-04 13:06:45 |  6 | director | microbosh-openstack | 5db5b17b-69f2-4f0a-bdd2-efe710268021 |       6 |
+---------------------+------------+---------------------+----+----------+---------------------+--------------------------------------+---------+
1 row in set (0.00 sec)

mysql>
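Before touching anything, I suppose a join between the two tables would tell
me whether this is the only colliding id in instance_metadata (just a sketch,
the aliases are mine):

mysql> SELECT i.id, i.deleted_at AS live_deleted_at, s.deleted_at AS shadow_deleted_at
    ->   FROM instance_metadata i
    ->   JOIN shadow_instance_metadata s ON s.id = i.id
    ->  WHERE i.deleted != 0;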


I wonder how that could have happened.

Can I simply remove that entry from the shadow table (I am not really
interested in keeping it), or is there a better (cleaner) way to fix the
problem?
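What I had in mind is nothing more than dropping the single conflicting row,
after taking a backup of the shadow table (e.g. with mysqldump); just a sketch:

mysql> DELETE FROM shadow_instance_metadata WHERE id = 6;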


This cloud is now running Ocata.

Thanks, Massimo



[*]
[root@cld-ctrl-01 ~]# nova-manage db archive_deleted_rows --max_rows 1000 --verbose
An error has occurred:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1617, in
main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 691, in
archive_deleted_rows
    run = db.archive_deleted_rows(max_rows)
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 2040, in
archive_deleted_rows
    return IMPL.archive_deleted_rows(max_rows=max_rows)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line
6564, in archive_deleted_rows
    tablename, max_rows=max_rows - total_rows_archived)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line
6513, in _archive_deleted_rows_for_table
    conn.execute(insert)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
914, in execute
    return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
1146, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
1337, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line
200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line
1139, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py",
line 450, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 165, in
execute
    result = self._query(query)
  File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 321, in
_query
    conn.query(q)
  File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 860,
in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line
1061, in _read_query_result
    result.read()
  File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line
1349, in read
    first_packet = self.connection._read_packet()
  File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line
1018, in _read_packet
    packet.check_error()
  File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 384,
in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in
raise_mysql_exception
    raise errorclass(errno, errval)
DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry '6'
for key 'PRIMARY'") [SQL: u'INSERT INTO shadow_instance_metadata
(created_at, updated_at, deleted_at, deleted, id, `key`, value,
instance_uuid) SELECT instance_metadata.created_at,
instance_metadata.updated_at, instance_metadata.deleted_at,
instance_metadata.deleted, instance_metadata.id, instance_metadata.`key`,
instance_metadata.value, instance_metadata.instance_uuid \nFROM
instance_metadata \nWHERE instance_metadata.deleted != %(deleted_1)s ORDER
BY instance_metadata.id \n LIMIT %(param_1)s'] [parameters: {u'param_1': 1,
u'deleted_1': 0}]

[root@cld-ctrl-01 ~]#

