[openstack-dev] [Keystone] Best way to do something MySQL-specific?

Clint Byrum clint at fewbar.com
Tue Jul 9 03:06:22 UTC 2013


Excerpts from Adam Young's message of 2013-07-08 17:32:30 -0700:
> On 07/08/2013 04:35 PM, Clint Byrum wrote:
> > Excerpts from Adam Young's message of 2013-07-08 13:18:55 -0700:
> >> On 07/01/2013 01:35 PM, Clint Byrum wrote:
> >>> The way the new keystone-manage command "token_flush" works right now
> >>> is quite broken on MySQL by InnoDB's gap locking behavior:
> >>>
> >>> https://bugs.launchpad.net/1188378
> >>>
> >>> Presumably other SQL databases like PostgreSQL will have similar problems
> >>> with doing massive deletes, but I am less familiar with them.
> >>>
> >>> I am trying to solve this in keystone, and my first attempt is here:
> >>>
> >>> https://review.openstack.org/#/c/32044/
> >>>
> >>> However, MySQL does not support using "LIMIT" in a sub-query that
> >>> feeds an IN() clause, so that approach will not work. Likewise,
> >>> sqlalchemy does not support the MySQL-specific extension to DELETE
> >>> that allows it to have a LIMIT clause.
> >>>
> >>> Now, I can do some hacky things, like just deleting all of the expired
> >>> tokens from the oldest single second, but that could also potentially
> >>> be millions of tokens, and thus, millions of gaps to lock.
> >>>
> >>> So, there is no single approach that works for all databases, and we
> >>> have to have a special mode for MySQL.
> >>>
> >>> I was wondering if anybody has suggestions and/or examples of how to do
> >>> that with sqlalchemy.
> >>>
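For concreteness, here is a rough sketch of the kind of dialect-specific
branch I have in mind; the engine URL, the table name, and the raw SQL
are illustrative assumptions, not keystone's actual code:

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://keystone:secret@localhost/keystone')

    with engine.begin() as conn:
        if engine.dialect.name == 'mysql':
            # MySQL's DELETE ... LIMIT extension, which sqlalchemy's
            # delete() construct cannot express, so drop to raw SQL and
            # keep each batch (and its gap locks) small.
            conn.execute(text(
                'DELETE FROM token WHERE expires < UTC_TIMESTAMP() '
                'LIMIT 500'))
        else:
            # Everything else can take the DELETE in one shot.
            conn.execute(text(
                'DELETE FROM token WHERE expires < CURRENT_TIMESTAMP'))
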
> >> In general, if you have millions of roles, you need a real database.  I
> >> would not try to work through SQL Alchemy for this. Instead, you
> >> probably just want to make sure that the token_flush is run fairly
> >> regularly on your database.
> >>
> > I'm not sure I understand you.
> >
> > * I am asking about millions of tokens, not roles.
> Heh, I meant rows, and somehow typed roles.
> 
> > * I am asking about MySQL.. presumably a "real" database.
>   I have to admit I am a bit of a PostgreSQL bigot. I don't really
> consider MySQL a real database, although it has improved a lot over the
> years. I am not up to speed on "InnoDB's gap locking behavior" but it is
> not something I would expect to be a problem in PostgreSQL.
> 

You may want to update your snark! MySQL has its warts, which are pretty
easy to take shots at, but it has been a "real", ACID-compliant database
for well over a decade.

Please have a read of how InnoDB's isolation level handling works
[1]; you can compare it to Postgres [2]. InnoDB's default isolation
level, REPEATABLE READ, has locking behavior that is particularly
ugly for indexes like 'token.valid'. Put simply, you can't add more
valid tokens until you're done deleting any valid expired tokens (if
the DELETE mentions valid, which token_flush's does not).

This is because the index range for token.valid == 1 may end up locked
for the entire DELETE. The same can be true of expires, and even of the
primary key, though primary keys are typically not range-locked.
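
To make that concrete, a contrived sketch of the pile-up, assuming a bare
'token' table with an indexed 'expires' column (names illustrative, no
error handling):

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://keystone:secret@localhost/keystone')

    purge = engine.connect()
    tx = purge.begin()  # long-running purge transaction
    purge.execute(text('DELETE FROM token WHERE expires < UTC_TIMESTAMP()'))

    writer = engine.connect()
    # Under REPEATABLE READ this INSERT can block (or hit a lock wait
    # timeout) until tx commits, because the new row can land in an
    # index gap that the DELETE has next-key locked.
    writer.execute(text(
        "INSERT INTO token (id, expires) VALUES ('t1', UTC_TIMESTAMP())"))

    tx.commit()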

Yes, Postgres's behavior is much more desirable here; inserts barely
rate a mention in the Postgres description of transaction isolation. But
there are well-documented ways of working around this weirdness. I had
forgotten that one of them is to use READ UNCOMMITTED. That should avoid
the large range locks and work with MySQL, Postgres, and SQLite.
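
In sqlalchemy that can be set per-connection; a minimal sketch, again
with illustrative names (note that Postgres treats READ UNCOMMITTED as
READ COMMITTED, which is fine for this purpose):

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://keystone:secret@localhost/keystone')

    # execution_options(isolation_level=...) applies only to this
    # connection; READ UNCOMMITTED skips the next-key/gap locks that a
    # large DELETE would otherwise hold under REPEATABLE READ.
    conn = engine.connect().execution_options(
        isolation_level='READ UNCOMMITTED')
    conn.execute(text('DELETE FROM token WHERE expires < UTC_TIMESTAMP()'))
    conn.close()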

[1] http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_repeatable-read
[2] http://www.postgresql.org/docs/9.1/static/transaction-iso.html#MVCC-SERIALIZABILITY

> > * In the bug, I am suggesting that running token_flush once every
> >    second will be _a disaster_ on a busy site with MySQL because of
> >    the gap locking behavior in InnoDB. We need to delete a small number
> >    per transaction.
> Once every second would be strange indeed. I would think maybe once
> every five minutes or so. Schedule your cleanup in accordance with
> your deployment and usage.
> 
> Deleting a chunk of tokens in bulk would be preferable to client-side
> iteration; I can't see how that would not be the case.
> 

Whether it runs once every second or once every five minutes, the locks
pile up the longer the DELETE takes.

Using pt-archiver, the locks span just a few hundred rows at a time, and
we can also be "gentle" with the server, sleeping a bit between deletes
so that archiving/deleting does not eat into precious OLTP time.
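
If pulling in pt-archiver is not an option, the same idea is a few lines
of Python; a rough sketch (batch size and sleep are illustrative, not
tuned values):

    import time

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://keystone:secret@localhost/keystone')

    while True:
        # One short transaction per batch keeps each set of next-key
        # locks small and short-lived.
        with engine.begin() as conn:
            result = conn.execute(text(
                'DELETE FROM token WHERE expires < UTC_TIMESTAMP() '
                'LIMIT 500'))
        if result.rowcount == 0:
            break
        time.sleep(0.5)  # be "gentle": leave room for OLTP traffic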


