[openstack-dev] [Neutron] Quota enforcement

Jay Pipes jaypipes at gmail.com
Wed Jun 17 15:08:25 UTC 2015

On 06/16/2015 11:58 PM, Carl Baldwin wrote:
> On Tue, Jun 16, 2015 at 5:17 PM, Kevin Benton <blak111 at gmail.com> wrote:
>> There seems to be confusion on what causes deadlocks. Can one of you explain
>> to me how an optimistic locking strategy (a.k.a. compare-and-swap) results
>> in deadlocks?
>> Take the following example where two workers want to update a record:
>> Worker1: "UPDATE items set value=newvalue1 where value=oldvalue"
>> Worker2: "UPDATE items set value=newvalue2 where value=oldvalue"
>> Then each worker checks the count of rows affected by the query. The one
>> that modified 1 gets to proceed, the one that modified 0 must retry.
> Here's my understanding:  In a Galera cluster, if the two are run in
> parallel on different masters, then the second one gets a write
> certification failure after believing that it had succeeded *and*
> reading that 1 row was modified.  The transaction -- when it was all
> prepared for commit -- is aborted because the server finds out from
> the other masters that it doesn't really work.  This failure is
> manifested as a deadlock error from the server that lost.  The code
> must catch this "deadlock" error and retry the entire thing.

Yes, Carl, you are correct.
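To make the pattern concrete, here is a minimal sketch of what Kevin and
Carl are describing -- the rowcount check plus the retry-on-deadlock
wrapper. This is illustrative only, not Neutron code; SQLite stands in
for the database, and sqlite3.OperationalError stands in for the
DBDeadlock error a Galera loser would actually raise at commit time:

```python
import sqlite3

# Toy table with one row to fight over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO items (id, value) VALUES (1, 'oldvalue')")
conn.commit()


def compare_and_swap(conn, new_value, expected):
    # The WHERE clause makes the update conditional on the expected
    # current value; rowcount tells the caller whether it won the race.
    cur = conn.execute(
        "UPDATE items SET value = ? WHERE id = 1 AND value = ?",
        (new_value, expected))
    conn.commit()
    return cur.rowcount == 1


def retry_on_deadlock(fn, attempts=3):
    # Against Galera, the losing writer may instead see a deadlock error
    # when the transaction is certified at commit time, so the whole
    # transaction is wrapped in a retry loop that re-runs everything.
    # (sqlite3.OperationalError is a stand-in for that error here.)
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError as exc:
            last_exc = exc
    raise last_exc


# Worker1 and Worker2 both try to swap away from 'oldvalue'.
won1 = compare_and_swap(conn, "newvalue1", "oldvalue")  # modifies 1 row
won2 = compare_and_swap(conn, "newvalue2", "oldvalue")  # modifies 0 rows
print(won1, won2)
```

The worker that gets False (zero rows modified) re-reads the record and
retries, exactly as in Kevin's example; the deadlock-retry wrapper covers
the Galera case Carl describes, where the loser finds out only at commit.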

> I just learned about Mike Bayer's DBFacade from this thread which will
> apparently make the db behave as an active/passive for writes which
> should clear this up.  This is new information to me.

The two things are actually unrelated. You can think of the DBFacade 
work -- specifically the @reader and @writer decorators -- as a slicker 
version of the "use_slave=True" keyword arguments that many DB API 
functions in Nova have, which send SQL SELECT statements that can 
tolerate some slave lag to a slave DB node.
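To be clear about what that routing looks like, here is a toy sketch. The
decorator names echo DBFacade's @reader/@writer, but the bodies and the
engine handles are illustrative assumptions, not the actual API:

```python
# Toy read/write routing sketch -- not the real DBFacade implementation.
MASTER = "master-engine"
SLAVE = "slave-engine"


def writer(fn):
    # Writes must always go to the master node.
    def wrapped(*args, **kwargs):
        return fn(MASTER, *args, **kwargs)
    return wrapped


def reader(fn):
    # Reads that can tolerate some slave lag go to a slave node,
    # much like Nova's use_slave=True keyword argument.
    def wrapped(*args, **kwargs):
        return fn(SLAVE, *args, **kwargs)
    return wrapped


@reader
def get_item(engine, item_id):
    return (engine, f"SELECT value FROM items WHERE id = {item_id}")


@writer
def set_item(engine, item_id, value):
    return (engine, f"UPDATE items SET value = '{value}' WHERE id = {item_id}")


print(get_item(1)[0])       # routed to the slave engine
print(set_item(1, "x")[0])  # routed to the master engine
```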

In Galera, however, there are no master and slave nodes. They are all 
"masters", because they all represent exactly the same data on disk, 
since Galera uses synchronous replication [1]. So the @writer and 
@reader decorators of DBFacade are not actually going to be useful for 
separating reads and writes to Galera nodes in the same way that that 
functionality is useful in traditional MySQL master/slave replication.


[1] Technically, it's not synchronous: true synchronous replication 
implies some sort of distributed locking to protect the order of writes, 
and Galera does not do that; it uses certification-based replication 
instead. But, for all intents and purposes, the behaviour of the 
replication is synchronous.
