[openstack-dev] [neutron] Cross-server locking for neutron server

Jay Pipes jaypipes at gmail.com
Wed Jul 30 22:27:11 UTC 2014


It's not about distributed locking. It's about allowing multiple threads 
to make some sort of progress in the face of a contended piece of 
code. Obstruction-free and lock-free algorithms are preferred, IMO, over 
lock-based solutions that sit there and block while something else is 
doing something.

And yeah, I'm using the term lock-free here when, in fact, there is the 
possibility of a lock being held for a very short amount of time in the 
low-level storage engine code...

That said, I completely agree with you on using existing technologies 
and not reinventing wheels where appropriate. If I needed a distributed 
lock, I wouldn't reinvent one in Python ;)

Best
-jay

On 07/30/2014 03:16 PM, Joshua Harlow wrote:
> I'll just start by saying I'm not the expert on what should be the
> solution for neutron here (this is ultimately their developers'
> decision), but I just wanted to add my thoughts...
>
> Jay's solution looks/sounds like a spin lock with a test-and-set[1]
> (imho still a lock, no matter the makeup you put on it).
>
> It seems similar in concept to https://review.openstack.org/#/c/97059/,
> which I also saw recently.
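>
> (For reference, test-and-set is basically the following; a toy
> sketch, not anyone's actual code:
>
>     import threading
>     import time
>
>     flag = threading.Lock()  # stands in for the atomic flag
>
>     def spin_acquire():
>         # "Test and set" the flag in one atomic step; spin until we win.
>         while not flag.acquire(blocking=False):
>             time.sleep(0.01)  # back off a little, then try again
>
> The UPDATE-and-check-rowcount pattern is the same loop with the
> database row playing the part of the flag.)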
>
> I am starting to feel, though, that we should be figuring out how to
> use a proven-correct locking mechanism (kazoo -> zookeeper; tooz ->
> memcache, redis or zookeeper...) and avoid the premature optimization
> we seem to be falling into when creating our own types of spin locks,
> optimistic locks and so on... I'd much rather have correctness that
> *might* be a little slower than a solution that is hard to debug,
> hard to reason about, and requires retry magic numbers/hacks (for
> example, that prior keystone review has a magic 10-iteration limit;
> after all, who really knows what that magic number should be...),
> especially in cases where correctness really matters (I can't
> qualify whether this neutron situation is one of them).
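>
> To make that concrete, a tooz-based version might look roughly like
> this (just a sketch; the backend URL, member id and lock name are
> whatever a deployment chooses, and update_the_router() is a
> placeholder):
>
>     from tooz import coordination
>
>     # The backend is pluggable: zookeeper://, memcached://, redis://...
>     coord = coordination.get_coordinator('zookeeper://localhost:2181',
>                                          b'neutron-server-1')
>     coord.start()
>
>     lock = coord.get_lock(b'router-42')
>     with lock:
>         # Critical section: only one server holds this lock at a time.
>         update_the_router()  # placeholder for the real work
>
>     coord.stop()
>
> At least then, when the lock doesn't lock, it's zookeeper's bug and
> not ours...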
>
> Maybe this is the appropriate time to focus on correct (maybe slower,
> maybe requiring zookeeper or redis...) solutions instead of
> reinventing another one that we will regret in the future. I'd rather
> not put my operators through hell (they will be the ones left in the
> middle of the night trying to figure out why the lock didn't lock)
> when I can avoid it...
>
> Just my 2 cents,
>
> [1] http://en.wikipedia.org/wiki/Test-and-set
>
> -Josh
>
> On Jul 30, 2014, at 1:53 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>
>> On 07/30/2014 12:21 PM, Kevin Benton wrote:
>>> Maybe I misunderstood your approach then.
>>>
>>> I thought you were suggesting an approach where a node performs an
>>> "UPDATE record WHERE record = last_state_node_saw" query and then checks
>>> the number of affected rows. That's optimistic locking by every
>>> definition I've heard of it. It matches the following statement
>>> from the wiki article you linked to as well:
>>>
>>> "The latter situation (optimistic locking) is only appropriate
>>> when there is less chance of someone needing to access the record
>>> while it is locked; otherwise it cannot be certain that the
>>> update will succeed because the attempt to update the record will
>>> fail if another user updates the record first."
>>>
>>> Did I misinterpret how your approach works?
>>
>> The record is never "locked" in my approach, which is why I don't
>> like to think of it as optimistic locking. It's more like
>> "optimistic read and update with retry if certain conditions
>> continue to be met..." :)
>>
>> To be very precise, the record is never locked explicitly -- either
>> through the use of SELECT FOR UPDATE or some explicit file or
>> distributed lock. InnoDB won't even hold a lock on anything, as it
>> will simply add a new version to the row using its MVCC
>> (multi-version concurrency control) methods.
>>
>> The technique I am showing in the patch relies on the behaviour of
>> the SQL UPDATE statement with a WHERE clause that contains certain
>> columns and values from the original view of the record. The UPDATE
>> becomes a no-op when some other thread has updated the record between
>> the time the first thread read the record and the time it attempted
>> the update. The caller of UPDATE can detect this no-op by checking
>> the number of affected rows, and retry the UPDATE if certain
>> conditions remain kosher...
>>
>> So, there are actually no locks taken in the entire process, which
>> is why I object to the term optimistic locking :) I think where the
>> confusion has been is that the initial SELECT and the following
>> UPDATE statements are *not* done in the context of a single SQL
>> transaction...
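>>
>> To make that concrete, here is a minimal sketch of the pattern in
>> Python/SQLAlchemy (the table, columns and status values are made up
>> for illustration; the points that matter are the two separate
>> transactions and the rowcount check):
>>
>>     from sqlalchemy import create_engine, text
>>
>>     engine = create_engine('mysql://user:pass@host/neutron')  # hypothetical DSN
>>
>>     def compare_and_update(router_id, new_status, max_attempts=10):
>>         for _ in range(max_attempts):
>>             # Read the current view of the record in one transaction...
>>             with engine.begin() as conn:
>>                 old_status = conn.execute(
>>                     text("SELECT status FROM routers WHERE id = :id"),
>>                     {"id": router_id}).scalar()
>>             # ...then update in a second, separate transaction, with the
>>             # WHERE clause guarding against a concurrent change.
>>             with engine.begin() as conn:
>>                 updated = conn.execute(
>>                     text("UPDATE routers SET status = :new "
>>                          "WHERE id = :id AND status = :old"),
>>                     {"new": new_status, "id": router_id,
>>                      "old": old_status}).rowcount
>>             if updated == 1:
>>                 return True   # our view was still current; update applied
>>             # 0 rows affected: someone else got there first; re-read, retry
>>         return False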
>>
>> Best, -jay
>>
>>> On Wed, Jul 30, 2014 at 11:07 AM, Jay Pipes <jaypipes at gmail.com> wrote:
>>>
>>> On 07/30/2014 10:53 AM, Kevin Benton wrote:
>>>
>>> Using the UPDATE WHERE statement you described is referred to as
>>> optimistic locking. [1]
>>>
>>> https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html
>>>
>>>
>>> SQL != JBoss.
>>>
>>> It's not optimistic locking in the database world. In the
>>> database world, optimistic locking is an entirely separate
>>> animal:
>>>
>>> http://en.wikipedia.org/wiki/Lock_(database)
>>>
>>> And what I am describing is not optimistic lock concurrency in
>>> databases.
>>>
>>> -jay
>>>
>>> --
>>> Kevin Benton
>>>
>>>