[openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

Joshua Harlow harlowja at fastmail.com
Tue Dec 1 17:12:41 UTC 2015


So my takeaway is we need each project to have something like:

https://gist.github.com/harlowja/b4f0ddadbda1f92cc1e2

That could possibly live in oslo (I just threw it together), but the 
idea is that a thread/greenthread would run that 'run_forever' method in 
that code: it would periodically try to clean up locks by acquiring each 
one (with a timeout on acquire), then deleting the lock path that the 
lock is using, and then releasing the lock.
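
For illustration, here's a minimal stand-alone sketch of that cleaner loop 
(not the gist itself; the flock-based locking, the function name and the 
interval are assumptions on my part, and it uses a non-blocking try instead 
of a timed acquire to keep it short):

import fcntl
import os
import time


def clean_lock_dir(lock_dir, interval=30.0):
    """Periodically try to remove lock files nobody currently holds."""
    while True:
        for name in os.listdir(lock_dir):
            path = os.path.join(lock_dir, name)
            try:
                fd = os.open(path, os.O_RDWR)
            except OSError:
                continue  # already gone, nothing to clean
            try:
                # Non-blocking acquire; if someone holds it, leave it alone.
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except (IOError, OSError):
                os.close(fd)
                continue
            try:
                # This unlink is exactly the step that races with waiters
                # blocked in acquire() (see below).
                os.unlink(path)
            finally:
                fcntl.flock(fd, fcntl.LOCK_UN)
                os.close(fd)
        time.sleep(interval)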

The problem with that, as mentioned previously, is that even when we do 
acquire a lock (i.e. the cleaner gets it) and then delete the underlying 
file, that does *not* release other entities trying to acquire that same 
lock file (especially ones that blocked in their acquire() method before 
the deletion started). So either we do something like Sean stated (a 
busy-wait lock where deleting the file is safe; rough sketch below), or 
IMHO we get away from deleting lock files at all (and instead use 
byte-ranges inside a single lock file that is never deleted in the first 
place), or we get off file locks entirely (but yeah, that's a bigger 
issue...).
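
For reference, here's roughly the busy-wait style I read Sean's suggestion 
as (the class name and retry delay are mine; the point is that release() can 
safely unlink the file, since waiters spin on creating it rather than 
blocking on an already-open fd):

import errno
import os
import time


class BusyFileLock(object):
    """Spin on O_CREAT | O_EXCL instead of blocking in flock()."""

    def __init__(self, path, delay=0.01):
        self.path = path
        self.delay = delay

    def acquire(self):
        while True:
            try:
                # Succeeds only if we are the one who created the file.
                fd = os.open(self.path,
                             os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
                time.sleep(self.delay)

    def release(self):
        # Deleting the file is what hands the lock to the next spinner.
        os.unlink(self.path)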

With a single, never-deleted lock file like that, something like the 
following could be used to hand out locks from it:

class LockSharder(object):
    """Maps lock names onto a fixed pool of byte-range locks."""

    def __init__(self, offset_locks):
        self.offset_locks = offset_locks

    def get_lock(self, name):
        # The same name always hashes to the same underlying lock.
        return self.offset_locks[hash(name) % len(self.offset_locks)]
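
The offset_locks given to that sharder could be simple byte-range locks over 
the single file, along these lines (OffsetLock, the file path and the shard 
count are placeholders I made up for illustration):

import fcntl
import os


class OffsetLock(object):
    """Locks one byte range in a shared, never-deleted lock file."""

    def __init__(self, fd, offset):
        self.fd = fd
        self.offset = offset

    def acquire(self):
        # Blocks until the one-byte range at 'offset' is ours.
        fcntl.lockf(self.fd, fcntl.LOCK_EX, 1, self.offset)

    def release(self):
        fcntl.lockf(self.fd, fcntl.LOCK_UN, 1, self.offset)


fd = os.open('/var/lock/myproject.locks', os.O_RDWR | os.O_CREAT, 0o644)
sharder = LockSharder([OffsetLock(fd, i) for i in range(1024)])
lock = sharder.get_lock('delete-volume-1234')

Different names can of course land on the same shard, so unrelated work may 
occasionally serialize on the same byte range; more offsets means fewer 
collisions.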

So there are a few ideas...

Duncan Thomas wrote:
>
>
> On 1 December 2015 at 13:40, Sean Dague <sean at dague.net> wrote:
>
>
>     The current approach means locks block on their own, are processed in
>     the order they come in, but deletes aren't possible. The busy lock would
>     mean deletes were normal. Some extra cpu would be spent on waiting, and
>     lock order processing would be non-deterministic. It's trade-offs, but I
>     don't know anywhere that we are using locks as queues, so order
>     shouldn't matter. The cpu cost of the busy wait versus the lock file
>     cleanliness might be a trade worth making. It would also let you
>     actually see what's locked from the outside pretty easily.
>
>
> The cinder locks are very much used as queues in places, e.g. making
> delete wait until after an image operation finishes. Given that cinder
> can already bring a node into resource issues while doing lots of image
> operations concurrently (such as creating lots of bootable volumes at
> once) I'd be resistant to anything that makes it worse to solve a
> cosmetic issue.
>
>
> --
> Duncan Thomas
>


