[openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation
Doug Hellmann
doug at doughellmann.com
Mon May 15 18:55:10 UTC 2017
Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +0000:
> Can someone comment on whether the following scenario has been discussed
> before or whether this is viewed by the community as a bug?
>
> While debugging a couple of different issues, our investigation has led
> us down the path of needing to look at whether the oslo concurrency lock
> utilities are working properly or not. What we found is that it is
> possible for a greenthread to continuously acquire a lock even though
> there are other threads queued up waiting for the lock.
>
> For instance, a greenthread acquires a lock, does some work, releases
> the lock, and then needs to repeat this process over several iterations.
> While the first greenthread holds the lock, other greenthreads come along and
> attempt to acquire the lock. Those subsequent greenthreads are added to the
> waiters list and suspended. The observed behavior is that as long as the
> first greenthread continues to run without ever yielding, it will always
> re-acquire the lock even before any of the waiters.
>
> To illustrate my point I have included a short program that shows the
> effect of multiple threads contending for a lock with and without
> voluntarily yielding. The code follows, but the output from both
> sample runs are included here first.
>
> In both examples the output is formatted as "worker=XXX: YYY" where XXX
> is the worker number, and YYY is the number of times that worker has run
> while holding the lock.
>
> In the first example, notice that each worker gets to finish all of its
> tasks before any subsequent worker gets to run even once.
>
> In the second example, notice that the workload is fair and each worker
> gets to hold the lock once before passing it on to the next in line.
>
> Example1 (without voluntarily yielding):
> =====
> worker=0: 1
> worker=0: 2
> worker=0: 3
> worker=0: 4
> worker=1: 1
> worker=1: 2
> worker=1: 3
> worker=1: 4
> worker=2: 1
> worker=2: 2
> worker=2: 3
> worker=2: 4
> worker=3: 1
> worker=3: 2
> worker=3: 3
> worker=3: 4
>
>
>
> Example2 (with voluntarily yielding):
> =====
> worker=0: 1
> worker=1: 1
> worker=2: 1
> worker=3: 1
> worker=0: 2
> worker=1: 2
> worker=2: 2
> worker=3: 2
> worker=0: 3
> worker=1: 3
> worker=2: 3
> worker=3: 3
> worker=0: 4
> worker=1: 4
> worker=2: 4
> worker=3: 4
>
>
>
> Code:
> =====
> import eventlet
> eventlet.monkey_patch
That's not calling monkey_patch -- there are no '()'. Is that a typo?
lock() claims to work differently when monkey_patch() has been
called. Without doing the monkey patching, I would expect the thread
to have to explicitly yield control.
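For comparison, here's a minimal sketch of what I assume was intended --
calling monkey_patch() (with the parentheses) before oslo_concurrency is
imported, so the locking primitives lockutils relies on are the
greenthread-aware ones:

import eventlet
eventlet.monkey_patch()  # note the '()'; without the call nothing is patched

# Import only after patching so lockutils sees the patched stdlib locks.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('foo')

@synchronized('bar')
def do_work(index):
    # With the stdlib patched, a greenthread blocking on this lock yields
    # to other greenthreads instead of blocking the whole process.
    pass

That at least removes any doubt about whether the eventlet-aware code path
is being exercised in the sample.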
Did you see the problem you describe in production code, or just in this
sample program?
Doug
>
> from oslo_concurrency import lockutils
>
> workers = {}
>
> synchronized = lockutils.synchronized_with_prefix('foo')
>
> @synchronized('bar')
> def do_work(index):
>     global workers
>     workers[index] = workers.get(index, 0) + 1
>     print "worker=%s: %s" % (index, workers[index])
>
>
> def worker(index, nb_jobs, sleep):
>     for x in xrange(0, nb_jobs):
>         do_work(index)
>         if sleep:
>             eventlet.greenthread.sleep(0)  # yield
>     return index
>
>
> # hold the lock before starting workers to make sure that all workers queue up
> # on the lock before any of them actually get to run.
> @synchronized('bar')
> def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
>     for i in xrange(0, nb_workers):
>         pool.spawn(worker, i, nb_jobs, sleep)
>
>
> print "Example1: sleep=False"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool)
> pool.waitall()
>
>
> print "Example2: sleep=True"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool, sleep=True)
> pool.waitall()
>
>
>
>
> Regards,
> Allain
>
>
> Allain Legacy, Software Developer, Wind River
> direct 613.270.2279 fax 613.492.7870 skype allain.legacy
> 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
>
>
>