[openstack-dev] [rally][users]: Synchronizing between multiple scenario instances.

Boris Pavlovic bpavlovic at mirantis.com
Fri Oct 24 10:02:56 UTC 2014


Behzad,

OK, for now we can do it that way.

What I am thinking is that this *logic* should be implemented at the
benchmark.runner level, because different load generators use different
approaches to generate load.
In the case of the "serial" runner, for example, locking is not even possible.

So what about adding 2 abstract methods (one for incrementing, another for
waiting) in
https://github.com/stackforge/rally/blob/master/rally/benchmark/runners/base.py#L140
and implementing them for the different runners?
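For example, the interface might look roughly like the sketch below. This is
only an illustration: the method names are made up, and the real base class at
the URL above has much more machinery.

import abc


class ScenarioRunner(object):
    # Sketch only -- hypothetical sync hooks on the base runner.
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def _sync_increment(self):
        """Mark that this scenario iteration has reached the sync point."""

    @abc.abstractmethod
    def _sync_wait(self):
        """Block until all concurrent iterations have reached the sync point."""

Each concrete runner would then implement these according to its own
load-generation model (for the "serial" runner the wait could simply be a
no-op).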


Best regards,
Boris Pavlovic


On Thu, Oct 23, 2014 at 12:05 PM, Behzad Dastur (bdastur) <bdastur at cisco.com
> wrote:

>  Hi Boris,
>
> I am still getting my feet wet with Rally, so some concepts are new to me,
> and I did not quite get your statement regarding the different load
> generators. I am presuming you are referring to the scenario runner and the
> different “types” of runs.
>
>
>
> What I was looking at is the runner, where we specify the type, times and
> concurrency. We could have an additional field (or fields) that would
> specify the synchronization property.
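>
> For example, the runner section of the task file might gain something like
> this (the "sync" key is purely hypothetical, just to show the shape):
>
> "runner": {
>     "type": "constant",
>     "times": 10,
>     "concurrency": 5,
>     "sync": {"barrier": true}
> }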
>
>
>
> Essentially, what I have found most useful in the cases where we run
> scenarios/tests in parallel is some sort of “barrier”: a certain point in
> the run that all the parallel tasks must reach before any of them continues.
>
>
>
> I am also considering cases where synchronization is needed within a
> single benchmark case, where the same benchmark scenario creates some VMs,
> performs some tasks, and then deletes the VMs.
>
>
>
> Just for simplicity as a POC, I tried something with shared memory
> (multiprocessing.Value), which looks something like this:
>
>
>
> import multiprocessing
> import time
>
>
> class Barrier(object):
>
>     def __init__(self, concurrency):
>         self.shmem = multiprocessing.Value('I', concurrency)
>         self.lock = multiprocessing.Lock()
>
>     def wait_at_barrier(self):
>         # Block until every process has checked in at the barrier.
>         while self.shmem.value > 0:
>             time.sleep(1)
>         return
>
>     def decrement_shm_concurrency_cnt(self):
>         # Each process calls this once when it reaches the sync point.
>         with self.lock:
>             self.shmem.value -= 1
>
> And from the scenario, it can be called as:
>
>
>
> scenario:
>
>     -- do some action --
>     barrobj.decrement_shm_concurrency_cnt()
>     barrobj.wait_at_barrier()
>     -- do some action --   <-- all processes will do this action at almost
>                                 the same time.
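>
> For completeness, here is a minimal standalone sketch of how this barrier
> could be driven with plain multiprocessing (the worker function and the
> concurrency value are made up for illustration, and the Barrier class above
> is assumed to be in the same module; this is not Rally code):
>
> import multiprocessing
>
>
> def worker(barrier):
>     # -- do some action --
>     barrier.decrement_shm_concurrency_cnt()
>     barrier.wait_at_barrier()
>     # -- all workers reach this point at almost the same time --
>
>
> if __name__ == '__main__':
>     concurrency = 4
>     barrier = Barrier(concurrency)
>     procs = [multiprocessing.Process(target=worker, args=(barrier,))
>              for _ in range(concurrency)]
>     for p in procs:
>         p.start()
>     for p in procs:
>         p.join()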
>
>
>
> I would be happy to discuss more to get a good common solution.
>
>
>
> regards,
>
> Behzad
>
>
>
>
>
>
>
>
>
>
>
> *From:* boris at pavlovic.ru [mailto:boris at pavlovic.ru] *On Behalf Of *Boris
> Pavlovic
> *Sent:* Tuesday, October 21, 2014 3:23 PM
> *To:* Behzad Dastur (bdastur)
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> Pradeep Chandrasekar (pradeech); John Wei-I Wu (weiwu)
> *Subject:* Re: [openstack-dev] [rally][users]: Synchronizing between
> multiple scenario instances.
>
>
>
> Behzad,
>
>
>
> Unfortunately, at this point there is no support for locking between
> scenarios.
>
>
>
>
>
> It will be quite tricky to implement, because we have different load
> generators, and we will need to find a common solution for all of them.
>
>
>
> If you have any ideas about how to implement it, I will be more than happy
> to get this upstream.
>
>
>
>
>
> One of the ways that I see is to have some kind of chain of benchmarks:
>
>
>
> 1) The first benchmark runs N VMs
>
> 2) The second benchmark does something with all those VMs
>
> 3) The third benchmark deletes all those VMs
>
>
>
> (where chain elements are atomic actions)
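>
> As a very rough sketch, a chained task might be described something like
> this (the format and the scenario names below are purely illustrative;
> nothing like this exists today):
>
> "chain": [
>     {"scenario": "create_n_vms", "args": {"count": 10}},
>     {"scenario": "do_something_with_vms"},
>     {"scenario": "delete_all_vms"}
> ]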
>
>
>
> Probably this will be the better long-term solution.
>
> The only thing that we need to figure out is how to store those results and
> how to display them.
>
>
>
>
>
> If you would like to help with this, let's start discussing it in some kind
> of Google doc.
>
>
>
> Thoughts?
>
>
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
>
>
> On Wed, Oct 22, 2014 at 2:13 AM, Behzad Dastur (bdastur) <
> bdastur at cisco.com> wrote:
>
> Does Rally provide any synchronization mechanism to synchronize between
> multiple scenario instances when running in parallel? Rally spawns multiple
> processes, with each process running the scenario. We need a way to
> synchronize between these so that they start a perf test operation at the
> same time.
>
>
>
>
>
> regards,
>
> Behzad
>
>
>
>
>

