[openstack-dev] [oslo] Asyncio and oslo.messaging

Yuriy Taraday yorik.sar at gmail.com
Wed Jul 9 10:36:00 UTC 2014


On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow <harlowja at yahoo-inc.com>
wrote:

> I think Clint's response was likely better than what I can write here, but
> I'll add on a few things:
>
>
> >How do you write such code using taskflow?
> >
> >  @asyncio.coroutine
> >  def foo(self):
> >      result = yield from some_async_op(...)
> >      return do_stuff(result)
>
> The idea (at a very high level) is that users don't write this;
>
> What users do write is a workflow, maybe the following (pseudocode):
>
> # Define the pieces of your workflow.
>
> TaskA():
>   def execute():
>       # Do whatever some_async_op did here.
>
>   def revert():
>       # If execute had any side-effects undo them here.
>
> TaskFoo():
>    ...
>
> # Compose them together
>
> flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
> TaskFoo("my-foo"))
>

I wouldn't consider this composition very user-friendly.
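
Spelled out with what I understand to be taskflow's actual modules (a rough
sketch from my reading of the taskflow docs, so treat the exact class and
module names as illustrative), the quoted pseudocode becomes roughly:

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class TaskA(task.Task):
    def execute(self):
        # Do whatever some_async_op did here; the return value is recorded
        # by the engine and can be fed to later tasks.
        return 'a-result'

    def revert(self, **kwargs):
        # If execute had any side effects, undo them here.
        pass


class TaskFoo(task.Task):
    def execute(self):
        return 'foo-result'


# Compose the tasks and hand the flow to an engine to run.
flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
                                        TaskFoo("my-foo"))
engines.run(flow)

Every step has to become its own Task subclass before anything can be
composed, even for two sequential operations.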


> # Submit the workflow to an engine, let the engine do the work to execute
> it (and transfer any state between tasks as needed).
>
> The idea here is that when things like this are declaratively specified
> the only thing that matters is that the engine respects that declaration;
> not whether it uses asyncio, eventlet, pigeons, threads, remote
> workers[1]. It also adds some things that are not (imho) possible with
> co-routines (in part since they are at such a low level) like stopping the
> engine after 'my-task-a' runs and shutting off the software, upgrading it,
> restarting it and then picking back up at 'my-foo'.
>

It's absolutely possible with coroutines, and it might provide an even
clearer view of what's going on. Something like this:

@asyncio.coroutine
def my_workflow(ctx, create_one_vm):
    project = yield from ctx.run_task(create_project())
    # Hey, we don't want to be linear. How about parallel tasks?
    volume, network = yield from asyncio.gather(
        ctx.run_task(create_volume(project)),
        ctx.run_task(create_network(project)),
    )
    # We can put anything here - why not branch a bit?
    if create_one_vm:
        yield from ctx.run_task(create_vm(project, network))
    else:
        # Or even loops - why not?
        for i in range(network.num_ips()):
            yield from ctx.run_task(create_vm(project, network))

There's no limit to what you can express with coroutines. The only missing
piece is the library that would bind everything together.
In my example, run_task would have to be really smart: keeping track of all
started tasks and the results of all finished ones, and skipping tasks that
have already been done (substituting their previously generated results
instead). But all of this is doable, and I find this way of declaring
workflows far more understandable than whatever it would look like with
Flow.add.
> Hope that helps make it a little more understandable :)
>
> -Josh
>

PS: I've just found all your emails in this thread in my Spam folder, so it's
probable that not everybody has read them.

-- 

Kind regards, Yuriy.

