[openstack-dev] [oslo] Asyncio and oslo.messaging

Joshua Harlow harlowja at outlook.com
Fri Jul 11 18:34:19 UTC 2014


Soooo, how about we continue this in #openstack-state-management (or #openstack-oslo)?

Since I think we've all made our points and made the different viewpoints visible (which was the main intention).

Overall, I'd like to see asyncio more directly connected into taskflow so we can have the best of both worlds.

We just have to strike a balance between letting people blow their feet off and being too safe; but that discussion I think we can have outside this thread.

Sound good?

-Josh

On Jul 11, 2014, at 9:04 AM, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Yuriy Taraday's message of 2014-07-11 03:08:14 -0700:
>> On Thu, Jul 10, 2014 at 11:51 PM, Josh Harlow <harlowja at outlook.com> wrote:
>>> 2. Introspection, I hope this one is more obvious. When the coroutine
>>> call-graph is the workflow, there is no easy way to examine it before it
>>> executes (and, for example, change parts of it before it executes). This is a
>>> nice feature imho: when the workflow is declaratively and explicitly defined,
>>> you get the ability to do this. This part is key to handling the upgrades that
>>> typically happen (for example, the 5th task of a workflow was upgraded
>>> to a newer version; we need to stop the service, do the code
>>> upgrade, restart the service, and change the 5th task from v1 to v1.1).
>>> 
>> 
>> I don't really understand why would one want to examine or change workflow
>> before running. Shouldn't workflow provide just enough info about which
>> tasks should be run in what order?
>> In the case of coroutines, when you do your upgrade and rerun the workflow,
>> it'll just skip all steps that have already been run and run your new version
>> of the 5th task.
>> 
> 
> I'm kind of with you on this one. Changing the workflow feels like self
> modifying code.
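Josh's introspection point can be made concrete with a small sketch (hypothetical names, not the actual taskflow API): when the workflow is declared as data rather than encoded in a coroutine's control flow, the whole graph can be listed and patched before anything runs.

```python
# Hypothetical sketch, NOT the real taskflow API: a workflow declared as
# data can be inspected and modified before execution.

class Task:
    def __init__(self, name, version, func):
        self.name = name
        self.version = version
        self.func = func


class Flow:
    def __init__(self, tasks):
        self.tasks = list(tasks)

    def inspect(self):
        # The full task list is visible before anything executes.
        return [(t.name, t.version) for t in self.tasks]

    def replace(self, name, new_task):
        # e.g. swap a task from v1 to v1.1 during a service upgrade.
        for i, t in enumerate(self.tasks):
            if t.name == name:
                self.tasks[i] = new_task


flow = Flow([Task("boot", "v1", lambda: "booted")])
flow.replace("boot", Task("boot", "v1.1", lambda: "booted-v1.1"))
```

With a coroutine-as-workflow, the equivalent "5th task" only exists as a line of code inside the function body, so there is nothing to enumerate or swap at runtime.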
> 
>>> 3. Dataflow: tasks in taskflow can declare not just workflow dependencies
>>> but also dataflow dependencies (this is how tasks transfer things from one
>>> to another). I suppose the dataflow dependency would map to coroutine
>>> variables & arguments (except the variables/arguments would need to be
>>> persisted somewhere so that they can be passed back in on failure of the
>>> service running that coroutine). How is that possible without an
>>> abstraction over those variables/arguments (a coroutine can't store these
>>> things in local variables since those will be lost)? It would seem like this
>>> would need to recreate the persistence & storage layer[5] that taskflow
>>> already uses for this purpose.
>>> 
>> 
>> You don't need to persist local variables. You just need to persist the
>> results of all tasks (and you have to do that anyway if you want to support
>> workflow interruption and restart). All dataflow dependencies are declared
>> in the coroutine in plain Python, which is what developers are used to.
>> 
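Yuriy's suggestion, persisting each task's result so a rerun skips completed steps, could be sketched roughly like this (assumed names; a real store would be a database or file rather than a dict):

```python
# Minimal sketch of result persistence: each task's output is saved under
# its name; on a rerun after a crash, finished tasks are skipped and only
# the remaining (possibly upgraded) tasks actually execute.

results = {}  # stands in for a durable store (db, file, ...)


def run_task(name, func, *args):
    if name in results:
        # Already ran in a previous attempt; reuse the persisted result.
        return results[name]
    results[name] = func(*args)  # persist before moving on
    return results[name]


calls = []


def create_port():
    calls.append("create_port")
    return "port-1"


run_task("create_port", create_port)  # executes the task
run_task("create_port", create_port)  # skipped; returns stored result
```

This covers restart, but note it only persists task *results*; any state held in the coroutine's local variables between task calls is still lost on failure, which is the abstraction gap Josh is pointing at.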
> 
> That is actually the problem that using declarative systems avoids.
> 
> 
>    @asyncio.coroutine
>    def add_ports(ctx, server_def):
>        port, volume = yield from asyncio.gather(
>            ctx.run_task(create_port(server_def)),
>            ctx.run_task(create_volume(server_def)))
>        if server_def.wants_drbd:
>            setup_drbd(volume, server_def)
> 
>        yield from ctx.run_task(boot_server(volume, server_def))
> 
> 
> Now we have a side effect which is not in a task. If booting fails, and
> we want to revert, we won't revert the drbd. This is easy to miss
> because we're just using plain old python, and heck it already even has
> a test case.
> 
> I see this type of thing a lot. We're not arguing about capabilities,
> but about psychological differences. There are pros and cons to both
> approaches.
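Clint's revert concern can be made concrete with a sketch (assumed names, not a real library): a context that records a revert handler for every completed task can undo them automatically, but a side effect made outside `run_task`, like the bare `setup_drbd()` call above, never gets a revert entry and is silently left in place.

```python
# Sketch of task-scoped revert bookkeeping. Only work routed through
# run_task() is recorded; side effects made directly in the coroutine
# body are invisible to revert().

class Ctx:
    def __init__(self):
        self._reverts = []

    def run_task(self, do, undo, *args):
        result = do(*args)
        # Remember how to undo this task if a later one fails.
        self._reverts.append((undo, result))
        return result

    def revert(self):
        # Undo completed tasks in reverse (LIFO) order.
        for undo, result in reversed(self._reverts):
            undo(result)


undone = []
ctx = Ctx()
ctx.run_task(lambda: "vol-1", undone.append)
ctx.run_task(lambda: "port-1", undone.append)
setup_drbd_called = True  # side effect outside run_task: no revert entry
ctx.revert()
```

A declarative system forces every effect into a task with a declared revert, so nothing can slip through this way; with plain Python, the compiler and tests won't catch it.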
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
