[openstack-dev] [oslo] Asyncio and oslo.messaging
Joshua Harlow
harlowja at yahoo-inc.com
Mon Jul 7 17:41:34 UTC 2014
So I've been thinking about how to respond to this email, and here goes (shields
up!).
First things first: thanks Mark and Victor for the detailed plan and
making it visible to all. It's very nicely put together and the amount of
thought put into it is great to see. I always welcome an effort to move
toward a new structured & explicit programming model (which asyncio
clearly helps make possible and strongly encourages/requires).
So now to some questions that I've been thinking about how to
address/raise/ask (if any of these appear as FUD, they were not meant to
be):
* Why focus on integrating a replacement low-level execution model instead
of integrating a higher-level workflow library or service (taskflow,
mistral... other)?
Since pretty much all of openstack is focused around workflows that get
triggered by some API activated by some user/entity, having a new execution
model (asyncio) IMHO doesn't seem to shift the needle in the direction that
improves the scalability, robustness and crash-tolerance of those workflows
(and the associated projects those workflows are currently defined & reside
in). I *mostly* understand why we want to move to asyncio (py3, getting rid
of eventlet, better performance? new awesomeness...) but it doesn't feel
that important to actually accomplish given the big holes that openstack has
right now in scalability, robustness...

Let's imagine a different view on this: if all openstack projects
declaratively define the workflows their APIs trigger (nova is working on
task APIs, cinder is getting there too...), and in the future the projects
are *only* responsible for composing those workflows and handling the API
inputs & responses, then the need for asyncio or other technology can move
out of the individual projects and into something else (possibly something
that is being built & used as we speak). With this kind of approach the
execution model can be an internal implementation detail of the workflow
'engine/processor' (which would also be responsible for fault-tolerant,
robust and scalable execution). If this seems reasonable, then why not focus
on integrating said thing into openstack and move the projects to a model
that is independent of eventlet, asyncio (or the next greatest thing)
instead? This seems to push the needle in the right direction and IMHO (and
hopefully in others' opinions) has a much bigger potential to improve the
various projects than just switching to a new underlying execution model.
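To make that concrete, here is a minimal, hypothetical sketch of what
"projects only compose workflows" could look like with taskflow[7]; the task
names, inputs and values are made up, the point is just that the execution
model belongs to the engine rather than to the project code:

    # Hypothetical sketch: the project declares *what* the workflow does,
    # while the workflow engine owns *how* it runs (threads, greenlets,
    # coroutines, distributed workers...).
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class CreateVolume(task.Task):
        def execute(self, size):
            # ... call the backend driver here ...
            return 'vol-123'


    class AttachVolume(task.Task):
        def execute(self, volume_id, instance_id):
            # ... attach the created volume to the instance ...
            pass


    flow = linear_flow.Flow('boot-from-volume').add(
        CreateVolume(provides='volume_id'),
        AttachVolume(),
    )

    # Swapping the engine (serial, parallel, worker-based...) changes the
    # execution model without touching the workflow definition above.
    engines.run(flow, store={'size': 10, 'instance_id': 'inst-456'})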
* Was the heat (asyncio-like) execution model[1] examined and learned from
before considering moving to asyncio?
I will try not to put words into the heat developers' mouths (I can't do it
justice anyway, hopefully they can chime in here) but I believe that heat
has a system that is very similar to asyncio and coroutines right now, and
they are actively moving to a different model due to problems caused in part
by using that coroutine model in heat. So if they are moving somewhat away
from that model (to a more declarative workflow model that can be
interrupted and converged upon [2]), why would it be beneficial for other
projects to move toward the model they are moving away from (instead of
repeating the issues the heat team had with coroutines, e.g. visibility
into stack/coroutine state, scale limitations, interruptibility...)?
* A side-question: how do asyncio and/or trollius support debugging? Do they
support tracing individual coroutines? What about introspecting the state a
coroutine has associated with it? Eventlet at least has
http://eventlet.net/doc/modules/debug.html (which is better than nothing);
does an equivalent exist?
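For context (and take this as a rough sketch, not a claim that it matches
eventlet's debug module feature-for-feature), asyncio does ship a debug mode
and some per-task introspection hooks:

    import asyncio

    loop = asyncio.get_event_loop()
    loop.set_debug(True)  # can also be enabled with PYTHONASYNCIODEBUG=1

    @asyncio.coroutine
    def worker():
        yield from asyncio.sleep(1)

    task = loop.create_task(worker())

    # Pending tasks can be enumerated and their coroutine stacks inspected:
    for t in asyncio.Task.all_tasks(loop):
        t.print_stack()   # or t.get_stack() for programmatic access

    loop.run_until_complete(task)
    loop.close()

Whether trollius exposes the same hooks on py2 I don't know, so the question
still stands there.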
* What's the current thinking on avoiding the chaos (code-change-wise and
brain-power-wise) that will come from a change to asyncio?
This is the part that I really wonder about. Since asyncio isn't just a
drop-in replacement for eventlet (which hid the async part under its
*black magic*), I very much wonder how the community will respond to this
kind of mindset change (along with its new *black magic*). Will the
TC/foundation offer training, tutorials... on the change that this brings?
Should the community even care? If we say to just focus on workflows & let
the workflow 'engine/processor' do the dirty work, then I believe the
community really doesn't need to care (and rightfully so) about how their
workflows get executed (whether by taskflow, mistral, pigeons...). This
seems like a fair assumption to make; it could even be
reinforced (I am not an expert here) with the defcore[4] work that seems
to be standardizing the integration tests that verify those workflows (and
associated APIs) act as expected in the various commercial implementations.
* Is the larger python community ready for this?
Seeing the other responses about supporting libraries that aren't asyncio
compatible, it doesn't inspire confidence that this path is ready to be
headed down. Partially this is due to the fact that it's a completely new
programming model and a lot of underlying libraries will be forced to
change to accommodate this (sqlalchemy, others listed in [5]...). Do
others feel it's appropriate to start this at this time, or does it feel
premature? Of course we have to start somewhere, but I start to wonder if
effort is better spent elsewhere (see above). On a related question:
seeing that openstack needs to support py2.x and py3.x, will this mean that
trollius will be required even on 3.x (as it is the least common
denominator, since the new 'yield from' syntax doesn't exist in 2.x)?
Does this mean that the libraries that will now be required to change will
also be required to use trollius (the pulsar[6] framework seemed to mesh
these two nicely)? Is this understood by those authors? Is this the
direction we want to go down (if we stay focused on ensuring py3.x
compatibility, then why not just jump to py3.x in the first place)?
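For those not familiar with the difference, here is a rough side-by-side of
the two spellings (the first form is a syntax error on py2.x, which is
exactly why trollius exists as the least common denominator):

    import asyncio
    import trollius
    from trollius import From, Return

    # asyncio (py3.3+ only): native 'yield from' and returning a value.
    @asyncio.coroutine
    def fetch_py3():
        yield from asyncio.sleep(0.1)   # stand-in for a real async operation
        return 42

    # trollius (py2 & py3): plain generators, so 'yield from x' is spelled
    # 'yield From(x)' and 'return value' is spelled 'raise Return(value)'.
    @trollius.coroutine
    def fetch_py2_and_py3():
        yield From(trollius.sleep(0.1))
        raise Return(42)

So any library that wants to support both ends up writing the second form
everywhere (or maintaining two code paths), which is a big part of why I
wonder whether those authors are on board.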
Anyways just some things to think about & discuss (from an obviously
workflow-biased[7] viewpoint),
Thoughts?
-Josh
[1] https://github.com/openstack/heat/blob/master/heat/engine/scheduler.py
[2] https://review.openstack.org/#/c/95907/
[3] https://etherpad.openstack.org/p/heat-workflow-vs-convergence
[4] https://wiki.openstack.org/wiki/Governance/CoreDefinition
[5] https://github.com/openstack/requirements/blob/master/global-requirements.txt
[6] http://pythonhosted.org/pulsar/
[7] http://docs.openstack.org/developer/taskflow/
-----Original Message-----
From: Mark McLoughlin <markmc at redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Date: Thursday, July 3, 2014 at 8:27 AM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [oslo] Asyncio and oslo.messaging
>Hey
>
>This is an attempt to summarize a really useful discussion that Victor,
>Flavio and I have been having today. At the bottom are some background
>links - basically what I have open in my browser right now thinking
>through all of this.
>
>We're attempting to take baby-steps towards moving completely from
>eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>first victim.
>
>Ceilometer's code is run in response to various I/O events like REST API
>requests, RPC calls, notifications received, etc. We eventually want the
>asyncio event loop to be what schedules Ceilometer's code in response to
>these events. Right now, it is eventlet doing that.
>
>Now, because we're using eventlet, the code that is run in response to
>these events looks like synchronous code that makes a bunch of
>synchronous calls. For example, the code might do some_sync_op() and
>that will cause a context switch to a different greenthread (within the
>same native thread) where we might handle another I/O event (like a REST
>API request) while we're waiting for some_sync_op() to return:
>
>    def foo(self):
>        result = some_sync_op()  # this may yield to another greenlet
>        return do_stuff(result)
>
>Eventlet's infamous monkey patching is what makes this magic happen.
>
>When we switch to asyncio's event loop, all of this code needs to be
>ported to asyncio's explicitly asynchronous approach. We might do:
>
>    @asyncio.coroutine
>    def foo(self):
>        result = yield from some_async_op(...)
>        return do_stuff(result)
>
>or:
>
>    @asyncio.coroutine
>    def foo(self):
>        fut = Future()
>        some_async_op(callback=fut.set_result)
>        ...
>        result = yield from fut
>        return do_stuff(result)
>
>Porting from eventlet's implicit async approach to asyncio's explicit
>async API will be seriously time consuming and we need to be able to do
>it piece-by-piece.
>
>The question then becomes what do we need to do in order to port a
>single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
>explicit async approach?
>
>The plan is:
>
> - we stick with eventlet; everything gets monkey patched as normal
>
> - we register the greenio event loop with asyncio - this means that
> e.g. when you schedule an asyncio coroutine, greenio runs it in a
> greenlet using eventlet's event loop
>
> - oslo.messaging will need a new variant of eventlet executor which
> knows how to dispatch an asyncio coroutine. For example:
>
>      while True:
>          incoming = self.listener.poll()
>          method = dispatcher.get_endpoint_method(incoming)
>          if asyncio.iscoroutinefunc(method):
>              result = method()
>              self._greenpool.spawn_n(incoming.reply, result)
>          else:
>              self._greenpool.spawn_n(method)
>
> it's important that even with a coroutine endpoint method, we send
> the reply in a greenthread so that the dispatch greenthread doesn't
> get blocked if the incoming.reply() call causes a greenlet context
> switch
>
> - when all of ceilometer has been ported over to asyncio coroutines,
> we can stop monkey patching, stop using greenio and switch to the
> asyncio event loop
>
> - when we make this change, we'll want a completely native asyncio
> oslo.messaging executor. Unless the oslo.messaging drivers support
> asyncio themselves, that executor will probably need a separate
> native thread to poll for messages and send replies.
>
>If you're confused, that's normal. We had to take several breaks to get
>even this far because our brains kept getting fried.
>
>HTH,
>Mark.
>
>Victor's excellent docs on asyncio and trollius:
>
> https://docs.python.org/3/library/asyncio.html
> http://trollius.readthedocs.org/
>
>Victor's proposed asyncio executor:
>
> https://review.openstack.org/70948
>
>The case for adopting asyncio in OpenStack:
>
> https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio
>
>A previous email I wrote about an asyncio executor:
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-June/009934.html
>
>The mock-up of an asyncio executor I wrote:
>
>
>https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py
>
>My blog post on async I/O and Python:
>
> http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/
>
>greenio - greenlets support for asyncio:
>
> https://github.com/1st1/greenio/
>
>