[openstack-dev] Asynchronous programming: replace eventlet with asyncio

Yuriy Taraday yorik.sar at gmail.com
Thu Feb 6 18:06:13 UTC 2014


Hello.


On Tue, Feb 4, 2014 at 5:38 PM, victor stinner
<victor.stinner at enovance.com> wrote:

> I would like to replace eventlet with asyncio in OpenStack for the
> asynchronous programming. The new asyncio module has a better design and is
> less "magical". It is now part of python 3.4 arguably becoming the de-facto
> standard for asynchronous programming in Python world.
>

I think that before making this big move to yet another asynchronous
framework, we should ask the main question: do we need it? Why do we
actually need an async framework inside our code at all?
There is most likely some historical reason why (almost) every OpenStack
project runs every one of its processes with the eventlet hub, but I think
we should reconsider this now that it's clear we can't go forward with
eventlet (mostly because of py3k) and we're about to put a considerable
amount of resources into switching to another async framework.

Let's take Nova for example.

There are two kinds of processes there: nova-api and others.

- nova-api process forks to a number of workers listening on one socket and
running a single greenthread for each incoming request;
- other services (workers) constantly poll some queue and spawn a
greenthread for each incoming request.

Both kinds do basically the same job: receive a request, run a handler in a
greenthread. That sounds very much like a job for an application server
that does just that and does it well.
If we removed all dependencies on eventlet or any other async framework, we
would not only be able to write Python code without keeping in mind that
we're running inside some reactor (that's why eventlet was chosen over
Twisted, IIRC); we could also forget about all these frameworks altogether.

I suggest an approach like this:
- for API services, use a dead-simple threaded WSGI server (we have one in
the stdlib, by the way, in wsgiref);
- for workers, use a simple threading-based oslo.messaging loop (it's on
its way).
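The threaded WSGI server for the API-service case can be assembled from the
stdlib in a few lines. This is a minimal sketch using Python 3 module names
(`socketserver`, `wsgiref`); the `app` callable is a placeholder, not real
Nova code:

```python
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server


class ThreadedWSGIServer(ThreadingMixIn, WSGIServer):
    """wsgiref's plain server, upgraded to one thread per request."""
    daemon_threads = True


def app(environ, start_response):
    # Placeholder WSGI application standing in for an API service.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']


def make_threaded_server(host='127.0.0.1', port=8000):
    return make_server(host, port, app, server_class=ThreadedWSGIServer)


# make_threaded_server().serve_forever()  # blocks; run in a real process
```

No reactor, no monkey patching: handlers are ordinary blocking Python code,
which is exactly the property that makes the eventlet-vs-asyncio question
go away for this layer.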

Of course, it won't be production-ready. A dumb threaded approach won't
scale, but we don't have to write our own scaling here. There are other
tools around that do this: Apache httpd, Gunicorn, uWSGI, etc. And they
will work better in a production environment than any code we write,
because they have been proven over time and at huge scales.

So once we want to go to production, we can deploy things this way, for
example:
- API services can be deployed within the Apache server, or within any
other HTTP server with a WSGI backend (Keystone can already be deployed
within Apache);
- workers can be deployed in any non-HTTP application server; uWSGI is a
great example of one that can work in this mode.
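For the deployment story above, all the service has to expose is a plain
WSGI callable in a module; the external server owns the processes and
threads. A minimal sketch (the module and function names here are
illustrative, though `application` is the default entry-point name WSGI
servers conventionally look for):

```python
# wsgi.py -- a module an external server (Apache mod_wsgi, uWSGI,
# Gunicorn, ...) loads and drives; no event loop anywhere in our code.


def application(environ, start_response):
    # Standard WSGI entry point: receive a request dict, return the body.
    body = b'worker alive\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Such a module could then be served with, e.g., `uwsgi --http :8080
--wsgi-file wsgi.py` or `gunicorn wsgi:application`, and scaling becomes a
matter of the server's worker/thread settings rather than our code.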

With this approach we can leave the burden of process management, load
balancing, etc. to the services that are really good at it.

What do you think about this?

-- 

Kind regards, Yuriy.

