[Openstack] Queue Service Implementation Thoughts
eday at oddments.org
Sat Mar 5 23:07:09 UTC 2011
When deciding to move forward with Erlang, I first tried out the Erlang
REST framework webmachine (built on top of mochiweb and used by
projects like Riak). After some performance testing, I decided to
write a simple wrapper over the HTTP packet parsing built into Erlang
(also used by mochiweb/webmachine) to see if I could make things a
bit more efficient. Here are the results:
Erlang (2 threads)
echo - 58823 reqs/sec
webmachine - 7782 reqs/sec
openstack - 24154 reqs/sec
The test consists of four concurrent connections focused on packet
parsing speed and framework overhead. A simple echo test was also
done for a baseline (no parsing, just a simple recv/send loop). As
you can see, the simple request/response wrapper I wrote did get some
gains, although it's a little more hands-on to use (looks more like
wsgi+webob in python).
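The echo baseline is just a recv/send loop with no parsing. A minimal
Python sketch of that kind of loop (hypothetical illustration, not the
actual benchmark code) looks like this:

```python
import socket

def echo_loop(listen_sock):
    """Accept one connection and echo bytes back until the peer
    closes. Sketch of the echo baseline: no parsing, no framework,
    just recv/send."""
    conn, _addr = listen_sock.accept()
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)
    finally:
        conn.close()
```

Anything a framework adds (HTTP parsing, routing, request objects) is
overhead on top of this floor, which is what the numbers above measure.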
I decided to run the same tests against Python just for comparison. I
ran echo, wsgi, and wsgi+webob decorators all using eventlet. I ran
both single process and two process in order to compare with Erlang
which was running with two threads.
echo (1 proc) - 17857 reqs/sec
echo (2 proc) - 52631 reqs/sec
wsgi (1 proc) - 4859 reqs/sec
wsgi (2 proc) - 8695 reqs/sec
wsgi webob (1 proc) - 3430 reqs/sec
wsgi webob (2 proc) - 6142 reqs/sec
As you can see, the two process Python echo server was not too far
behind the two thread Erlang echo server. The wsgi overhead was
significant, especially with the webob decorators/objects. It was
still on par with webmachine, but a factor of three less than my
simple request/response wrapper.
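For reference, the wsgi layer being measured is roughly this shape: a
plain WSGI callable that echoes the request body. This is a minimal
sketch, not the code that was actually timed, and it omits the eventlet
server wrapping it:

```python
def echo_app(environ, start_response):
    """Minimal WSGI echo application; illustrates the layer the wsgi
    benchmark exercises (header parsing already done by the server,
    response assembled through start_response)."""
    length = int(environ.get('CONTENT_LENGTH') or 0)
    body = environ['wsgi.input'].read(length)
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Under eventlet this would be served with something like
eventlet.wsgi.server(eventlet.listen(addr), echo_app); the webob
variant additionally wraps environ in Request/Response objects, which
is where the extra overhead in the webob rows comes from.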
A multi-process Python server does have the drawback of not being
able to share resources between processes without incurring the
overhead of IPC. When thinking about a horizontally scalable service,
where scaling-out is much more important than scaling-up, I think
this becomes much less of a factor. Regardless of language choice,
we will need a proxy to efficiently hash to a set of queue servers in
any large deployment (or the clients will hash), but if that set is a
larger number of single-process python servers (some running on the
same machine) instead of a smaller number of multi-threaded Erlang
servers, I don't think it will make too much of a difference (each
proxy server will need to maintain more connections). In previous
queue service threads I was much more concerned about this and was
leaning away from Python, but I think I may be coming around.
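The kind of proxy-side hashing described above could be as simple as a
consistent-hash ring over the queue server addresses. A hypothetical
sketch (the class and method names here are illustrative, not part of
any proposed design):

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring mapping queue names to servers. Each
    server gets many virtual points on the ring so load spreads
    evenly and adding/removing a server only remaps a fraction of
    the queues."""

    def __init__(self, servers, replicas=100):
        self._keys = []
        self._ring = {}
        for server in servers:
            for i in range(replicas):
                point = self._hash('%s-%d' % (server, i))
                self._ring[point] = server
                bisect.insort(self._keys, point)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, queue_name):
        # Walk clockwise to the first point at or after the hash,
        # wrapping around the ring at the end.
        point = self._hash(queue_name)
        idx = bisect.bisect(self._keys, point) % len(self._keys)
        return self._ring[self._keys[idx]]
```

Whether the ring lives in a dedicated proxy tier or in the clients
themselves, the mapping is the same, so the choice of many
single-process servers versus fewer multi-threaded ones mostly changes
how many ring entries and connections each proxy maintains.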
Another aspect I took a look at is options for message storage. For
the fast, in-memory, unreliable queue type, here are some numbers
for options in Python and Erlang:
Raw message = key(16) + ttl(8) + hide(8) + body(100) = 132 bytes
Python list/dict - 248 bytes/msg (88% overhead)
Python sqlite3 - 168 bytes/msg (27% overhead)
Erlang ets - 300 bytes/msg (127% overhead)
The example raw message has no surrounding data structure, so it is
obviously never possible to get down to 132 bytes. As the body grows,
the overhead becomes less significant since they all grow the same
amount. The best Python option is probably an in-memory sqlite table,
which is also an option for disk-based storage as well.
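An in-memory sqlite table for this message layout might look like the
following sketch. The column names mirror the key/ttl/hide/body fields
above; the put/get helpers are hypothetical, not a proposed API:

```python
import sqlite3
import time
import uuid

# ':memory:' gives the fast, unreliable in-memory store; swapping in
# a filename gives the disk-based variant with the same code.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE messages '
             '(key TEXT PRIMARY KEY, ttl INTEGER, hide INTEGER, '
             'body BLOB)')

def put(body, ttl=60):
    """Insert a message with an absolute expiry time; returns its key."""
    key = uuid.uuid4().hex
    conn.execute('INSERT INTO messages VALUES (?, ?, ?, ?)',
                 (key, int(time.time()) + ttl, 0, body))
    return key

def get():
    """Fetch one message that has not expired and is not hidden."""
    now = int(time.time())
    return conn.execute('SELECT key, body FROM messages '
                        'WHERE ttl > ? AND hide <= ?',
                        (now, now)).fetchone()
```

The ttl and hide columns hold absolute timestamps here, so expiry and
visibility checks are simple comparisons in the SELECT.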
For Erlang, ets is really the only efficient in-memory option (mnesia
is built on ets if you're thinking of that), and also has a disk
counterpart called dets. The overhead was definitely more than I was
expecting and is less memory efficient than both Python options.
As we start looking at other stores to use, there are certainly more
DB drivers available for Python than for Erlang (simply because
Python is more popular). We'll want to push most of the heavy lifting
to the pluggable databases, which makes the binding language less of
a concern as well.
So, in conclusion, and going against my previous opinion, I'm starting
to feel that the performance gains of Erlang are really not that
significant compared to Python for this style of application. If
we're talking about a factor of three (and possibly less if we can
optimize the wsgi driver or not use wsgi), and consider the database
driver options for queue storage, Python doesn't look so bad. We'll
certainly have more of a developer community too.
We may still need to write parts in C/C++ if limits can't be overcome,
but that would probably be the case for Erlang or Python.
What do folks think?