[openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues
Eric Windisch
eric at windisch.us
Fri Mar 6 04:21:54 UTC 2015
On Wed, Mar 4, 2015 at 12:10 PM, ozamiatin <ozamiatin at mirantis.com> wrote:
> Hi,
>
> With this e-mail I'd like to start a discussion about the internal
> design problems I've found in the current zmq driver.
> I'd like to collect all proposals and known issues here. I hope this
> discussion will continue at the Liberty design summit,
> and that it will drive our further zmq driver development efforts.
>
> ZMQ driver issues list (issues are referenced with # and citations
> with []):
>
> 1. ZMQContext per socket (the blocker is Neutron's improper use of
> messaging across fork) [3]
> 2. Too many different contexts.
>    We have InternalContext used for ZmqProxy, and RPCContext used in
>    ZmqReactor and ZmqListener.
>    There is also zmq.Context, which is a zmq API entity. We need to
>    consider unifying their usage through inheritance (maybe
>    standardizing on RPCContext),
>    or hiding them as internal entities in their modules (see
>    refactoring #6).
>
The code, when I abandoned it, was moving toward fixing these issues, but
for backwards compatibility it was doing so in a staged fashion across the
stable releases.
I agree it's pretty bad. Fixing this now, with the driver in a less stable
state, should be easier, as maintaining compatibility is of less importance.
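For reference, one possible shape of the fix is a single process-wide
context with a fork guard (a minimal sketch under my own assumptions,
not the driver's actual code; pyzmq assumed):

    import os
    import zmq

    _ctx = None
    _ctx_pid = None

    def get_context():
        # Keep one zmq.Context per process. A context inherited across
        # fork() is unsafe to use in the child, so recreate it whenever
        # the pid changes (e.g. after Neutron forks its workers).
        global _ctx, _ctx_pid
        pid = os.getpid()
        if _ctx is None or _ctx_pid != pid:
            _ctx = zmq.Context()
            _ctx_pid = pid
        return _ctx

That would also collapse the per-socket contexts from issue #1 into a
single shared one per process.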
> 3. Topic-related code everywhere. We have no topic entity; it is all
> string operations.
> We need a topic-management entity, and the topic itself should be an
> entity (not a string).
> This causes issues like [4] and [5]. (I'm already working on it.)
> There was a related spec [7].
>
Good! It's ugly. I had proposed a patch at one point, but I believe the
decision was that it was better and cleaner to move toward the
oslo.messaging abstraction as we solve the topic issue. Now that
oslo.messaging exists, I agree it's well past time to fix this particular
ugliness.
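As a strawman (hypothetical class and method names, not an existing
oslo.messaging API), the topic entity could start as small as this,
wrapping the familiar "topic.server" string convention:

    class Topic(object):
        """Hypothetical value object replacing raw topic strings."""

        def __init__(self, name, server=None):
            self.name = name
            self.server = server

        @classmethod
        def from_string(cls, raw):
            # Parse the "topic.server" form used for server-directed
            # messages; a bare topic has no server part.
            name, _sep, server = raw.partition('.')
            return cls(name, server or None)

        def __str__(self):
            if self.server:
                return '%s.%s' % (self.name, self.server)
            return self.name

Centralizing the parsing and formatting in one place is what would make
bugs like [4] and [5] structurally impossible.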
> 4. Manual implementation of messaging patterns.
>    Right now we can observe poor usage of zmq features in the zmq
>    driver. Almost everything is implemented over PUSH/PULL.
>
>    4.1 Manual polling - use zmq.Poller instead (listening and replying
>    on multiple sockets)
>    4.2 Manual request/reply implementation for call [1].
>        Using REQ/REP (ROUTER/DEALER) sockets would solve many issues,
>        and a lot of code could be removed.
>    4.3 Manual waiting on timeouts
>
There are very specific reasons for the use of PUSH/PULL. I'm firmly of the
belief that it's the only viable solution for an OpenStack RPC driver. This
has to do with how asynchronous programming in Python is performed, with
how edge-triggered versus level-triggered events are processed, and with
general state management for REQ/REP sockets.
I could be proven wrong, but I burned quite a bit of time in the beginning
of the ZMQ effort looking at REQ/REP before realizing that PUSH/PULL was
the only reasonable solution. Granted, this was over 3 years ago, so I
would not be too surprised if my assumptions are no longer valid.
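For readers who haven't hit it themselves, the REQ state-management
problem is easy to demonstrate with pyzmq (a standalone sketch, not
driver code):

    import zmq

    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.setsockopt(zmq.LINGER, 0)  # don't block exit on the queued msg
    req.connect('tcp://127.0.0.1:5555')  # no peer needed for this demo

    req.send(b'first')  # queued; REQ now expects a matching recv()
    try:
        # The REQ state machine enforces strict send/recv alternation;
        # a second send before a reply arrives raises EFSM ("operation
        # cannot be accomplished in current state"). PUSH sockets have
        # no such lockstep, which is part of the appeal of PUSH/PULL.
        req.send(b'second')
    except zmq.ZMQError as e:
        print('REQ state machine error: %s' % e)
    finally:
        req.close()
        ctx.term()

Interleaving many concurrent calls over one REQ socket is exactly what
an async RPC driver needs to do, and exactly what that lockstep forbids.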
> 5. Add the possibility to work without eventlet [2]. #4.1 is also
> related here; we can reuse many of the implemented solutions,
> like running zmq.Poller over asynchronous sockets in one separate
> thread (instead of spawning on each new socket).
> I will update the spec [2] on that.
>
Great. This was one of the motivations behind oslo.messaging and it would
be great to see this come to fruition.
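For what it's worth, a bare-bones sketch of that single-poller-thread
idea with plain threads (illustrative names, assuming pyzmq; note that
zmq sockets must only be touched from the thread that polls them):

    import threading
    import zmq

    def poll_loop(sockets, handle, stop_event, timeout_ms=1000):
        # One zmq.Poller servicing all receiving sockets in a single
        # dedicated thread, instead of spawning per-socket greenlets.
        poller = zmq.Poller()
        for sock in sockets:
            poller.register(sock, zmq.POLLIN)
        while not stop_event.is_set():
            # poll() returns (socket, event) pairs; an empty list means
            # the timeout expired with nothing to read.
            for sock, _event in poller.poll(timeout_ms):
                handle(sock, sock.recv_multipart())

    stop = threading.Event()
    # threading.Thread(target=poll_loop,
    #                  args=(my_sockets, my_handler, stop)).start()

The same loop runs unchanged under eventlet or plain threading, which is
what makes it a good foundation for dropping the hard eventlet dependency.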
> 6. Put all zmq-driver-related code (matchmakers, most classes from
> zmq_impl) into a separate package.
> Don't keep all the classes (ZmqClient, ZmqProxy, topic management,
> ZmqListener, ZmqSocket, ZmqReactor)
> in a single impl_zmq.py module.
>
Seems fine. In fact, I think a lot of code could be shared with an AMQP v1
driver...
> 7. Need more technical documentation on the driver, like [6].
> I'm willing to prepare an overview of the current driver architecture
> with some UML charts, and to continue discussing the driver architecture.
>
Documentation has always been a sore point. +2
--
Regards,
Eric Windisch