[openstack-dev] [oslo][messaging] Further improvements and refactoring

Sandy Walsh sandy.walsh at RACKSPACE.COM
Fri Jun 27 14:39:31 UTC 2014


On 6/27/2014 11:27 AM, Alexei Kornienko wrote:
Hi,

Why should we create a queue in advance?

Notifications are used for communicating with downstream systems (which may or may not be online at the time). This includes dashboards, monitoring systems, billing systems, etc. They can't afford to lose these important updates, so a queue has to exist and the events simply build up until they are consumed.

RPC doesn't need this though.
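
To make that concrete, here is a rough kombu sketch (the exchange, queue and routing key names are made up for illustration) of pre-declaring a durable notification queue. Once the queue exists and is bound, published events pile up in it even while every consumer is offline:

    import kombu

    connection = kombu.Connection('amqp://guest:guest@localhost:5672//')
    exchange = kombu.Exchange('nova', type='topic', durable=True)
    queue = kombu.Queue('notifications.info', exchange,
                        routing_key='notifications.info', durable=True)

    with connection as conn:
        # Declaring the queue before any consumer exists means events from
        # publishers accumulate here until a downstream system drains them.
        queue(conn.channel()).declare()

        producer = conn.Producer(serializer='json')
        producer.publish({'event_type': 'compute.instance.create.end'},
                         exchange=exchange,
                         routing_key='notifications.info',
                         declare=[queue])

Without that declare step the exchange has nowhere to route the message and it is silently dropped.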

Let's consider the following use cases:
1)
* listener starts and creates a queue
* publishers connect to exchange and start publishing

No need to create a queue in advance here, since the listener creates it when it starts.


Right, this is the RPC case.
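
For reference, this is roughly what that looks like with the public oslo.messaging server API (the import path below is the one current releases use -- older ones spell it "from oslo import messaging" -- and the topic/server names are illustrative):

    from oslo_config import cfg
    import oslo_messaging


    class PingEndpoint(object):
        def ping(self, ctxt, arg):
            return arg


    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo_topic', server='server-1')
    server = oslo_messaging.get_rpc_server(transport, target, [PingEndpoint()],
                                           executor='blocking')
    # The queue for 'demo_topic' is declared here, when the listener starts;
    # publishers did not have to create anything in advance.
    server.start()
    server.wait()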

2)
* publishers create a queue in advance and start publishing
....

Creating the queue in advance is not correct, since there is no guarantee that anyone will ever consume from it...


This is why notifications are turned off by default.


IMHO the listener should create the queue, and publishers should not care about it at all.

What do you think?


See above. There are definite use cases where the queue has to be created in advance. But, as I say, RPC isn't one of them. So, for 90% of the AMQP traffic, we don't need this feature. We should be able to disable it for RPC in oslo.messaging.

(I say "should" because I'm not positive that some aspect of OpenStack doesn't depend on the queue existing. I'm thinking about the scheduler, mostly.)

-S


On 06/27/2014 05:16 PM, Sandy Walsh wrote:
Something to consider is that the "create the queue in advance" feature is done for notifications, so we don't drop important messages on the floor by having an exchange with no associated queue.

For RPC operations, this may not be required (we assume the service is available). If this check is truly a time sink, we could skip it for RPC calls.

-S


On 6/10/2014 9:31 AM, Alexei Kornienko wrote:
Hi,

Please find some answers inline.

Regards,
Alexei

On 06/10/2014 03:06 PM, Flavio Percoco wrote:
On 10/06/14 15:03 +0400, Dina Belova wrote:
Hello, stackers!


Oslo.messaging is the future of how different OpenStack components communicate with each other, and I'd really love to start a discussion about how we can make this library even better than it is now and how we can refactor it to make it more production-ready.


As we all remember, oslo.messaging was initially conceived as a logical continuation of nova.rpc - a separate library, with lots of transports supported, etc. That's why oslo.messaging inherited not only the advantages of how nova.rpc worked (and there were lots of them), but also some architectural decisions that now sometimes lead to performance issues (we ran into some of them during Ceilometer performance testing [1] in the Icehouse cycle).


For instance, a simple test messaging server (with connection pool and eventlet) can process 700 messages per second. The same functionality implemented using plain kombu (without connection pool and eventlet) processes ten times more - 7,000-8,000 messages per second.
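
For a sense of what that baseline looks like, a bare kombu consumer of the kind such a comparison would use (a rough sketch, not the actual benchmark code; queue and broker names are made up) is little more than one connection, one queue and a drain_events() loop:

    import socket

    import kombu

    connection = kombu.Connection('amqp://guest:guest@localhost:5672//')
    queue = kombu.Queue('bench', kombu.Exchange('bench'), routing_key='bench')

    received = 0

    def on_message(body, message):
        global received
        received += 1
        message.ack()

    # No connection pool, no eventlet: a single channel drained in a tight loop.
    with connection as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            try:
                while True:
                    conn.drain_events(timeout=1)
            except socket.timeout:
                pass  # stop once the queue has been idle for a second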


So we have the following suggestions about how we might make this better and quicker (and I'd really love to collect your feedback, folks):


1) Currently we have the main loop running in the Executor class, and I think it would be much better to move it to the Server class, as that would make the relationship between the classes simpler and would leave the Executor with only one task - processing the message (in blocking or eventlet mode). Moreover, this will make further refactoring much easier.

To some extent, the executors are part of the server class since the
latter is the one actually controlling them. If I understood your
proposal, the server class would implement the event loop, which means
we would have an EventletServer / BlockingServer, right?

If what I said is what you meant, then I disagree. Executors keep the
event loop isolated from other parts of the library, and this is really
important for us. One of the reasons is to easily support multiple
Python versions - by having different event loops.

Is my assumption correct? Could you elaborate more?
No, that's not how we plan it. The server will run the loop and pass each received message to the dispatcher and executor. That means we would still have a blocking executor and an eventlet executor usable with the same server class. We would just change the implementation to make it more consistent and easier to control.
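
Roughly, the proposed split could look like the following sketch (class and method names here are hypothetical, not the current oslo.messaging code): the server owns the poll loop, and the executor's only job is to decide how each dispatch runs:

    class MessageHandlingServer(object):
        def __init__(self, listener, dispatcher, executor):
            self.listener = listener
            self.dispatcher = dispatcher
            self.executor = executor
            self._running = False

        def serve(self):
            self._running = True
            while self._running:
                incoming = self.listener.poll()      # the server drives the loop
                if incoming is not None:
                    # the executor only decides *how* the dispatch runs
                    self.executor.submit(self.dispatcher.dispatch, incoming)

        def stop(self):
            self._running = False


    class BlockingExecutor(object):
        def submit(self, func, *args):
            func(*args)                              # run inline


    class EventletExecutor(object):
        def __init__(self, pool):
            self.pool = pool                         # an eventlet.GreenPool

        def submit(self, func, *args):
            self.pool.spawn_n(func, *args)           # run in a green thread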



2) Some of the driver implementations (such as impl_rabbit and impl_qpid, for instance) are full of unnecessary separate classes that could in reality be folded into other ones. There are already some changes making the whole structure simpler [2], and once the first issue is solved, the Dispatcher and Listener will also be able to be refactored.

This was done on purpose. The idea was to focus on backwards
compatibility rather than cleaning up/improving the drivers. That
said, it sounds like those drivers could use some cleanup. However, I
think we should first extend the test suite a bit more before hacking
on the existing drivers.



3) If we’ll separate RPC functionality and messaging functionality it’ll make
code base clean and easily reused.

What do you mean with this?
We mean that the current drivers are written with RPC code hardcoded inside (ReplyWaiter, etc.). That's not how a messaging library is supposed to work. We can move RPC to a separate layer, and this would be beneficial both for RPC (the code will become cleaner and less error-prone) and for the core messaging part (we'll be able to implement messaging in a way that works much faster).
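
As a very rough illustration of what "RPC as a layer on top of plain messaging" could look like (the driver API below - send()/listen()/wait_for() - is hypothetical), the reply-correlation work that ReplyWaiter does inside impl_rabbit today would move one level up:

    import uuid


    class RPCClient(object):
        def __init__(self, messaging_driver, topic):
            self.driver = messaging_driver  # knows only how to send and listen
            self.topic = topic

        def call(self, method, **kwargs):
            msg_id = str(uuid.uuid4())
            reply_queue = 'reply.%s' % msg_id

            # The reply listener is plain messaging; no RPC logic in the driver.
            listener = self.driver.listen(reply_queue)
            self.driver.send(self.topic,
                             {'msg_id': msg_id,
                              'reply_to': reply_queue,
                              'method': method,
                              'args': kwargs})
            # Correlating the reply with msg_id happens in this RPC layer,
            # not in impl_rabbit / impl_qpid.
            return listener.wait_for(msg_id)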


4) The connection pool can be refactored to reuse connections more efficiently.

Please, elaborate. What changes do you envision?
Currently there is a class called ConnectionContext that is used to manage the pool. Additionally, it can be accessed/configured in several other places. If we refactor it a little bit, it would be much easier to use connections from the pool.
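
One possible shape for that (a minimal sketch with hypothetical names, not the existing ConnectionContext code) is a pool that hands out connections through a single context manager, so callers never touch the pool internals:

    import contextlib
    import queue


    class ConnectionPool(object):
        def __init__(self, factory, size=10):
            self._connections = queue.Queue()
            for _ in range(size):
                self._connections.put(factory())

        @contextlib.contextmanager
        def acquire(self):
            conn = self._connections.get()   # block until a connection is free
            try:
                yield conn
            finally:
                self._connections.put(conn)  # always give it back to the pool


    # Usage:
    #     with pool.acquire() as conn:
    #         conn.publish(...)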

As Dims suggested, I think filing some specs for this (and keeping the
proposals separate) would help a lot in understanding what the exact
plan is.

Glad to know you're looking forward to helping improve oslo.messaging.

Thanks,
Flavio

Folks, are you OK with such a plan? Alexei Kornienko has already started some of this work [2], but we really want to be sure that we have chosen the correct direction of development here.


Thanks!


[1] https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

[2] https://review.openstack.org/#/q/status:open+owner:akornienko+project:openstack/oslo.messaging,n,z


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




