[openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

yatin kumbhare yatinkumbhare at gmail.com
Wed Mar 19 09:18:46 UTC 2014


Hi Mike,

Thanks for your feedback.

I'm not aware of the details of ceilometer messaging.

Would you please elaborate on the "messaging behaviors that are desirable for
ceilometer currently and possibly other things in the future"?

This will help me in evaluating my idea further.

Regards,
Yatin


On Sat, Mar 15, 2014 at 12:15 AM, Mike Wilson <geekinutah at gmail.com> wrote:

> Hi Yatin,
>
> I'm glad you are thinking about the drawbacks the zmq-receiver causes;
> I want to give you a reason to keep the zmq-receiver and get your feedback.
> The way I think about the zmq-receiver is as a tiny little mini-broker that
> exists separate from any other OpenStack service. As such, its
> implementation can be augmented to support store-and-forward and possibly
> other messaging behaviors that are desirable for ceilometer currently and
> possibly other things in the future. Integrating the receiver into each
> service is going to remove its independence and black-box nature and give
> it all the bugs and quirks of any project it gets lumped in with. I would
> prefer that we continue to improve zmq-receiver to overcome the tough
> parts, or else find a good replacement and use that. An example of a
> possible replacement might be the qpid dispatch router [1], although this
> guy explicitly wants to avoid any store-and-forward behaviors. Of course,
> dispatch router is going to be tied to qpid; I just wanted to give an
> example of something with similar functionality.
>
> -Mike
>
>
> On Thu, Mar 13, 2014 at 11:36 AM, yatin kumbhare <yatinkumbhare at gmail.com> wrote:
>
>> Hello Folks,
>>
>> When ZeroMQ is used as the RPC backend, the "nova-rpc-zmq-receiver" service
>> needs to run on every node.
>>
>> zmq-receiver receives messages on tcp://*:9501 with a PULL socket and,
>> based on the topic name (which is extracted from the received data),
>> forwards the data to the respective local services over IPC.
>>
>> Meanwhile, the OpenStack services listen/bind on IPC sockets of socket type
>> PULL.
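>>
>> A rough sketch of what the receiver does on each node (illustrative only;
>> this is not the oslo code, and it assumes pyzmq, a simple [topic, payload]
>> multipart framing, and a made-up IPC path):
>>
>>     import zmq
>>
>>     ctx = zmq.Context()
>>
>>     # remote senders PUSH to this node's receiver over TCP
>>     frontend = ctx.socket(zmq.PULL)
>>     frontend.bind("tcp://*:9501")
>>
>>     local_push = {}  # topic -> PUSH socket connected to the local service
>>
>>     while True:
>>         topic, payload = frontend.recv_multipart()
>>         if topic not in local_push:
>>             sock = ctx.socket(zmq.PUSH)
>>             # IPC path is invented for illustration
>>             sock.connect("ipc:///var/run/openstack/zmq_topic_" + topic.decode())
>>             local_push[topic] = sock
>>         # forward the message as-is to the local service
>>         local_push[topic].send_multipart([topic, payload])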
>>
>> I see zmq-receiver as a bottleneck and an overhead in the current design:
>> 1. If this service crashes, communication is lost.
>> 2. There is the overhead of running this extra service on every node, which
>> just forwards messages as-is.
>>
>>
>> I'm looking to remove the zmq-receiver service and enable direct
>> communication (nova-* and cinder-*) across and within nodes.
>>
>> I believe this will make the zmq experience more seamless.
>>
>> The communication will change from IPC to zmq TCP sockets for each
>> service.
>>
>> For example, an rpc.cast from scheduler to compute would be direct RPC
>> message passing, with no routing through zmq-receiver.
>>
>> With TCP, each service will bind to a unique port (the port range could
>> be 9501-9510).
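>>
>> A minimal sketch of the proposed direct path (again illustrative, assuming
>> pyzmq and the same simple framing; ports and hostnames follow the ring file
>> below):
>>
>>     import zmq
>>
>>     ctx = zmq.Context()
>>
>>     # the service (e.g. nova-compute) binds its own TCP PULL socket on the
>>     # port assigned to its topic, instead of an IPC socket behind zmq-receiver
>>     server = ctx.socket(zmq.PULL)
>>     server.bind("tcp://*:9501")
>>
>>     # the caller (e.g. nova-scheduler) PUSHes straight to that host:port
>>     # (127.0.0.1 here; in a real deployment it would be e.g. computenodex)
>>     client = ctx.socket(zmq.PUSH)
>>     client.connect("tcp://127.0.0.1:9501")
>>     client.send_multipart([b"compute.computenodex", b"<serialized rpc.cast>"])
>>
>>     # the message arrives without any zmq-receiver hop in between
>>     topic, payload = server.recv_multipart()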
>>
>> In nova.conf:
>> rpc_zmq_matchmaker = nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing
>>
>> I have put arbitrary port numbers after the service names.
>>
>> file:///etc/oslo/matchmaker_ring.json
>>
>>     {
>>      "cert:9507": [
>>          "controller"
>>      ],
>>      "cinder-scheduler:9508": [
>>          "controller"
>>      ],
>>      "cinder-volume:9509": [
>>          "controller"
>>      ],
>>      "compute:9501": [
>>          "controller","computenodex"
>>      ],
>>      "conductor:9502": [
>>          "controller"
>>      ],
>>      "consoleauth:9503": [
>>          "controller"
>>      ],
>>      "network:9504": [
>>          "controller","computenodex"
>>      ],
>>      "scheduler:9506": [
>>          "controller"
>>      ],
>>      "zmq_replies:9510": [
>>          "controller","computenodex"
>>      ]
>>  }
>>
>> Here, the JSON file keeps track of the port for each service.
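>>
>> A hypothetical helper (not existing oslo code) showing how a caller could
>> resolve a topic to direct TCP endpoints from this ring file:
>>
>>     import json
>>
>>     def endpoints_for(topic, ring_file="/etc/oslo/matchmaker_ring.json"):
>>         with open(ring_file) as f:
>>             ring = json.load(f)
>>         for key, hosts in ring.items():
>>             name, _, port = key.partition(":")
>>             if name == topic:
>>                 return ["tcp://%s:%s" % (host, port) for host in hosts]
>>         return []
>>
>>     # endpoints_for("compute") -> ["tcp://controller:9501", "tcp://computenodex:9501"]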
>>
>> Looking forward to community feedback on this idea.
>>
>>
>> Regards,
>> Yatin
>>
>>