[Openstack] adding a "worker pool" notion for RPC consumers

Doug Hellmann doug.hellmann at dreamhost.com
Tue May 22 15:33:03 UTC 2012


On Tue, May 22, 2012 at 11:02 AM, Eric Windisch <eric at cloudscaling.com> wrote:

> Bringing my conversation with Doug back on-list...
>
> In nova.rpc with fanout=True, every consumer gets a copy of the event
> because every consumer has its own queue. With fanout=False, only one
> consumer in total gets a copy, since they all listen on the same
> queue. The changes I made fall somewhere in between: they allow
> multiple consumers to receive a given message, but they also let
> several consumers declare that they are collaborating, so that only
> one of that subset receives a copy. That means multiple types of
> consumers can listen to notifications (metering and audit logging,
> for example), and each type can have a load-balanced pool of workers,
> so that each message is processed exactly once for metering and once
> for logging.
>
> We can do this today with the Matchmaker. You can use a standard
> fanout, but make one of the hosts a DNS entry with multiple A or
> CNAME records for round-robin DNS; that "host" then acts as a pool of
> workers. It would also be trivial to update the matchmaker to support
> nested lists of IP addresses, doing round-robin or random selection
> of hosts without a pool of workers.
>
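
If I follow the "in between" behaviour described in the first
paragraph, on plain AMQP it amounts to one shared queue per consumer
pool, with every pool's queue bound to the same fanout exchange.
Something like this rough kombu sketch (the exchange, pool names, and
URL are made up for illustration, not the nova.rpc API):

    # One queue per pool, all bound to a shared fanout exchange:
    # every pool sees every message, but within a pool AMQP
    # round-robins each message to exactly one worker.
    from kombu import Connection, Exchange, Queue

    notifications = Exchange('notifications', type='fanout')

    def run_worker(pool_name, url='amqp://guest:guest@localhost//'):
        # All workers in a pool declare the *same* queue name.
        queue = Queue('notifications.%s' % pool_name,
                      exchange=notifications)

        def handle(body, message):
            print('[%s] got %r' % (pool_name, body))
            message.ack()

        with Connection(url) as conn:
            with conn.Consumer(queue, callbacks=[handle]):
                while True:
                    conn.drain_events()

    # Two "metering" workers share one queue; a "logging" worker
    # gets its own copy of every message:
    #   run_worker('metering')   # worker A
    #   run_worker('metering')   # worker B (shares A's queue)
    #   run_worker('logging')    # separate pool, separate queue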

That Matchmaker/DNS setup sounds a lot like a traditional
load-balancing approach.
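
For concreteness, I imagine the static mapping looking something like
the sketch below; the file path, layout, and helper functions are my
own illustration rather than the actual matchmaker code:

    # A static topic-to-hosts map, where a single "host" may really
    # be a round-robin DNS name backed by several A records (i.e. a
    # pool of workers behind one name).
    import json
    import random

    def load_ring(path='/etc/nova/matchmaker_ring.json'):
        # e.g. {"metering": ["metering-pool.example.com"],
        #       "scheduler": ["sched1", "sched2"]}
        with open(path) as ring_file:
            return json.load(ring_file)

    def pick_host(ring, topic):
        # Random selection among the hosts listed for a topic.
        return random.choice(ring[topic])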

> Unfortunately, doing this in the AMQP fashion of registering workers
> is difficult via the matchmaker. Not impossible, but it requires that
> the matchmakers have a (de)centralized datastore. This could be
> solved by having get_workers and/or create_consumer communicate with
> the matchmaker and update MySQL, ZooKeeper, Redis, etc. While I think
> this is a viable approach, I've avoided /requiring/ this paradigm, as
> the alternatives of using hash maps and/or DNS are significantly less
> complex and easier to scale and keep available.
>
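
As a sketch of that datastore-backed variant (the class, method, and
key names are hypothetical, with Redis standing in for any of the
stores mentioned above):

    # Workers register under their topic at start-up; get_workers
    # reads the current membership.  Purely illustrative, not the
    # real matchmaker interface.
    import redis

    class RegistryMatchMaker(object):
        def __init__(self, host='localhost', port=6379):
            self._redis = redis.StrictRedis(host=host, port=port)

        def register(self, topic, hostname):
            # Would be called from create_consumer at worker start-up.
            self._redis.sadd('workers.%s' % topic, hostname)

        def unregister(self, topic, hostname):
            self._redis.srem('workers.%s' % topic, hostname)

        def get_workers(self, topic):
            members = self._redis.smembers('workers.%s' % topic)
            return sorted(m.decode('utf-8') for m in members)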

> We should consider to what degree dynamic vs. static configuration
> is necessary, whether dynamic is truly required, and how a method
> like get_workers should behave on a statically configured system.
>

I wanted our ops team to be able to bring more collector service
instances online when our cloud starts seeing an increase in the sorts
of activity that generate metering events, without having to
explicitly register the new workers in a configuration file. It sounds
like having the zeromq driver (optionally?) communicate with a central
registry would let it reproduce some of the features built into AMQP
and achieve that sort of dynamic self-configuration.
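
In that world, bringing another collector online would be little more
than the new worker registering itself at start-up; building on the
hypothetical Redis sketch above:

    import socket

    def start_collector(matchmaker, topic='metering'):
        # Register this host with the shared registry, then start
        # consuming; no configuration file edits required.
        matchmaker.register(topic, socket.gethostname())
        # ... create the consumer and enter the event loop ...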

I mentioned off-list that I'm not a messaging expert, and I wasn't
around when the zeromq driver work was started. Is the goal of that
work to eventually replace AMQP entirely, or just to provide a
compatible alternative?

Doug

