[Openstack] [Quantum] Scalable agents

Dan Wendlandt dan at nicira.com
Wed Jul 18 01:23:37 UTC 2012


On Mon, Jul 16, 2012 at 3:30 AM, Gary Kotton <gkotton at redhat.com> wrote:

> Hi,
> The patch https://review.openstack.org/#/c/9591/ contains the initial support for the scalable agents (this is currently
> implemented on the linux bridge). At the moment this does not support a
> network or port update, that is, the user can set 'admin_state_up' to 0.
> This means that either the network or the port should stop handling traffic.
> The network/port update is challenging in a number of respects. First and
> foremost, the quantum plugin is not aware of the agent on which the port may
> have been allocated (that is, where the VM has been deployed). In addition,
> there may be a number of agents running.
> There are a number of options to perform the port update. They are listed
> below:
> 1. Make use of the openstack-common notifier support. This would have the
> plugin notify "all" of the agents. I have yet to look at the code but guess
> that it is similar to the next item.
> 2. Make use of the RPC mechanism to have the plugin notify the agents. At
> the moment the plugin has the topic of all of the agents (this is used for
> a health check to ensure that the configuration on the agent is in sync
> with that of the plugin). It is described in detail in
> https://docs.google.com/document/d/1MbcBA2Os4b98ybdgAw2qe_68R1NG6KMh8zdZKgOlpvg/edit?pli=1
>
> If I understand correctly, both of the above would require that the agents
> are also RPC consumers. In both of the above, when there is an update to
> either a network or a port, there will be a lot of traffic broadcast on the
> network.
>

Hi Gary,

Yes, I think either way, to eliminate the polling, we need some mechanism
to inform the agents that they need to update state.  My goal would be to
build a standard mechanism for this that, to the degree possible, leverages
existing APIs and data formats, so that we can avoid having multiple
formats for the same data and avoid RPC-call sprawl.

I agree that we don't want to broadcast all data to everyone.  At the same
time, I'd like to avoid making the core plugin code running within
quantum-server aware of all of the different agents.  What I think would be
ideal is that we have a fine-grained notification mechanism
for when objects (networks, subnets, ports) are updated, and that agents
could choose to register for updates on particular objects.  For example, a
DHCP agent handling all DHCP for a deployment might register for
create/update/delete operations on subnets + ports, whereas a plugin agent
might only register for updates from the ports that it sees locally on the
hypervisor.  Conceptually, you could think of there being a 'topic' per
port in this case, though we may need to implement it differently in
practice.
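
To make that registration idea a bit more concrete, here is a rough sketch
of how it could look from the agent side.  Everything in it (the class name,
the 'resource.action.id' topic format, the glob matching) is made up purely
for illustration and is not actual Quantum code; the real thing would sit on
top of the RPC/notification bus rather than an in-process hub:

import fnmatch


class NotificationHub(object):
    """Toy per-object pub/sub: agents subscribe to topic patterns and the
    plugin publishes events keyed by resource type, action, and object id."""

    def __init__(self):
        self._subscriptions = []   # list of (pattern, callback) pairs

    def subscribe(self, topic_pattern, callback):
        # e.g. 'port.update.<port-uuid>' or 'subnet.*' (glob-style match)
        self._subscriptions.append((topic_pattern, callback))

    def publish(self, resource, action, object_id, payload):
        topic = '%s.%s.%s' % (resource, action, object_id)
        for pattern, callback in self._subscriptions:
            if fnmatch.fnmatch(topic, pattern):
                callback(topic, payload)


def dhcp_agent_handler(topic, payload):
    print('dhcp agent handling %s: %s' % (topic, payload))


def plugin_agent_handler(topic, payload):
    print('plugin agent handling %s: %s' % (topic, payload))


hub = NotificationHub()

# A DHCP agent serving the whole deployment registers for all subnet and
# port events.
hub.subscribe('subnet.*', dhcp_agent_handler)
hub.subscribe('port.*', dhcp_agent_handler)

# A plugin agent registers only for the ports it sees locally.
local_port_id = 'port-uuid-on-this-hypervisor'
hub.subscribe('port.update.%s' % local_port_id, plugin_agent_handler)

# The plugin publishes fine-grained events; only interested agents are called.
hub.publish('port', 'update', local_port_id, {'admin_state_up': False})
hub.publish('port', 'update', 'some-other-port-uuid', {'admin_state_up': True})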

In general, I think it is ideal if these external agents can use standard
mechanisms and formats as much as possible.  For example, after learning
that port X was created, the DHCP agent can actually use a standard
webservice GET to learn about the configuration of the port (or, if people
feel that such information should be included in the notification itself,
the notification data could use the same format as the webservice API).
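
As a sketch of the DHCP agent case (the server URL, token handling, and
helper names below are my own assumptions for illustration, not taken from
any actual agent code), the fetch itself can be very small:

import requests

QUANTUM_URL = 'http://quantum-server:9696'          # assumed API endpoint
AUTH_TOKEN = 'replace-with-a-real-keystone-token'   # assumed auth handling


def get_port(port_id):
    """Fetch a single port via the standard v2-style webservice API."""
    resp = requests.get(
        '%s/v2.0/ports/%s' % (QUANTUM_URL, port_id),
        headers={'X-Auth-Token': AUTH_TOKEN,
                 'Accept': 'application/json'})
    resp.raise_for_status()
    return resp.json()['port']


def handle_port_created(port_id):
    # The notification only needs to carry the id; the agent pulls the
    # details itself over the same API the CLI and dashboard use.
    port = get_port(port_id)
    if not port.get('admin_state_up', True):
        print('port %s is administratively down, not serving DHCP' % port_id)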

So in sum, I'm hoping that we can take an approach to this problem that
builds a base framework that will continue to work as we add richer
functionality to quantum networks, recognizing that in most cases agents
will need to follow the pattern of triggering off of changes to API
objects.  I'm not sure whether this is in line with your thinking or not, so
I'd be curious to hear your thoughts. Thanks,

Dan



>
> Another alternative is to piggyback onto the health check message. This
> will contain the IDs of the networks/ports that were updated prior to the
> last check. When an agent receives these, if it is using the network or
> port then it will request the details from the plugin. This will certainly
> result in less traffic on the network.
>
> If anyone has any ideas, it would be great to hear them.
> Hopefully we can discuss this in tonight's meeting.
> Thanks
> Gary
>
>
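
Regarding the piggy-backing alternative in the quoted text above: that could
work as well, and on the agent side I picture something like the following
filter-then-fetch pattern (all names here, including the plugin_rpc helper
methods, are hypothetical placeholders, not actual agent code), where the
agent only asks the plugin for details on resources it is actually handling:

def process_health_check_reply(reply, local_ports, local_networks, plugin_rpc):
    """reply is assumed to look like:
    {'updated_ports': [port-id, ...], 'updated_networks': [net-id, ...]}
    """
    for port_id in reply.get('updated_ports', []):
        if port_id in local_ports:
            # Only now does the agent pull the full details from the plugin.
            apply_port_update(plugin_rpc.get_port_details(port_id))

    for net_id in reply.get('updated_networks', []):
        if net_id in local_networks:
            apply_network_update(plugin_rpc.get_network_details(net_id))


def apply_port_update(port):
    if not port.get('admin_state_up', True):
        print('bringing down tap device for port %s' % port['id'])


def apply_network_update(network):
    if not network.get('admin_state_up', True):
        print('disabling bridge for network %s' % network['id'])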



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~