[Openstack] [Quantum] Scalable agents

Gary Kotton gkotton at redhat.com
Mon Jul 16 10:30:37 UTC 2012


Hi,
The patch https://review.openstack.org/#/c/9591/ contains the initial 
support for the scalable agents (this is currently implemented for the 
Linux bridge). At the moment this does not support network or port 
updates, for example, the user setting 'admin_state_up' to False, which 
means that the network or the port should stop handling traffic.
The network/port update is challenging in a number of respects. First 
and foremost, the Quantum plugin is not aware of which agent hosts a 
given port (that is, the host on which the VM has been deployed). In 
addition, there may be a number of agents running.
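
To make this concrete, the update in question would look roughly like 
the sketch below. This is only an illustration against the v2 API; the 
endpoint, port ID and token are placeholders:

    import httplib
    import json

    # Ask Quantum to administratively disable a port; the agent that
    # hosts the port is then expected to stop it handling traffic.
    conn = httplib.HTTPConnection('127.0.0.1', 9696)
    body = json.dumps({'port': {'admin_state_up': False}})
    conn.request('PUT', '/v2.0/ports/PORT_ID', body,
                 {'Content-Type': 'application/json',
                  'X-Auth-Token': 'ADMIN_TOKEN'})
    print conn.getresponse().status
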
There are a number of options to perform the port update. They are 
listed below:
1. Make use of the openstack-common notifier support. This would have 
the plugin notify "all" of the agents. I have yet to look at the code, 
but I guess that it is similar to the next item.
2. Make use of the RPC mechanism to have the plugin notify the agents. 
At the moment the plugin has the topic of all of the agents (this is 
used for a health check to ensure that the configuration on each agent 
is in sync with that of the plugin). This mechanism is described in 
detail in 
https://docs.google.com/document/d/1MbcBA2Os4b98ybdgAw2qe_68R1NG6KMh8zdZKgOlpvg/edit?pli=1

If I understand correctly, both of the above would require that the 
agents also be RPC consumers. In both cases, when there is an update to 
either a network or a port, a lot of traffic will be broadcast on the 
network.
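
For option 2, the plugin side might look something like the following 
sketch. It assumes the openstack-common rpc and context modules are 
available to the plugin, and the topic and method names are made up for 
illustration:

    from quantum.openstack.common import context
    from quantum.openstack.common import rpc

    AGENT_TOPIC = 'q-agent'  # hypothetical topic shared by all agents

    def notify_port_update(port_id, admin_state_up):
        # fanout_cast broadcasts the message to every consumer bound
        # to the topic, so every agent receives every update, whether
        # or not it hosts the port in question.
        rpc.fanout_cast(context.get_admin_context(),
                        AGENT_TOPIC,
                        {'method': 'port_update',
                         'args': {'port_id': port_id,
                                  'admin_state_up': admin_state_up}})

The fanout is exactly what produces the broadcast traffic mentioned 
above: N agents means N deliveries for every single update.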

Another alternative is to piggyback on the health check message. This 
would contain the IDs of the networks/ports that were updated since the 
last check. When an agent receives these, it would request the details 
from the plugin only for the networks or ports it is actually using. 
This would certainly put less traffic on the network.
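
A rough sketch of the agent side of this scheme (all of the names here 
are hypothetical, and it assumes the health check reply is a dict onto 
which the plugin piggybacks the updated IDs):

    class AgentSync(object):
        def __init__(self, plugin_rpc, local_ports):
            self.plugin_rpc = plugin_rpc    # proxy for calls to the plugin
            self.local_ports = local_ports  # set of port IDs on this host

        def process_health_check_reply(self, reply):
            for port_id in reply.get('updated_ports', []):
                # Only ports hosted on this agent trigger a follow-up
                # request; updates for other ports are ignored.
                if port_id in self.local_ports:
                    details = self.plugin_rpc.get_port_details(port_id)
                    self.treat_port_update(details)

        def treat_port_update(self, details):
            # Apply the change locally, e.g. stop the tap device from
            # forwarding when admin_state_up is False.
            pass

Since only the agents that actually host an updated port issue the 
follow-up request, the cost scales with the number of affected agents 
rather than with the total number of agents.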

If anyone has any ideas, it would be great to hear them.
Hopefully we can discuss this in tonight's meeting.
Thanks
Gary
