[openstack-dev] [Openstack] [Quantum] Scalable agents

Gary Kotton gkotton at redhat.com
Mon Jul 23 06:57:39 UTC 2012


On 07/22/2012 03:51 PM, Gary Kotton wrote:
> On 07/19/2012 07:11 PM, Dan Wendlandt wrote:
>>
>>
>> On Wed, Jul 18, 2012 at 5:17 AM, Gary Kotton <gkotton at redhat.com> wrote:
>>
>>     On 07/18/2012 04:23 AM, Dan Wendlandt wrote:
>>>
>>
>> Hi Gary,
>>
>> Removing much of the thread history, as I think we agree on the 
>> high-level goals.  Now just focusing on the differences.
>>
>>
>>
>>>     For example, a DHCP agent handling all DHCP for a deployment
>>>     might register for create/update/delete operations on subnets +
>>>     ports, whereas a plugin agent might only register for updates
>>>     from the ports that it sees locally on the hypervisor.
>>>      Conceptually, you could think of there being a 'topic' per port
>>>     in this case, though we may need to implement it differently in
>>>     practice.
>>
>>     The agent ID is currently stored in the database (this is for the
>>     configuration sync mechanism). I think that adding an extra
>>     column indicating the agent's capabilities would enable the
>>     service to notify the agents. The issue is how fine-grained the
>>     updates can be - we want to ensure a scalable architecture.
>>
>>
>> I think either we can implement the filtering ourselves using a 
>> mechanism like this, or we can rely on the message bus to do it for 
>> us.  I'm not really familiar with the scalability of various message 
>> bus implementations, but a simple model would be that there's a topic 
>> for:
>> - port creation
>> - net creation
>> - subnet creation
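>>
>> For illustration, such a per-event-type topic scheme might look like 
>> the sketch below (the topic names and the cast() helper are 
>> hypothetical, not the actual Quantum RPC API):
>>
>>     # One topic per resource/event pair; an agent subscribes only to
>>     # the topics it cares about.
>>     TOPICS = {
>>         ('port', 'create'): 'quantum.port.create',
>>         ('network', 'create'): 'quantum.network.create',
>>         ('subnet', 'create'): 'quantum.subnet.create',
>>     }
>>
>>     def notify(cast, resource, event, payload):
>>         """Publish a notification on the matching topic."""
>>         cast(topic=TOPICS[(resource, event)], msg=payload)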
>
> This is an interesting idea. In addition to creation we will also 
> need updates. I would prefer that the agents have one topic - that 
> is, a single topic for all updates. When an agent connects to the 
> plugin it will register the types of operations that are supported on 
> the specific agent. The agent operations can be specified as bit masks.
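>
> As a sketch, the bit masks could look something like this (the names 
> are illustrative, not taken from the patch):
>
>     # Each bit marks one operation type an agent registers for.
>     PORT_CREATE = 1 << 0
>     PORT_UPDATE = 1 << 1
>     SUBNET_CREATE = 1 << 2
>     SUBNET_UPDATE = 1 << 3
>
>     # e.g. a DHCP-style agent registering for all four:
>     DHCP_MASK = PORT_CREATE | PORT_UPDATE | SUBNET_CREATE | SUBNET_UPDATE
>
>     def wants(mask, operation):
>         """True if the agent's registered mask covers this operation."""
>         return bool(mask & operation)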

I have given this additional thought. One of the problems with the 
approach that I have suggested is that the plugin/service will have to 
send n updates instead of 1. I am going to try what you suggested - it 
is a minor tweak to the code.
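
To make the cost concrete: with one topic per agent, sending an update 
amounts to a loop like the following (hypothetical cast() helper, just 
to show the fan-out):

    def notify_agents(cast, agents, msg):
        # One cast per registered agent instead of a single broadcast.
        for agent in agents:
            cast(topic='q-agent.%s' % agent['id'], msg=msg)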

>
> I have implemented something similar in 
> https://review.openstack.org/#/c/9591
>
> This can certainly be improved and optimized. What are your thoughts?
>
> In addition to this we have a number of issues where the plugin does 
> not expose the information via the standard APIs - for example the 
> VLAN tag (this is being addressed via extensions in the provider 
> networks feature)
>
> There are a number of things that we need to address:
> 1. Support for different plugins - if acceptable then the model above 
> needs to be more generic and a common interface should be defined.
> 2. Support for different agents. This is pretty simple - for example 
> the DHCP agent. It has to do the following:
>     i. use the health check mechanism (this registers the mask for the 
> notification updates)
>     ii. add support for port creation (I guess that I can add this 
> as part of this patch)
> 3. Logging. At the moment the agents do not have a decent logging 
> mechanism. This makes debugging the RPC code terribly difficult. This 
> was scheduled for F-3. I'll be happy to add this if there are no 
> objections. (A sketch follows below this list.)
> 4. We need to discuss the notifications that Yong added and how these 
> two methods can interact. More specifically, I think that we 
> need to address the configuration files.
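>
> On point 3, a minimal sketch of agent logging using only the standard 
> library (the real agents would presumably reuse the common OpenStack 
> logging options instead):
>
>     import logging
>
>     LOG = logging.getLogger('quantum.agent')
>
>     def setup_logging(debug=False):
>         logging.basicConfig(
>             level=logging.DEBUG if debug else logging.INFO,
>             format='%(asctime)s %(levelname)s %(name)s %(message)s')
>
>     # e.g. LOG.debug('received RPC message: %s', msg)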
>
> The RPC code requires that the eventlet monkey patch be set. This 
> caused havoc when I was using the events from pyudev for new device 
> creation. At the moment I have moved the event-driven support to 
> polling (if anyone reading this is familiar with the issue or has an 
> idea on how to address it, any help would be great).
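>
> For reference, the polling fallback amounts to something like this 
> (list_devices() is a hypothetical helper returning the current device 
> names):
>
>     import time
>
>     def poll_devices(list_devices, interval=2):
>         """Yield sets of newly seen devices instead of pyudev events."""
>         known = set(list_devices())
>         while True:
>             time.sleep(interval)
>             current = set(list_devices())
>             new = current - known
>             if new:
>                 yield new
>             known = current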
>
>> and a specific topic for each entity after it's created, to learn about 
>> updates and deletes.
>
> I prefer casting to a specific topic over broadcasting to all. 
> (please look at 
> https://review.openstack.org/#/c/9591/3/quantum/plugins/linuxbridge/lb_quantum_plugin.py 
> - method update_port - line 174).
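>
> In other words, something along these lines (illustrative only - see 
> the actual update_port method in the patch for the real signatures):
>
>     def notify_port_update(cast, port):
>         # Cast only on the topic tied to the port's agent, rather
>         # than a fanout broadcast to every listener.
>         cast(topic='q-plugin-notify.%s' % port['id'],
>              msg={'method': 'port_update', 'args': {'port': port}})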
>
>>
>> as I said, we may need to implement this logic ourselves if using 
>> many such topics would not be scalable, but this seems like the kind 
>> of thing a message bus should be good at...
>>
>>>     In general, I think it is ideal if these external agents can use
>>>     standard mechanisms and formats as much as possible.  For
>>>     example, after learning that port X was created, the DHCP agent
>>>     can actually use a standard webservice GET to learn about the
>>>     configuration of the port (or if people feel that such
>>>     information should be included in the notification itself, this
>>>     notification data uses the same format as the webservice API).
>>
>>     I am not sure that I agree here. If the service is notifying the
>>     agent then why not have the information passed in the
>>     message (IP + MAC, etc.)? There is no need for the GET operation.
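>>
>>     For example, the notification payload could carry the fields the
>>     agent needs directly (a hypothetical shape):
>>
>>         payload = {'port': {'id': 'PORT_UUID',
>>                             'mac_address': 'fa:16:3e:00:00:01',
>>                             'fixed_ips': [{'ip_address': '10.0.0.3',
>>                                            'subnet_id': 'SUBNET_UUID'}]}}
>>         # The agent can act on this without a follow-up GET.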
>>
>>
>> My general bias here is that if there are now two ways to fetch every 
>> type of information (one via the standard "public" interface and 
>> another via the "internal" interface with a different implementation), 
>> then that is twice the testing, updating, and documenting that we have 
>> to do. 
>>  Perhaps the two problems we're trying to solve are sufficiently 
>> different that they require two different mechanisms, but in my use 
>> cases I haven't seen that yet.
>
> This is a tough one. On one hand I agree with you. On the other hand, 
> I think that we should have a better-tuned and optimized system. Yes, 
> this may require a bit more effort but I think that it is more robust. 
> Another issue is that each plugin has its own traits and 
> characteristics. Private additional data may have to be transferred.
>
> Thanks
> Gary
>> Dan
>>
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com
>> twitter: danwendlandt
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>
>
