[openstack-dev] [Openstack] [Quantum] Scalable agents

Salvatore Orlando sorlando at nicira.com
Mon Jul 23 08:30:22 UTC 2012


On 23 July 2012 09:02, Dan Wendlandt <dan at nicira.com> wrote:

>
>
> On Sun, Jul 22, 2012 at 5:51 AM, Gary Kotton <gkotton at redhat.com> wrote:
>
>>
>>
>> This is an interesting idea. In addition to creation we will also
>> need updates. I would prefer that the agents have one topic - that
>> is, one for all updates. When an agent connects to the plugin it will
>> register the type of operations that are supported on the specific agent.
>> The agent operations can be specified as bit masks.
>>
>> I have implemented something similar in
>> https://review.openstack.org/#/c/9591
>>
>> This can certainly be improved and optimized. What are your thoughts?
>>
>
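
On the bit-mask registration idea: just so we are all picturing the same
thing, here is a minimal sketch (all names are made up for illustration, not
taken from the patch under review):

    # Hypothetical capability flags an agent advertises at registration
    # (none of these names come from the actual patch).
    SUPPORTS_PORT_CREATE = 1 << 0
    SUPPORTS_PORT_UPDATE = 1 << 1
    SUPPORTS_NET_UPDATE = 1 << 2

    # Agent side: OR together the supported operations and send the
    # resulting mask along with the registration message.
    capabilities = SUPPORTS_PORT_CREATE | SUPPORTS_PORT_UPDATE

    # Plugin side: test a flag before dispatching a notification.
    def should_notify(agent_capabilities, operation):
        return bool(agent_capabilities & operation)

    assert should_notify(capabilities, SUPPORTS_PORT_UPDATE)
    assert not should_notify(capabilities, SUPPORTS_NET_UPDATE)
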
> Based on your follow-up emails, I think we're now thinking similarly about
> this.  Just to be clear though, for updates I was talking about a different
> topic for each entity that has its own UUID (e.g., topic
> port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45)
>


From my limited experience with RPC, I have never seen per-object topics as
we are proposing here. Nevertheless, I think they are a good idea, and I am
not aware of any reason why this should impact the scalability of the
underlying message queue.
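
To illustrate with a sketch (not code from any existing patch; the rpc.cast
and create_consumer calls in the comments stand in for whatever our RPC
library actually exposes):

    # One topic per entity, derived from its UUID.
    def port_update_topic(port_id):
        return 'port-update-%s' % port_id

    # e.g. 'port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45'
    topic = port_update_topic('f01c8dcb-d9c1-4bd6-9101-1924790b4b45')

    # Plugin side (pseudo-calls; 'rpc' stands for the RPC layer):
    #     rpc.cast(context, topic,
    #              {'method': 'port_update', 'args': {'port': port}})
    #
    # Agent side: subscribe when the port is wired locally, unsubscribe
    # when it goes away, so each agent only consumes its own traffic:
    #     conn.create_consumer(topic, callbacks)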


>
>>
>> In addition to this we have a number of issues where the plugin does not
>> expose the information via the standard APIs - for example the VLAN tag
>> (this is being addressed via extensions in the provider networks feature)
>>
>
> Agreed.  There are a couple of options here: direct DB access (no polling,
> just direct fetching), admin API extensions, or custom RPC calls.  Each has
> pluses and minuses.  Perhaps my real goal here would be better described as
> "if there's an existing plugin-agnostic way of doing X, our strong bias
> should be to use it until presented with concrete evidence to the
> contrary".  For example, should a DHCP client create a port for the DHCP
> server via the standard API, or via a custom API or direct DB access?  My
> strong bias would be toward using the standard API.
>

I totally agree with this approach. Should we be presented with such
"evidence to the contrary", I would use API extensions first and then,
only if necessary, custom RPC calls. If we end up in a situation where we
feel we need direct DB access, I would say we are in a very bad place and
need to go back to the drawing board!
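
For the DHCP example, creating the port through the standard v2 API would
look roughly like this (a sketch with placeholder values, using plain HTTP
rather than any particular client library):

    import json
    import httplib2

    # Placeholders: endpoint, token and network UUID are made up.
    body = {'port': {'network_id': 'NET_UUID', 'name': 'dhcp-port'}}
    resp, content = httplib2.Http().request(
        'http://quantum-server:9696/v2.0/ports', 'POST',
        body=json.dumps(body),
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': 'TOKEN'})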


>
>
>> 3. Logging. At the moment the agents do not have a decent logging
>> mechanism. This makes debugging the RPC code terribly difficult. This was
>> scheduled for F-3. I'll be happy to add this if there are no objections.
>>
>
> That sounds valuable.
>
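
Even just wiring the agents into the standard logging module with a
timestamped format would go a long way for debugging RPC traffic. A minimal
sketch (logger name and format are arbitrary choices of mine):

    import logging

    LOG = logging.getLogger('quantum.agent')

    def setup_logging(debug=False):
        # Timestamps make it much easier to correlate RPC messages.
        logging.basicConfig(
            level=logging.DEBUG if debug else logging.INFO,
            format='%(asctime)s %(levelname)s %(name)s %(message)s')

    setup_logging(debug=True)
    LOG.debug('received RPC message: %s', {'method': 'port_update'})
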
>
>> 4. We need to discuss the notifications that Yong added and how these
>> two mechanisms can work together. More specifically, I think we need to
>> address the configuration files.
>>
>
> Agreed.  I think we need to decide on this at Monday's IRC meeting, so we
> can move forward.  Given F-3 deadlines, I'm well aware that I'll have to be
> pragmatic here :)
>

I believe Yong stated in a different thread (or in the code review
discussion) that his notification mechanism was trying to address a
somewhat different use case. Given the looming deadline, I would discuss
in today's (or tomorrow's, for the non-Euro netstackers) meeting whether
there is any major reason why both patches cannot live together, and then
proceed to merge both. When planning Grizzly we can then look back at them
and see if and how these mechanisms could be unified.


>
>>
>> The RPC code requires that the eventlet monkey patch be set. This caused
>> havoc when I was using the events from pyudev for new device creation. At
>> the moment I have moved the event-driven support to polling (if anyone
>> reading this is familiar with the issue, or has an idea on how to address
>> it, any help would be great).
>>
>
> Sorry, wish I could help, but I'm probably in the same boat as you on this
> one.
>

I am afraid I cannot be of great help either, but there's a good chance the
nova+libvirt developers have already faced and solved this issue.
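
For what it's worth, one pattern for mixing eventlet with blocking native
calls is to push the blocking call into eventlet's thread pool. An untested
sketch, assuming pyudev's receive_device() is where we block:

    import eventlet
    eventlet.monkey_patch()

    from eventlet import tpool
    import pyudev

    def handle_device(device):
        pass  # placeholder for the agent's actual handler

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by('net')

    while True:
        # receive_device() blocks in native code, which would starve the
        # monkey-patched hub; tpool runs it in a real OS thread and lets
        # other greenthreads run in the meantime.
        device = tpool.execute(monitor.receive_device)
        handle_device(device)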


>
> I'm going to make sure we have a good chunk of time to discuss this during
> the IRC meeting on Monday (sorry, I know that's late night for you...).
>
> Dan
>
>
>
>
>>
>> Thanks
>> Gary
>>
>>  Dan
>>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>