[openstack-dev] [networking-ovn] metadata agent implementation
Miguel Angel Ajo Pelayo
majopela at redhat.com
Mon May 8 08:15:55 UTC 2017
On Mon, May 8, 2017 at 2:48 AM, Michael Still <mikal at stillhq.com> wrote:
> It would be interesting for this to be built in a way where other
> endpoints could be added to the list that have extra headers added to them.
> For example, we could end up with something quite similar to EC2 IAMS if
> we could add headers on the way through for requests to OpenStack endpoints.
> Do you think the design you're proposing will be extensible like that?
I believe we should focus on achieving parity with the neutron reference
implementation first; later on, what you're proposing would probably need
to be modelled on the neutron side.
Could you provide a practical example of how that would work anyway?
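Just to have something concrete to discuss, here is a rough sketch of what such extensibility could look like: a registry of per-endpoint header builders applied to requests on the way through. All names here are hypothetical; nothing like this exists in neutron or networking-ovn today.

```python
# Hypothetical sketch of pluggable per-endpoint header injection.
# None of these names exist in neutron/networking-ovn today.
from typing import Callable, Dict, List

HeaderBuilder = Callable[[dict], Dict[str, str]]

# Registry: endpoint name -> header builders applied to proxied requests.
_header_builders: Dict[str, List[HeaderBuilder]] = {}

def register_header_builder(endpoint: str, builder: HeaderBuilder) -> None:
    _header_builders.setdefault(endpoint, []).append(builder)

def build_headers(endpoint: str, request_ctx: dict) -> Dict[str, str]:
    """Collect all extra headers for a request bound to `endpoint`."""
    headers: Dict[str, str] = {}
    for builder in _header_builders.get(endpoint, []):
        headers.update(builder(request_ctx))
    return headers

# Example: an IAM-style role header derived from the requesting port's
# project, which a downstream OpenStack endpoint could act on.
register_header_builder(
    "nova-metadata",
    lambda ctx: {"X-Project-Roles": ",".join(ctx.get("roles", []))},
)
```

With something like this, supporting a new endpoint would just mean registering another builder instead of touching the proxy core.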
> On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez <
> dalvarez at redhat.com> wrote:
>> Hi folks,
>> Now that it looks like the metadata proposal is more refined, I'd like
>> to get some feedback from you on the driver implementation.
>> The ovn-metadata-agent in networking-ovn will be responsible for
>> creating the namespaces, spawning haproxies and so on. But also,
>> it must implement most of the "old" neutron-metadata-agent functionality
>> which listens on a UNIX socket and receives requests from haproxy,
>> adds some headers and forwards them to Nova. This means that we can
>> import/reuse big part of neutron code.
Makes sense; that way you avoid depending on an extra co-hosted
service, reducing deployment complexity.
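For reference, the "adds some headers" part in the reference implementation boils down to signing the instance ID with a shared secret before forwarding to Nova. A minimal sketch of that step, standard library only (the header names match the reference agent; the function names are mine):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # Nova verifies this HMAC-SHA256 signature using the same
    # metadata_proxy_shared_secret configured on both sides.
    return hmac.new(shared_secret.encode(),
                    instance_id.encode(),
                    hashlib.sha256).hexdigest()

def metadata_headers(instance_id: str, tenant_id: str,
                     remote_ip: str, secret: str) -> dict:
    # Headers the proxy adds before forwarding the request to Nova.
    return {
        "X-Forwarded-For": remote_ip,
        "X-Instance-ID": instance_id,
        "X-Tenant-ID": tenant_id,
        "X-Instance-ID-Signature": sign_instance_id(secret, instance_id),
    }
```

The rest of the agent is essentially plumbing around this: resolve the requester to an instance ID, build these headers, forward.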
>> I wonder what you guys think about depending on the neutron tree for the
>> agent implementation, even though we can benefit from a lot of code reuse.
>> On the other hand, if we want to get rid of this dependency, we could
>> probably write the agent "from scratch" in C (what about having C
>> code in the networking-ovn repo?) and, at the same time, it should
>> buy us a performance boost (probably not very noticeable since it'll
>> respond to requests from local VMs involving a few lookups and
>> processing simple HTTP requests; talking to nova would take most
>> of the time and this only happens at boot time).
I would try to keep that part in Python, like everything else in the
networking-ovn repo. I remember that Jakub made lots of improvements in the
neutron-metadata-agent area by caching; I'd make sure we reuse that if
it's of use to us (not sure if it was used for the nova communication or not).
The neutron metadata agent apparently has a get_ports RPC call to the
neutron-server plugin. We don't want RPC calls but ovsdb to get that info;
I vaguely recall caching also being used for those requests,
but with ovsdb we have that for free.
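To make the "for free" point concrete, here is a toy model (all names mine, not the actual ovsdbapp API) of why a local OVSDB connection behaves like a cache: the IDL keeps an in-memory replica updated by change notifications, so resolving a request to a port is a local lookup rather than an RPC round trip.

```python
# Toy model of an OVSDB-backed port cache. Real code would use the
# ovsdbapp IDL, which maintains an in-memory replica of the database
# for us automatically; the update/delete hooks below stand in for
# its change notifications.
class PortCache:
    def __init__(self):
        self._by_ip = {}  # (network_id, ip) -> port/instance info

    def on_port_update(self, network_id, ip, info):
        # Invoked on an OVSDB change notification (hypothetical hook).
        self._by_ip[(network_id, ip)] = info

    def on_port_delete(self, network_id, ip):
        self._by_ip.pop((network_id, ip), None)

    def lookup(self, network_id, ip):
        # Local, no RPC: answered straight from the replica.
        return self._by_ip.get((network_id, ip))
```

The metadata request path would only ever call lookup(), so the neutron-server round trip (and the explicit caching layered on top of it) disappears entirely.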
I don't know; the agent is ~300 LOC, so it seems to me like a full rewrite in
Python (copying whatever is necessary) could be a reasonable way to go, but I
guess actually going down that rabbit hole would tell you better than I can
whether it makes sense.
>> I would probably aim for a Python implementation reusing
>> code from the neutron tree, but I'm not sure how we want to deal with
>> changes in the neutron codebase (we're actually importing code now).
>> Looking forward to reading your thoughts :)
I guess the neutron-ns-metadata haproxy spawning can be reused
from neutron; I wonder if it would make sense to move that to neutron_lib?
I believe that's the key piece worth reusing.
If we don't reuse it, we need to maintain it in two places;
if we do reuse it, we can be broken by changes in the neutron repo,
but I'm sure we're flexible enough to react to such changes.
>>  https://review.openstack.org/#/c/452811/
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> Rackspace Australia