[openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

Mathieu Rohon mathieu.rohon at gmail.com
Fri Aug 22 09:51:03 UTC 2014


Hi,

On Wed, Aug 20, 2014 at 1:03 PM, Salvatore Orlando <sorlando at nicira.com> wrote:
> As the original thread had a completely different subject, I'm starting a
> new one here.
>
> More specifically the aim of this thread is about:
> 1) Define when a service is best implemented with a service plugin or
> with an ML2 driver
> 2) Discuss how bindings between a "core" resource and the one provided by
> the service plugin should be exposed at the management plane, implemented at
> the control plane, and if necessary also at the data plane.
>
> Some more comments inline.
>
> Salvatore
>
> On 20 August 2014 11:31, Mathieu Rohon <mathieu.rohon at gmail.com> wrote:
>>
>> Hi
>>
>> On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando <sorlando at nicira.com>
>> wrote:
>> > In the current approach QoS support is being "hardwired" into ML2.
>> >
>> > Maybe this is not the best way of doing it, as it could end up
>> > requiring that every mech driver which enforces VIF configuration
>> > also support it.
>> > I see two routes. One is a mechanism driver similar to l2-pop; in
>> > that case you might have a look at the proposed extension framework
>> > (and participate in the discussion).
>> > The other is a service plugin. Either way, we will have to work out
>> > how to implement the "binding" between a port/network and the QoS
>> > entity.
>>
>> We hit exactly the same issue while implementing the BGPVPN service
>> plugin [1].
>> As with the QoS extension, the BGPVPN extension extends the network
>> resource by adding route target information, and the BGPVPN data
>> model has a foreign key to the extended network.
>>
>> If QoS is implemented as a service plugin, I assume the architecture
>> would be similar, with the QoS data model having foreign keys to
>> ports and/or networks.
>
>
> From a data model perspective, I believe so, if we follow the pattern
> we've used so far. However, I think this would also hold if QoS were
> not implemented as a service plugin!
>
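To make that concrete, here is a minimal SQLAlchemy sketch of what
such a binding could look like; all the names below are hypothetical,
not taken from any actual proposal:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class QosPolicy(Base):
        # Hypothetical resource owned by the QoS service plugin.
        __tablename__ = 'qos_policies'
        id = sa.Column(sa.String(36), primary_key=True)
        max_rate_kbps = sa.Column(sa.Integer)

    class QosNetworkBinding(Base):
        # The service plugin's model references the core resource it
        # extends through a foreign key, just like the BGPVPN model
        # references the network it extends.
        __tablename__ = 'qos_network_bindings'
        network_id = sa.Column(sa.String(36),
                               sa.ForeignKey('networks.id'),
                               primary_key=True)
        qos_policy_id = sa.Column(sa.String(36),
                                  sa.ForeignKey('qos_policies.id'),
                                  nullable=False)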
>>
>> When a port that has QoS enforcement (thanks to the service plugin)
>> is created, let's assume an ML2 QoS mechanism driver can fetch the
>> QoS info and send it back to the L2 agent.
>> We would probably need a QoS agent which communicates with the
>> plugin through a dedicated topic.
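To illustrate what I mean, a rough sketch of the mechanism driver
side; the notifier, the topic name and the qos_policy_id attribute
are all assumptions, only the ML2 driver API is real:

    from neutron.plugins.ml2 import driver_api as api

    QOS_TOPIC = 'q-qos_agent'  # hypothetical dedicated topic

    class QosMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # self.notifier would be an RPC client cast()-ing to the
            # QoS agents on QOS_TOPIC (setup not shown here).
            pass

        def create_port_postcommit(self, context):
            port = context.current
            qos_policy_id = port.get('qos_policy_id')
            if qos_policy_id:
                # Fetch the QoS info from the service plugin and push
                # it down to the agent on the dedicated topic.
                self.notifier.port_qos_updated(qos_policy_id,
                                               port['id'])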
>
>
> A distinct agent has pros and cons. I think, however, that we should
> try to keep the number of agents on the hosts to a minimum, and in my
> opinion that minimum should be 1! There is already a proposal for a
> modular agent which would be capable of loading modules to handle
> distinct services. I think that's the best way forward.

I totally agree, and when I was referring to an agent, I meant
something like the current sec group agent, or an extension driver in
the semantics of the proposed modular L2 agent [2].
>
>>
>>
>> But when QoS info is updated through the QoS extension, backed by
>> the service plugin, the driver that implements the QoS plugin should
>> send the new QoS enforcement to the QoS agent through the QoS topic.
>
>
> I reckon that is pretty much correct. At the end of the day, the agent
> which enforces QoS at the data plane just needs to ensure the
> appropriate configuration is in place on all ports. Whether this
> information comes from a driver or a service plugin does not matter
> much (as long as it's not coming from an untrusted source, obviously).
> If you look at the sec group agent module, the concept is pretty much
> the same.
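To make the parallel with the sec group agent concrete, I picture the
agent-side callback roughly like this (a sketch only; every name below
is made up):

    class QosAgentRpcCallbackMixin(object):
        # Mixed into the L2 agent; consumes updates pushed on the
        # dedicated QoS topic.

        def qos_policy_updated(self, context, qos_policy_id, port_ids):
            # Whether the update originates from a mech driver or from
            # the service plugin does not matter here: the agent just
            # (re)programs the data plane for the affected ports.
            policy = self.plugin_rpc.get_qos_policy(context,
                                                    qos_policy_id)
            for port_id in port_ids:
                self.qos_driver.apply_policy(port_id, policy)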
>
>>
>> So I feel like implementing a core resource extension with a service
>> plugin needs:
>> 1. an MD to interact with the service plugin;
>> 2. an agent, and a mixin used by the L2 agent;
>> 3. a dedicated topic used by the MD and the driver of the service
>> plugin to communicate with the new agent.
>>
>> Am I wrong?
>
>
> There is nothing wrong with that. Nevertheless, the fact that we need
> a mech driver _and_ a service plugin probably also implies that the
> service plugin has not, at the end of the day, succeeded in its goal
> of being orthogonal.
> I think it's worth trying to explore solutions which would allow us to
> completely decouple the service plugin from the core functionality,
> and therefore completely contain QoS management within its service
> plugin. If you too think this is not risible, I can perhaps put
> together something to validate this idea.

It doesn't seem risible to me at all. I feel quite uncomfortable
having to create an MD to deal with core resource modifications when
those core resources are extended by a service plugin.
I have proposed a patch [3] to work around writing a dedicated MD.
The goal of this patch was to add the extension's information to
get_device_details(), by merging the dict generated by
get_"resource"() into the dict returned to the agent. The modular
agent would then dispatch that dict to its extension drivers.
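In other words, something like this simplified sketch; the helper
names (_get_core_device_details, _extending_service_plugins,
get_port_details) are invented for illustration:

    # Inside the ML2 RPC callbacks class:
    def get_device_details(self, rpc_context, **kwargs):
        details = self._get_core_device_details(rpc_context, **kwargs)
        # Idea of [3]: let each service plugin extending the port
        # merge its own get_<resource>() dict into the reply, so the
        # modular agent can dispatch each sub-dict to the matching
        # extension driver.
        for service_plugin in self._extending_service_plugins:
            details.update(
                service_plugin.get_port_details(rpc_context,
                                                details['port_id']))
        return details
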
But I'm not keen on this method either, because extension drivers
would then receive info through two channels:
1. the ML2 plugin, which communicates on the plugin/agent topics
through get_device_details()/update_port();
2. the service plugin, which communicates on a dedicated topic with
the dedicated agent.

I think we need a hook in the ML2 plugin to notify the service plugins
which extend core resources.
Maybe an orthogonal MD dedicated to service plugin notification would
be suitable.
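As a sketch of what I mean by a hook (hypothetical names again):

    # In the ML2 plugin: after a core resource operation, notify every
    # service plugin that registered an interest in that resource.
    def _notify_extending_plugins(self, context, resource, event, obj):
        for plugin in self._resource_listeners.get(resource, []):
            plugin.handle_core_resource_event(context, resource,
                                              event, obj)

    # e.g. at the end of ML2's update_port():
    #     self._notify_extending_plugins(context, 'port', 'updated',
    #                                    updated_port)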

>
>>
>>
>> [1] https://review.openstack.org/#/c/93329/
[2] https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg25687.html
[3] https://review.openstack.org/#/c/96181/

>>
>>
>> > If we go for the approach we've chosen so far, the resource
>> > extension model, you still have to deal with ML2 extensions. But I
>> > like orthogonality in services, and QoS is a service to me.
>> > Another arguable point is that we might want to reconsider our
>> > abuse^H^H^H^H^H use of resource attribute extensions, but that is
>> > a story for a different thread.
>> >
>> > Regarding the incubator request, I think we need to wait for the
>> > process to be "blessed". But you have my support, and I would be
>> > happy to assist with this work item through its process towards
>> > graduation.
>> >
>> > This is, obviously, provided the QoS team wants me to do that!
>> >
>> > Salvatore
>> >
>> >
>> > On 19 August 2014 23:15, Alan Kavanagh <alan.kavanagh at ericsson.com>
>> > wrote:
>> >>
>> >> +1. I am hoping this is just a short-term holding point and that
>> >> this will eventually be merged into the main branch, as it is a
>> >> feature many companies, us included, would definitely benefit
>> >> from having supported. Many thanks to Sean for sticking with this
>> >> and continuing to push it.
>> >> /Alan
>> >>
>> >> -----Original Message-----
>> >> From: Collins, Sean [mailto:Sean_Collins2 at cable.comcast.com]
>> >> Sent: August-19-14 8:33 PM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: [openstack-dev] [Neutron][QoS] Request to be considered for
>> >> neutron-incubator
>> >>
>> >> Hi,
>> >>
>> >> The QoS API extension has lived in Gerrit, under review, for
>> >> about a year. It has gone through revisions, summit design
>> >> sessions, and, for a little while, a subteam.
>> >>
>> >> I would like to request incubation in the upcoming incubator, so
>> >> that the code will have a more permanent "home" where we can
>> >> collaborate and improve it.
>> >> --
>> >> Sean M. Collins