[openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)
Salvatore Orlando
sorlando at nicira.com
Wed Aug 20 12:12:56 UTC 2014
I was merely suggesting that this is the approach that we've followed so
far.
The incubator is not even a real thing at the moment, so it's still too
early to make any statement as we do not even know if the stuff in the
incubator will use the same database or not. This is a necessary discussion
to have, but until the nature of the incubator is defined we would just be
speculating.
For the time being, I'd just note that "foreign key" here might simply be
interpreted as "a reference between two entities whose integrity will be
guaranteed by a DBMS or by some other mechanism".
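As a minimal sketch of that distinction (all model, column, and function
names here are hypothetical, not code from any review):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class QosPortBinding(Base):
        __tablename__ = 'qos_port_bindings'
        id = sa.Column(sa.String(36), primary_key=True)
        # Same database: a real foreign key, integrity guaranteed by the
        # DBMS, e.g.:
        #   port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id'))
        # Separate database: a plain column, with integrity guaranteed by
        # some other mechanism, e.g. a check at the API layer before the
        # binding is created:
        port_id = sa.Column(sa.String(36), nullable=False)

    def validate_port_exists(core_plugin, context, port_id):
        # The "other mechanism": raises if the referenced port is gone.
        core_plugin.get_port(context, port_id)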
Salvatore
On 20 August 2014 13:19, Kevin Benton <blak111 at gmail.com> wrote:
> From a data model perspective, I believe so if we follow the pattern
> we've followed so far.
>
> How will database setup work in this case? IIRC, the auto-generation of
> schema was just disabled in a recent merge. Will we have a big pile of
> various migration scripts that users will need to pick from depending on
> which services they want to use from the various neutron incubated
> projects?
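>
> For concreteness, this is the sort of per-service alembic script I mean
> (a hypothetical sketch; the table and revision identifiers are made up):
>
>     # Hypothetical per-service migration; identifiers are illustrative.
>     from alembic import op
>     import sqlalchemy as sa
>
>     revision = '1a2b3c4d5e6f'
>     down_revision = None
>
>     def upgrade():
>         op.create_table(
>             'qos_policies',
>             sa.Column('id', sa.String(36), primary_key=True),
>             sa.Column('name', sa.String(255)))
>
>     def downgrade():
>         op.drop_table('qos_policies')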
>
>
> On Wed, Aug 20, 2014 at 4:03 AM, Salvatore Orlando <sorlando at nicira.com>
> wrote:
>
>> As the original thread had a completely different subject, I'm starting a
>> new one here.
>>
>> More specifically, the aim of this thread is to:
>> 1) Define when a service is best implemented with a service plugin or
>> with an ML2 driver;
>> 2) Discuss how bindings between a "core" resource and the one provided by
>> the service plugin should be exposed at the management plane, implemented
>> at the control plane, and, if necessary, also at the data plane.
>>
>> Some more comments inline.
>>
>> Salvatore
>>
>> On 20 August 2014 11:31, Mathieu Rohon <mathieu.rohon at gmail.com> wrote:
>>
>>> Hi
>>>
>>> On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando <sorlando at nicira.com>
>>> wrote:
>>> > In the current approach QoS support is being "hardwired" into ML2.
>>> >
>>> > Maybe this is not the best way of doing it, as it may end up
>>> > requiring that every mech driver which enforces VIF configuration
>>> > support it.
>>> > I see two routes. One is a mechanism driver similar to l2-pop, and
>>> > then you might have a look at the proposed extension framework (and
>>> > participate in the discussion).
>>> > The other is doing a service plugin. Still, we'll have to solve how to
>>> > implement the "binding" between a port/network and the QoS entity.
>>>
>>> We have exactly the same issue while implementing the BGPVPN service
>>> plugin [1].
>>> As with the QoS extension, the BGPVPN extension can extend networks by
>>> adding route target info.
>>> The BGPVPN data model has a foreign key to the extended network.
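>>> A rough sketch of that model (names are illustrative, not the actual
>>> ones from the review; see [1] for the real code):
>>>
>>>     import sqlalchemy as sa
>>>     from neutron.db import model_base
>>>
>>>     class BGPVPNConnection(model_base.BASEV2):
>>>         __tablename__ = 'bgpvpn_connections'
>>>         id = sa.Column(sa.String(36), primary_key=True)
>>>         # route target info extending the network
>>>         route_targets = sa.Column(sa.String(255))
>>>         # the foreign key to the extended network
>>>         network_id = sa.Column(
>>>             sa.String(36),
>>>             sa.ForeignKey('networks.id', ondelete='CASCADE'))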
>>>
>>> If QoS is implemented as a service plugin, I assume that the
>>> architecture would be similar, with the QoS data model
>>> having foreign keys to ports and/or networks.
>>>
>>
>> From a data model perspective, I believe so, if we follow the pattern
>> we've followed so far. However, I think this would also be correct if QoS
>> were not implemented as a service plugin!
>>
>>
>>> When a port is created, and it has QoS enforcement thanks to the service
>>> plugin,
>>> let's assume that an ML2 QoS mech driver can fetch QoS info and send
>>> it back to the L2 agent.
>>> We would probably need a QoS agent which communicates with the plugin
>>> through a dedicated topic.
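>>>
>>> Something along these lines, as a sketch (the 'q-qos' topic name and
>>> the method signature are invented, and oslo.messaging is used directly
>>> for brevity):
>>>
>>>     from oslo.config import cfg
>>>     from oslo import messaging
>>>
>>>     class QosAgentEndpoint(object):
>>>         # invoked over RPC by the plugin-side driver
>>>         def port_qos_updated(self, context, port_id, qos_params):
>>>             # apply qos_params to the VIF backing port_id
>>>             pass
>>>
>>>     def start_qos_consumer(host):
>>>         transport = messaging.get_transport(cfg.CONF)
>>>         target = messaging.Target(topic='q-qos', server=host)
>>>         server = messaging.get_rpc_server(transport, target,
>>>                                           [QosAgentEndpoint()])
>>>         server.start()
>>>         return server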
>>>
>>
>> A distinct agent has pros and cons. I think however that we should try to
>> limit the number of agents on the hosts to a minimum. And this minimum in
>> my opinion should be 1! There is already a proposal around a modular agent
>> which should be capable of loading modules for handling distinct services.
>> I think that's the best way forward.
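>>
>> As a sketch of what I have in mind there (the entry-point namespace and
>> method names are hypothetical):
>>
>>     from stevedore import extension
>>
>>     class ModularAgent(object):
>>         def __init__(self):
>>             # each service (L2 wiring, security groups, QoS, ...) ships
>>             # a handler registered under a shared entry-point namespace
>>             self.handlers = extension.ExtensionManager(
>>                 namespace='neutron.agent.extensions',
>>                 invoke_on_load=True)
>>
>>         def port_updated(self, context, port):
>>             # fan the event out to every loaded handler module
>>             self.handlers.map_method('handle_port', context, port)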
>>
>>
>>>
>>> But when QoS info is updated through the QoS extension, backed by
>>> the service plugin,
>>> the driver that implements the QoS plugin should send the new QoS
>>> enforcement to the QoS agent through the QoS topic.
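>>>
>>> Roughly (again just a sketch, reusing the invented 'q-qos' topic from
>>> above):
>>>
>>>     from oslo import messaging
>>>
>>>     class QosAgentNotifier(object):
>>>         def __init__(self, transport):
>>>             # fanout: every agent listening on the topic gets the update
>>>             target = messaging.Target(topic='q-qos', fanout=True)
>>>             self.client = messaging.RPCClient(transport, target)
>>>
>>>         def qos_updated(self, context, port_id, qos_params):
>>>             self.client.cast(context, 'port_qos_updated',
>>>                              port_id=port_id, qos_params=qos_params)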
>>>
>>
>> I reckon that is pretty much correct. At the end of the day, the agent
>> which enforces QoS at the data plane just needs to ensure the appropriate
>> configuration is in place on all ports. Whether this information comes
>> from a driver or a service plugin does not matter much (as long as
>> it's not coming from an untrusted source, obviously). If you look at the
>> security group agent module, the concept is pretty much the same.
>>
>>
>>> So I feel like implementing a core resource extension with a service
>>> plugin needs:
>>> 1. an MD (mechanism driver) to interact with the service plugin;
>>> 2. an agent and a mixin used by the L2 agent;
>>> 3. a dedicated topic used by the MD and the driver of the service
>>> plugin to communicate with the new agent.
>>>
>>> Am I wrong?
>>>
>>
>> There is nothing wrong with that. Nevertheless, the fact that we need a
>> mech driver _and_ a service plugin probably also implies that the service
>> plugin at the end of the day has not succeeded in its goal of being
>> orthogonal.
>> I think it's worth trying to explore solutions which would allow us to
>> completely decouple the service plugin from the core functionality, and
>> therefore completely contain QoS management within its service plugin. If
>> you too think this is not risible, I can perhaps put together something to
>> validate this idea.
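>>
>> To give an idea of the direction (purely a sketch; the class and method
>> names are invented, and this is not an existing Neutron interface):
>>
>>     class QoSServicePlugin(object):
>>         """QoS fully contained in its service plugin: it owns its own
>>         tables and its own RPC topic, and holds only logical references
>>         to core resources -- no mechanism driver involved."""
>>
>>         supported_extension_aliases = ['qos']
>>
>>         def __init__(self, notifier, port_validator):
>>             self._policies = {}          # stand-in for its own tables
>>             self._notifier = notifier    # e.g. the fanout client above
>>             self._validate = port_validator
>>
>>         def bind_policy_to_port(self, context, port_id, policy):
>>             self._validate(context, port_id)  # integrity without an FK
>>             self._policies[port_id] = policy
>>             self._notifier.qos_updated(context, port_id, policy)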
>>
>>
>>
>>>
>>> [1] https://review.openstack.org/#/c/93329/
>>>
>>>
>>> > If we go for the resource extension approach we've chosen so far,
>>> > you still have to deal with ML2 extensions. But I like orthogonality
>>> > in services, and QoS is a service to me.
>>> > Another arguable point is that we might want to reconsider our
>>> > abuse^H^H^H^H^H use of resource attribute extension, but this is a
>>> > story for a different thread.
>>> >
>>> > Regarding the incubator request, I think we need to wait for the
>>> > process to be "blessed". But you have my support and I would be
>>> > happy to assist with this work item through its process towards
>>> > graduation.
>>> >
>>> > This is, obviously, provided the QoS team wants me to do that!
>>> >
>>> > Salvatore
>>> >
>>> >
>>> > On 19 August 2014 23:15, Alan Kavanagh <alan.kavanagh at ericsson.com>
>>> wrote:
>>> >>
>>> >> +1. I am hoping this is just a short-term holding point and that
>>> >> this will eventually be merged into the main branch, as this is a
>>> >> feature a lot of companies, us included, would definitely benefit
>>> >> from having supported. Many thanks to Sean for sticking with this
>>> >> and continuing to push it.
>>> >> /Alan
>>> >>
>>> >> -----Original Message-----
>>> >> From: Collins, Sean [mailto:Sean_Collins2 at cable.comcast.com]
>>> >> Sent: August-19-14 8:33 PM
>>> >> To: OpenStack Development Mailing List (not for usage questions)
>>> >> Subject: [openstack-dev] [Neutron][QoS] Request to be considered for
>>> >> neutron-incubator
>>> >>
>>> >> Hi,
>>> >>
>>> >> The QoS API extension has lived in Gerrit/been in review for about
>>> >> a year. It's gone through revisions, summit design sessions, and,
>>> >> for a little while, a subteam.
>>> >>
>>> >> I would like to request incubation in the upcoming incubator, so
>>> >> that the code will have a more permanent "home" where we can
>>> >> collaborate and improve.
>>> >> --
>>> >> Sean M. Collins
>>> >
>>>
>>
>
>
> --
> Kevin Benton
>