[openstack-dev] [neutron][QoS] service-plugin or not discussion

Miguel Angel Ajo Pelayo mangelajo at redhat.com
Fri Apr 24 08:47:01 UTC 2015


Hi Armando & Salvatore,

> On 23/4/2015, at 9:30, Salvatore Orlando <sorlando at nicira.com> wrote:
> 
> 
> 
> On 23 April 2015 at 01:30, Armando M. <armamig at gmail.com> wrote:
> 
> On 22 April 2015 at 06:02, Miguel Angel Ajo Pelayo <mangelajo at redhat.com> wrote:
> 
> Hi everybody,
> 
>    In the latest QoS meeting, one of the topics was a discussion about how to implement
> QoS [1]: either in core or as a service plugin, and in-tree or out-of-tree.
> 
> It is really promising that after only two meetings the team is already split! I cannot wait for the API discussion to start ;)

We seem to be relatively on the same page about how to model the API, but we still need to loop
in users/operators who have an interest in QoS to make sure they find it usable. [1]

>  
> 
> My apologies if I was unable to join, the meeting clashed with another one I was supposed to attend.

My bad, sorry ;-/

>  
> 
>    It’s my feeling, and Mathieu’s, that it looks more like a core feature, as we’re talking about
> port properties that we define at a high level, which most (QoS-capable) plugins may want
> to implement at the dataplane/controlplane level; it’s also something requiring a good
> amount of review.
> 
> "Core" is a term which is recently being abused in Neutron... However, I think you mean that it is a feature fairly entangled with the L2 mechanisms,

Not only the L2 mechanisms, but also the description of ports themselves: in the basic cases we’re just
defining how “small” or “big” a port is. In the future we could be saying “UDP ports 5000-6000 have the
highest priority on this port”, or “this port has a minimum bandwidth of 50Mbps”, or “traffic marked with
this IPv6 flow label is high priority”… or whatever policy we support.
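
To make that concrete, here is a minimal sketch of what such a policy could look like as data;
the field and rule names are purely illustrative on my side, not the model under review in [1]:

# Hypothetical QoS policy bound to a port; all names are illustrative only.
policy = {
    'name': 'gold-tier',
    'rules': [
        # the basic case: how "big" the port is
        {'type': 'bandwidth_limit',
         'max_kbps': 50000, 'max_burst_kbps': 5000},
        # a possible future classification rule, a la security groups
        {'type': 'priority', 'protocol': 'udp',
         'port_range_min': 5000, 'port_range_max': 6000,
         'level': 'high'},
    ],
}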

> that deserves to be integrated in what is today the "core" plugin and in the OVS/LB agents. To this aim I think it's good to make a distinction between the management plane and the control plane implementation.
> 
> At the management plane you have a few choices:
> - yet another mixin, so that any plugin can add it and quickly support the API extension at the mgmt layer. I believe we're fairly certain everybody understands mixins are not sustainable anymore and I'm hopeful you are not considering this route.

Are you specifically referring to this, in every plugin?

class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,  <---
                dvr_mac_db.DVRDbMixin,  <---
                external_net_db.External_net_db_mixin,  <---
                sg_db_rpc.SecurityGroupServerRpcMixin,  <---
                agentschedulers_db.DhcpAgentSchedulerDbMixin,  <---
                addr_pair_db.AllowedAddressPairsMixin,  <---
                ...):

I’m quite allergic to mixins, I must admit. But if that’s not the desired way, why don’t we refactor the way we compose plugins!? (Yet more refactoring would probably slow us down…) I feel like we’re pushing to overcomplicate the design for a case which is similar to everything else we had before (security groups, port security, allowed address pairs).

It feels wrong to have every similar feature done in a different way, even if, I admit, the current way is not the best one.

> - a service plugin - as suggested by some proposers. The service plugin is fairly easy to implement, and now Armando has provided you with a mechanism to register for callbacks for events in other plugins. This should make the implementation fairly straightforward. This also enables other plugins to implement QoS support.
> - an ML2 mechanism driver + an ML2 extension driver. From an architectural perspective this would be the preferred solution for an ML2 implementation, but at the same time it will not provide management-level support for non-ML2 plugins.

I’m a bit lost as to why a plugin (apart from ML2) could not just declare that it implements the extension. Or is it that the only way we have to do that right now is mixins? Why would ML2 avoid it?
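
For the service-plugin route, my rough understanding of the callback mechanism is something
like the sketch below. The QoSPlugin class and _port_updated handler are made-up names, and
the exact resource/event constants (and whether the core plugin actually emits them for
ports today) are assumptions on my part:

# Rough sketch of a QoS service plugin reacting to port events via the
# callbacks registry added in Kilo; names below are illustrative only.
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources


class QoSPlugin(object):

    def __init__(self):
        # Assumes port AFTER_UPDATE notifications are wired up in core.
        registry.subscribe(self._port_updated,
                           resources.PORT, events.AFTER_UPDATE)

    def _port_updated(self, resource, event, trigger, **kwargs):
        port = kwargs.get('port')
        # Here we would re-validate/re-apply the policy bound to the port.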


>  
> 
> 
>    On the other hand, Irena and Sean were more concerned about having a good separation
> of concerns (I actually agree with that part), and about being able to do quicker iterations in a
> separate stackforge repo.
> 
> Perhaps we're trying to address the issue at the wrong time. Once a reasonable agreement has been reached on the data model and the API, whether we're going with a service plugin or core etc. should be an implementation detail. I think the crux of the matter is the data plane integration. From a management and control standpoint it should be fairly trivial to expose/implement the API and business logic via a service plugin and, as some of you suggested, integrate with the core via callbacks.

We have an update to the data model up for review; please have an eye on the latest patch set [1]. I’ll correct it based on the comments we received this week.
I’m going to share it broadly on the dev + operators mailing lists, probably with a brief description of how the API/cmdline would look, so we can get feedback.

I guess, since for one cycle the extension would be experimental, it’s OK to say that for L we would have the first version, and for M we would add some extra optional
parameters and fields to the data model? (I’m talking here about protocol-field classification à la security groups…)

> 
> However, I am pretty sure there will be preliminary work necessary to integrate the server with the agent fabric (when there is one) so that it is no longer a pain. Extending what the agent can do the way we have so far (e.g. by adding extra payloads/messages, mixins, etc.) is not sustainable, and incredibly brittle.
> 
> In my opinion the interesting part for an architectural decision here is the control plane support for the reference implementation.
> Adding more stuff to the OVS/LB agents might lead to an increase in technical debt. On the other hand, adding a new QoS agent might lead to further complexity -

I’d try quite hard to avoid a separate agent: we have enough moving parts already, and enough scalability issues, without introducing another pain point. Since, at least in the OVS case, QoS would require flow rules and port modifications on the bridges, it would need very tight coordination with the current agent. I don’t mean it can’t be done; I’ll invest some time thinking about it.
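
For the simplest bandwidth-limit case, the agent-side change could be as small as setting
OVS ingress policing on the port. A minimal sketch, shelling out to ovs-vsctl purely for
illustration (the helper name and tap name are made up; a real implementation would go
through the agent’s ovs_lib rather than subprocess):

# Minimal sketch: rate-limit a port via OVS ingress policing.
# apply_bw_limit and the tap name are hypothetical; OVS expects the
# rate in kbps and the burst in kb.
import subprocess


def apply_bw_limit(port_name, max_kbps, burst_kbps):
    subprocess.check_call([
        'ovs-vsctl', 'set', 'interface', port_name,
        'ingress_policing_rate=%d' % max_kbps,
        'ingress_policing_burst=%d' % burst_kbps])


apply_bw_limit('tapXXXXXXXX-XX', 50000, 5000)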

> another loose bit to keep in sync with the rest, and operators usually are not happy about having to manage the lifecycle of another independent component.

Operators, people writing software to deploy & configure, etc...

> And as Armando says, you also need to consider what changes you need to the RPC interface.

Yes, with an eye on scalability implications.

> 
> Without that information it is hard to make a call, and therefore I agree with Armando that there are not yet enough elements to make a decision - let's wait at least for a high level view of system architecture.
> 
> 
> 
>    Since we didn’t seem to find an agreement, and I’m probably missing some details,
> I’d like to loop in our core developers and PTL to provide an opinion on this.
> 
> Core developers and the PTL do not necessarily have a better opinion... instead in many cases they have a worse one!
> By the way, if you go the stackforge route, then you can apply for becoming an openstack project and one of you can become PTL! Isn't that wonderful? Who doesn't want to be PTL these days?
> 

Well, at least most of you have been around for a long time and know more about Neutron and its evolution; you also have the chance to spend more time on reviews, so you generally have a broader vision of Neutron itself. In the end, that’s why cores are supposed to be cores :-)

>  
> 
> 
> [1] http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-04-21-14.03.log.html#l-192
> 
> 

[1] https://review.openstack.org/#/c/88599/

Cheers,
Miguel Angel Ajo


