[neutron][OVN] Multiple mechanism drivers

Sean Mooney smooney at redhat.com
Thu Nov 28 12:27:40 UTC 2019


On Thu, 2019-11-28 at 11:12 +0900, Takashi Yamamoto wrote:
> hi,
> 
> On Mon, Nov 25, 2019 at 5:00 PM Slawek Kaplonski <skaplons at redhat.com> wrote:
> > 
> > Hi,
> > 
> > I think it may be true that networking-ovn will not work properly
> > with other drivers.
> > I don't think it was ever tested.
It should work with other drivers if you use vlan or flat networks.
It will not, however, form mesh tunnel networks with the other drivers, even if you use geneve for the
other ml2 driver too.
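
For example, a shared vlan setup would look something along these lines in ml2_conf.ini
(physnet1 and the vlan range here are just placeholder values for illustration):

  [ml2]
  type_drivers = flat,vlan,geneve
  tenant_network_types = vlan

  [ml2_type_vlan]
  # both mech drivers can bind segments allocated from the same shared range
  network_vlan_ranges = physnet1:100:200
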
> > Also, the problem may be that when you are using networking-ovn the whole
> > neutron topology is different. There are different agents, for example.
> > 
> > Please open a bug for that against networking-ovn. I think that the
> > networking-ovn team will take a look into it.
> 
> networking-midonet ignores networks without "midonet" type segments to
> avoid interfering with other mechanism drivers.
> Maybe networking-ovn could have something similar.
That is actually the opposite of how it should work.
You are meant to be able to have multiple ml2 drivers share the same segmentation type,
and you are not meant to have a segmentation type that is specific to a mech driver.
Given that we don't schedule based on segmentation type support today either (we should, by the way),
it would be very fragile to use a dedicated ovn segmentation type, and I would not advise doing it for
midonet either.

Ideally we would create placement aggregates or traits to track which segmentation types
are supported by which hosts. Traits are probably better for the segmentation types, but modelling the network
segments themselves would be better done with aggregates.

If we really wanted to model the capacity of the segmentation types, we would additionally create sharing resource providers
with inventories of network segmentation type resource classes per physnet, with a single global RP for the tunneled
types. Then every time you allocated a network in neutron you would create an allocation for that network and tag ports
with the appropriate aggregate request.

On the nova side we could combine the segment and segmentation type aggregate requests from the port with any
other aggregates from nova and pass all of them as member_of requirements to placement, to ensure we land on a
host that can provide the required network connectivity. Today we literally just assume all nodes are connected
to all networks with all segmentation types and hope for the best.

That's a bit of a tangent, but just pointing out that we should schedule on network connectivity and segmentation types;
we should not have backend-specific segmentation types.
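
As a rough sketch of the traits idea: with the osc-placement plugin you could imagine tagging the compute node
resource providers something like this (the CUSTOM_* trait names are hypothetical, nothing creates or reports
them today):

  # hypothetical custom traits, one per supported segmentation type
  openstack --os-placement-api-version 1.6 trait create CUSTOM_NET_SEGMENT_TYPE_VLAN
  openstack --os-placement-api-version 1.6 trait create CUSTOM_NET_SEGMENT_TYPE_GENEVE
  # note: "trait set" replaces the provider's full trait list
  openstack --os-placement-api-version 1.6 resource provider trait set \
      --trait CUSTOM_NET_SEGMENT_TYPE_VLAN \
      --trait CUSTOM_NET_SEGMENT_TYPE_GENEVE \
      <compute-node-rp-uuid>

Scheduling requests could then include those traits as required, the same way other traits are passed to
placement today.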

> 
> Wrt agents, last time I checked there was no problem with running the
> midonet agent and the ovs agent on the same host, sharing the kernel
> datapath, so I guess there's no problem with ovn either.
You can run ml2/ovn and ml2/ovs on the same cloud;
just put ml2/ovs first in the mechanism driver list. It will fail to bind if a host does not have
the ovs neutron agent running, and the port will then be bound by ml2/ovn instead.

It might work the other way around too, but I have not tested that.
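
Concretely, that ordering is just the mechanism_drivers option in ml2_conf.ini, something along these lines
(the driver aliases may differ slightly depending on your packaging/version):

  [ml2]
  # openvswitch is tried first; ovn only binds the port if the ovs agent is absent
  mechanism_drivers = openvswitch,ovn
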
> 
> Wrt l3, unfortunately neither midonet nor ovn has implemented the "l3
> flavor" thing yet, so you have to choose a single l3 plugin.
> IIRC, Sam's deployment doesn't use l3 for linuxbridge, right?
If you have dedicated network nodes that is not really a problem;
just make sure that they are all ovn or all ovs or whatever makes sense.
It's the same way that, if you deploy with ml2/ovs and want to use ovs-dpdk,
you only install ovs-dpdk on the compute nodes and use kernel ovs on the networking nodes
to avoid the terrible network performance of using network namespaces for nat and routing.

If you have tunneled networks it would be an issue, but in that case you just need to ensure that at least one router
is created by each plugin, so you would use HA routers by default and set the HA factor so that routers are created on
nodes with both mechanism drivers. Again, however, since the different ml2 drivers do not form a mesh, you should
really only use different ml2 drivers if you are using vlan or flat networks.
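
For the HA router part, on the ml2/ovs side that is roughly these neutron.conf options (the values here are just
illustrative, and how the ovn side places its routers is handled separately by ovn itself):

  [DEFAULT]
  # create tenant routers as HA by default
  l3_ha = true
  # schedule each HA router to up to this many l3 agents
  max_l3_agents_per_router = 3
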
> 
> > 
> > On Mon, Nov 25, 2019 at 04:32:50PM +1100, Sam Morrison wrote:
> > > We are looking at using OVN and are having some issues with it in our ML2 environment.
> > > 
> > > We currently have 2 mechanism drivers in use: linuxbridge and midonet and these work well (midonet is the default
> > > tenant network driver for when users create a network)
> > > 
> > > Adding OVN as a third mechanism driver causes the linuxbridge and midonet networks to stop working in terms of
> > > CRUD operations etc.
I would try adding ovn last so it is only used if the other two cannot bind the port.
The mechanism driver list is ordered for this reason, so you can express preference.
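
For your setup that would be something along the lines of (assuming the usual driver aliases for the three
drivers you listed):

  [ml2]
  mechanism_drivers = linuxbridge,midonet,ovn
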
> > > It looks as if the OVN driver thinks it’s the only player and is trying to do things on ports that are in
> > > linuxbridge or midonet networks.
That would be a bug, if so.
> > > 
> > > Am I missing something here? (We’re using Stein version)
> > > 
> > > 
> > > Thanks,
> > > Sam
> > > 
> > > 
> > > 
> > 
> > --
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> > 
> > 
> 
> 



