[neutron] Switching the ML2 driver in-place from linuxbridge to OVN for an existing Cloud

Christian Rohmann christian.rohmann at inovex.de
Mon Aug 29 08:47:17 UTC 2022


Thanks Slawek for your quick response!


On 23/08/2022 07:47, Slawek Kaplonski wrote:
>> 1) Are the data models of the user-managed resources abstract (enough)
>> from the ML2 driver used?
>> So would the composition of a router, a network, some subnets, a few
>> security groups and a few instances in a project just result in a
>> different instantiation of packet handling components,
>> but be otherwise transparent to the user?
> Yes, data models are the same, so all networks, routers and subnets will be the same, but implemented differently by a different backend.
> The only significant difference may be the network types, as OVN works mostly with Geneve tunnel networks, and with the Linuxbridge backend You are using VXLAN, IIUC from your email.

That is reassuring. Yes, we currently use VXLAN. But even with the same 
type of tunneling, I suppose the networks and their IDs will not align 
across the two backends to form a proper layer 2 domain,
not to mention all the other services such as DHCP or metadata. See the 
next question about my idea to at least allow for a gradual switchover.
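
Something along these lines (just a rough, untested sketch - openstacksdk 
and a clouds.yaml entry named "mycloud" are merely my assumptions for the 
example) should show which backend's agents each host currently reports 
to Neutron:

# Sketch only: group Neutron agents by host to see which backend a node
# currently serves. Assumes openstacksdk and a clouds.yaml entry "mycloud".
from collections import defaultdict

import openstack

conn = openstack.connect(cloud="mycloud")

agents_by_host = defaultdict(list)
for agent in conn.network.agents():
    # agent_type is e.g. "Linux bridge agent", "DHCP agent" or "L3 agent"
    # today, and should show up as the OVN controller/metadata agents on
    # nodes that were already switched over.
    agents_by_host[agent.host].append((agent.agent_type, agent.is_alive))

for host, agents in sorted(agents_by_host.items()):
    print(host)
    for agent_type, alive in agents:
        print(f"  {agent_type:30s} alive={alive}")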

>> 2) What could be possible migration strategies?
>>
>> [...] Or project by project by changing the network agents over
>> to nodes already running OVN?
> Even if You keep vxlan networks with the OVN backend (support for that is really rather limited), You will not be able to have tunnels established between nodes with different backends, so there will be no connectivity between VMs on hosts with different backends.

I was more thinking of moving all of a project's resources to network nodes 
(and hypervisors) which already run OVN. So split the cloud into two 
classes of machines: one set unchanged, running Linuxbridge, and the other
in OVN mode. To migrate "a project", all agents serving that project's routers 
and networks would be switched over to agents running on OVN-powered nodes.
So this would still be a hard cut-over, but limited to a single project, as an 
alternative to replacing all of the network agents on all nodes and for 
all projects at the same time.

Wouldn't that work - in theory - or am I missing something obvious here?
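
To get an idea of what such a per-project cut-over would actually have to 
touch, something like this (again only a sketch with openstacksdk; the 
project ID is a placeholder) should list the affected resources:

# Sketch only: enumerate the Neutron resources of one project that a
# per-project cut-over would have to move. The project ID is a placeholder.
import openstack

conn = openstack.connect(cloud="mycloud")
project_id = "<PROJECT_UUID>"

routers = list(conn.network.routers(project_id=project_id))
networks = list(conn.network.networks(project_id=project_id))
ports = list(conn.network.ports(project_id=project_id))

print(f"routers:  {len(routers)}")
print(f"networks: {len(networks)}")
# device_owner distinguishes VM ports from DHCP/router ports, which the
# new backend would (re)create itself after the switch.
for port in ports:
    print(f"  {port.id} {port.device_owner}")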

>> Has anybody ever done something similar or heard about this being done
>> anywhere?
> I don't know of anyone who did that, but if there is someone, I would be happy to hear how it was done and how it went :)

We will certainly share our story - if we live to talk about it ;-)



Thanks again,
With kind regards


Christian