<div dir="ltr">Daniel, thank you very much for the extensive and detailed email. <div><br></div><div>The plan looks good to me and it makes sense, also the OVS option will still be </div><div>tested, and available when selected.</div><div><div><br></div><div><br></div><div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 4:41 PM Daniel Alvarez Sanchez <<a href="mailto:dalvarez@redhat.com">dalvarez@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Stackers!<br>
>
> The purpose of this email is to share with the community the intention
> of switching the default network backend in TripleO from ML2/OVS to
> ML2/OVN by changing the mechanism driver from openvswitch to ovn. This
> doesn’t mean that ML2/OVS will be dropped, but users deploying
> OpenStack without explicitly specifying a network driver will get
> ML2/OVN by default.
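
For anyone who wants to picture the change concretely, this is roughly what it
means at the ML2 plugin level; the values below are only a sketch (TripleO
renders the real file, and the OVN database endpoints are placeholders):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values)
    # Current default: mechanism_drivers = openvswitch, tenant_network_types = vxlan
    [ml2]
    mechanism_drivers = ovn
    tenant_network_types = geneve

    [ovn]
    ovn_nb_connection = tcp:192.0.2.1:6641
    ovn_sb_connection = tcp:192.0.2.1:6642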
>
> OVN in Short
> ==========
>
> Open Virtual Network is managed under the OVS project, and was created
> by the original authors of OVS. It is an attempt to re-do the ML2/OVS
> control plane, using lessons learned throughout the years. It is
> intended to be used in projects such as OpenStack and Kubernetes.

Also oVirt / RHEV.

> OVN
> has a different architecture, moving us away from Python agents
> communicating with the Neutron API service via RabbitMQ to daemons
> written in C communicating via OpenFlow and OVSDB.
>
> OVN is built with a modern architecture that offers better foundations
> for a simpler and more performant solution. What does this mean? For
> example, at Red Hat we executed some preliminary testing during the
> Queens cycle and found significant CPU savings due to OVN not using
> RabbitMQ (CPU utilization during a Rally scenario using ML2/OVS [0] or
> ML2/OVN [1]). Also, we tested API performance and found out that most
> of the operations are significantly faster with ML2/OVN. Please see
> more details in the FAQ section.
>
> Here are a few useful links about OpenStack’s integration of OVN:
>
> * OpenStack Boston Summit talk on OVN [2]
> * OpenStack networking-ovn documentation [3]
> * OpenStack networking-ovn code repository [4]
>
> How?
> ====
>
> The goal is to merge this patch [5] during the Stein cycle, which
> includes the following actions:
>
> 1. Switch the default mechanism driver from openvswitch to ovn.
> 2. Adapt all jobs so that they use ML2/OVN as the network backend.
> 3. Create a legacy environment file for ML2/OVS to allow deployments
>    based on it (see the deploy sketch below this list).
> 4. Flip the scenario007 job from ML2/OVN to ML2/OVS so that we continue
>    testing it.
> 5. Continue using ML2/OVS in the undercloud.
> 6. Ensure that updates/upgrades from ML2/OVS don’t break and don’t
>    switch automatically to the new default. As some parity gaps exist
>    right now, we don’t want to change the network backend automatically.
>    Instead, if the user wants to migrate from ML2/OVS to ML2/OVN, we’ll
>    provide an Ansible-based tool that will perform the operation.
>    More info and code at [6].
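
To make the operator-facing impact of items 1 and 3 concrete, deployments could
end up looking roughly like this; the ML2/OVS environment file name is my
assumption, the final name will be whatever lands in [5]:

    # Default deployment: no extra environment file, ML2/OVN is selected.
    openstack overcloud deploy --templates

    # Explicitly keeping ML2/OVS via the legacy environment file
    # (file name illustrative; check the merged patch [5]):
    openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml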
>
> Reviews, comments and suggestions are really appreciated :)
>
>
> FAQ
> ===
>
> Can you talk about the advantages of OVN over ML2/OVS?
> -------------------------------------------------------------------------------
>
> If asked to describe the ML2/OVS control plane (OVS, L3, DHCP and
> metadata agents using the messaging bus to sync with the Neutron API
> service), one would not tend to use the term ‘simple’. It makes liberal
> use of a wide range of Linux networking technologies:
> * iptables
> * network namespaces
> * ARP manipulation
> * Different forms of NAT
> * keepalived, radvd, haproxy, dnsmasq
> * Source-based routing
> * … and of course OVS flows.
>
> OVN simplifies this to a single process running on compute nodes, and
> another process running on centralized nodes, communicating via OVSDB
> and OpenFlow, ultimately setting OVS flows.
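
As a rough illustration of how much thinner the moving parts are, these are the
kinds of things one would poke at on an ML2/OVN deployment (assuming local
access to the OVN databases; otherwise pass the endpoints with --db):

    # On a compute node, ovn-controller is the only OVN-specific daemon.
    ovn-controller --version

    # Chassis (hypervisors/gateway nodes) registered in the southbound DB:
    ovn-sbctl show

    # Logical topology that Neutron writes into the northbound DB:
    ovn-nbctl show

    # OpenFlow rules that ovn-controller ultimately programs into OVS:
    ovs-ofctl dump-flows br-int | head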
>
> The simplified, new architecture allows us to re-do features like DVR
> and L3 HA in more efficient and elegant ways. For example, L3 HA
> failover is faster: it doesn’t use keepalived; instead, OVN monitors
> neighbor tunnel endpoints. OVN supports enabling both DVR and L3 HA
> simultaneously, something we never supported with ML2/OVS.
>
> We also found that not depending on RPC messages for agent
> communication brings a lot of benefits. In our experience, RabbitMQ
> sometimes becomes a bottleneck and can be very resource-intensive.
>
>
> What about the undercloud?
> --------------------------------------
>
> ML2/OVS will still be used in the undercloud, mainly because OVN has
> some limitations with regard to baremetal provisioning (keep reading
> about the parity gaps). We aim to convert the undercloud to ML2/OVN as
> soon as possible, to provide operators a more consistent experience.
>
> It would be possible, however, to use the Neutron DHCP agent in the
> short term to work around this limitation; in the long term we intend
> to implement support for baremetal provisioning in the OVN built-in
> DHCP server (see the sketch below).
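
If someone wanted to experiment with that short-term workaround, it would
presumably boil down to mapping the DHCP agent service back in through a custom
environment file, along the lines of the purely illustrative snippet below (the
exact service template path depends on the release):

    # custom-dhcp-agent.yaml (illustrative)
    resource_registry:
      # Keep the Neutron DHCP agent deployed alongside ML2/OVN until the
      # OVN built-in DHCP server can handle baremetal provisioning.
      OS::TripleO::Services::NeutronDhcpAgent: <tripleo-heat-templates>/docker/services/neutron-dhcp.yaml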
>
>
> What about CI?
> ---------------------
>
> * networking-ovn has:
>   * Devstack based Tempest (API and scenario tests from Tempest and the
>     Neutron Tempest plugin) against the latest released OVS version, and
>     against OVS master (thus also OVN master)
>   * Devstack based Rally
>   * Grenade
>   * A multinode, container based TripleO job that installs and runs a
>     basic VM connectivity scenario test
>   * Support for both Python 3 and 2
> * TripleO currently has OVN enabled in one quickstart featureset (fs30).
>
> Are there any known parity issues with ML2/OVS?
> -------------------------------------------------------------------
>
> * OVN supports VLAN provider networks, but not VLAN tenant networks.
>   This will be addressed and is being tracked in RHBZ 1561880 [7]
>   (see the provider-network example below this list).
> * SR-IOV: this scenario currently requires OVN to support VLAN tenant
>   networks and the Neutron DHCP agent to be deployed. The goal is to add
>   support in OVN so we can get rid of the Neutron DHCP agent. [8]
> * QoS: lack of support for DSCP marking and egress bandwidth limiting,
>   RHBZ 1503494 [9]
> * OVN does not presently support the new Security Groups logging API,
>   RHBZ 1619266 [10]
> * OVN does not correctly support Jumbo frames for North/South traffic,
>   RHBZ 1547074 [11]
> * The OVN built-in DHCP server currently cannot be used to provision
>   baremetal nodes (RHBZ 1622154 [12]); this affects the undercloud and
>   the overcloud’s baremetal-to-tenant use case.
> * End-to-end encryption support in TripleO (RHBZ 1601926 [13])
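
To illustrate the first gap: an explicit VLAN provider network like the one
below works with ML2/OVN today, whereas plain tenant networks backed by the
VLAN type driver (tenant_network_types = vlan) do not yet; the physical network
name and segment ID are deployment-specific examples:

    openstack network create provider-vlan-net \
      --provider-network-type vlan \
      --provider-physical-network datacentre \
      --provider-segment 1234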
>
> More info at [14].
>
>
> What does the performance look like?
> -------------------------------------------------
>
> We have carried out different performance tests. Overall, ML2/OVN
> outperforms ML2/OVS in most of the operations, as this graph [15]
> shows. Only network creation and port listing are slower, mostly
> because ML2/OVN creates an extra port (for metadata) upon network
> creation, so the number of ports listed for the same Rally task is
> twice as high in the ML2/OVN case.
>
> Also, resource utilization is lower with ML2/OVN [16] than with
> ML2/OVS [17], mainly due to the lack of agents and not using RPC.
>
> OVN only supports VLAN and Geneve (tunneled) networks, while ML2/OVS
> uses VXLAN. What, if any, is the impact? What about hardware offload?
> -----------------------------------------------------------------------------------------------------
>
> Good question! We asked this ourselves, and research showed that this
> is not a problem. Normally, NICs that support VXLAN also support
> Geneve hardware offload. Interestingly, even in the cases where they
> don’t, performance was found to be better using Geneve due to other
> optimizations that Geneve benefits from. More information can be found
> in Russell Bryant’s blog [18]; he did extensive work in this space.
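
A quick way to sanity-check this on a deployment (the device name and feature
labels below are illustrative and vary per NIC driver):

    # With ML2/OVN, a plain tenant network comes out Geneve-backed:
    openstack network create demo-net
    openstack network show demo-net -c provider:network_type -f value
    # -> geneve   (provider attributes are only visible to admins)

    # Rough check for UDP tunnel (VXLAN/Geneve) segmentation offload on a NIC:
    ethtool -k eth0 | grep -i udp_tnl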
>
>
> Links
> ====
>
> [0] https://imgur.com/a/oOmuAqj
> [1] https://imgur.com/a/N9jrIXV
> [2] https://www.youtube.com/watch?v=sgc7myiX6ts
> [3] https://docs.openstack.org/networking-ovn/queens/admin/index.html
> [4] https://github.com/openstack/networking-ovn
> [5] https://review.openstack.org/#/c/593056/
> [6] https://github.com/openstack/networking-ovn/tree/master/migration
> [7] https://bugzilla.redhat.com/show_bug.cgi?id=1561880
> [8] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html
> [9] https://bugzilla.redhat.com/show_bug.cgi?id=1503494
> [10] https://bugzilla.redhat.com/show_bug.cgi?id=1619266
> [11] https://bugzilla.redhat.com/show_bug.cgi?id=1547074
> [12] https://bugzilla.redhat.com/show_bug.cgi?id=1622154
> [13] https://bugzilla.redhat.com/show_bug.cgi?id=1601926
> [14] https://wiki.openstack.org/wiki/Networking-ovn
> [15] https://imgur.com/a/4QtaN6b
> [16] https://imgur.com/a/N9jrIXV
> [17] https://imgur.com/a/oOmuAqj
> [18] https://blog.russellbryant.net/2017/05/30/ovn-geneve-vs-vxlan-does-it-matter/
>
>
> Thanks!
> Daniel Alvarez

-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering