[neutron] OVN and dynamic routing

Francois rigault.francois at gmail.com
Wed Oct 13 08:03:51 UTC 2021


...forgot to add the mailing list in the reply

On Wed, 13 Oct 2021 at 10:01, Francois <rigault.francois at gmail.com> wrote:
>
> On Tue, 12 Oct 2021 at 21:46, Dan Sneddon <dsneddon at redhat.com> wrote:
> >
> > On 10/12/21 07:03, Francois wrote:
> > > Hello Neutron!
> > > I am looking into running stacks with OVN on a leaf-spine network, and
> > > having some floating IPs routed between racks.
> > >
> > > Basically each rack is assigned its own set of subnets.
> > > Some VLANs are stretched across all racks: the provisioning VLAN used
> > > by TripleO to deploy the stack, and the VLANs for the controllers' API
> > > IPs. However, each tenant subnet is local to a rack: for example, each
> > > OVN chassis has an ovn-encap-ip set to an IP from the tenant subnet of
> > > its own rack. Traffic between two racks is sent to a spine, and leaves
> > > and spines run something EVPN-like: each pair of ToRs is a VTEP,
> > > traffic is encapsulated as VXLAN, and routes between VTEPs are
> > > exchanged over BGP.
> > >
> >
> > There has been a lot of work put into TripleO to allow you to provision
> > hosts across L3 boundaries using DHCP relay. You can create a routed
> > provisioning network using "helper-address" or vendor-specific commands
> > on your top-of-rack switches, and a different subnet and DHCP address
> > pool per rack.
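> > For illustration, the undercloud side of such a routed setup is
> > expressed as one subnet section per rack in undercloud.conf (subnet
> > names and ranges below are made up), with each ToR relaying DHCP from
> > its rack subnet to the undercloud:
> > ```
> > [DEFAULT]
> > subnets = leaf0,leaf1
> > local_subnet = leaf0
> >
> > [leaf0]
> > cidr = 192.168.10.0/24
> > dhcp_start = 192.168.10.10
> > dhcp_end = 192.168.10.90
> > inspection_iprange = 192.168.10.100,192.168.10.190
> > gateway = 192.168.10.1
> > masquerade = False
> >
> > [leaf1]
> > cidr = 192.168.11.0/24
> > dhcp_start = 192.168.11.10
> > dhcp_end = 192.168.11.90
> > inspection_iprange = 192.168.11.100,192.168.11.190
> > gateway = 192.168.11.1
> > masquerade = False
> > ```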
>
Yes, I saw that in the doc. I was not planning on using this, for
reasons I mentioned in another reply (this provisioning network is
"useless most of the time" since there is almost no provisioning
happening :D ). If anything, I would love to work on Ironic DHCP-less
deployments, which were almost working last time I tried, and I have
seen the Ironic team contributing fixes since then.
>
> >
> > > I am looking into supporting floating IPs there: I expect floating
> > > IPs to be able to move between racks, so I am looking into
> > > publishing the route for a FIP towards a hypervisor through BGP.
> > > Each FIP is a /32 route with the hypervisor's tenant IP as the next hop.
> >
> > This is becoming a very common architecture, and that is why there are
> > several projects working to achieve the same goal with slightly
> > different implementations.
> >
> > >
> > > It seems there are several ideas to achieve this (it was discussed
> > > [before][1] at the OVS conference):
> > > - using [neutron-dynamic-routing][2] - that seems to have some gaps
> > > for OVN. It uses os-ken to talk to switches and exchange routes
> > > - using the [OVN BGP agent][3], which relies on FRR; it seems there is
> > > a related [RFE][4] for its integration in TripleO
> > >
> > > There is btw also a [BGPVPN][5] project (it does not match my use case,
> > > as far as I understand it) that also has some code that talks
> > > BGP to switches and is already integrated in TripleO.
> > >
> > > For my tests, I was able to use the neutron-dynamic-routing project
> > > (almost) as documented, with a few changes:
> > > - for traffic going from VMs to outside the stack, the hypervisor was
> > > trying to resolve the "gateway of FIPs" with ARP requests, which does
> > > not make any sense. I created a dummy port with the MAC address of the
> > > virtual router of the switches:
> > > ```
> > > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml
> > > - Fixed IP Addresses:
> > > - ip_address: 10.64.254.1
> > > subnet_id: 8f37
> > > ID: 4028
> > > MAC Address: 00:1c:73:00:00:11
> > > Name: lagw
> > > Status: DOWN
> > > ```
> > > this prevents the hypervisor from sending ARP requests to a non-existent gateway.
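> > > For reference, such a dummy port can be created with something like the
> > > following ("provider-subnet" is a placeholder for the actual provider
> > > subnet name):
> > > ```
> > > # dummy port carrying the switches' virtual router MAC and gateway IP
> > > $ openstack port create --network provider \
> > >     --mac-address 00:1c:73:00:00:11 \
> > >     --fixed-ip subnet=provider-subnet,ip-address=10.64.254.1 \
> > >     lagw
> > > ```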
> > > - for traffic coming back, we start the neutron-bgp-dragent agent on
> > > the controllers and create the right BGP speaker, peers, etc.
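> > > Roughly, that setup looks like this (the AS numbers and the peer
> > > address here are only examples):
> > > ```
> > > # create a speaker and associate the provider network with it
> > > $ openstack bgp speaker create --local-as 64999 --ip-version 4 bgpspeaker
> > > $ openstack bgp speaker add network bgpspeaker provider
> > > # peer with the ToR (its virtual router IP in this example)
> > > $ openstack bgp peer create --peer-ip 10.64.254.1 --remote-as 65000 tor
> > > $ openstack bgp speaker add peer bgpspeaker tor
> > > ```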
> > > - neutron-bgp-dragent seems to work primarily with the OVS ML2 plugin:
> > > it selects FIPs and joins them with ports owned by a
> > > "floatingip_agent_gateway", which does not exist with OVN. We can
> > > define such ports ourselves so that the dragent is able to find the
> > > tenant IP of a host:
> > > ```
> > > openstack port create --network provider \
> > >   --device-owner network:floatingip_agent_gateway \
> > >   --host cpu35d.cloud --fixed-ip ip-address=10.64.245.102 ag2
> > > ```
> > > - when creating a floating IP and assigning a port to it, Neutron
> > > reads changes from the OVN SB database and fills in the binding
> > > information on the port:
> > > ```
> > > $ openstack port show -c binding_host_id \
> > >     `openstack floating ip show 10.64.254.177 -f value -c port_id`
> > > +-----------------+--------------+
> > > | Field           | Value        |
> > > +-----------------+--------------+
> > > | binding_host_id | cpu35d.cloud |
> > > +-----------------+--------------+
> > > ```
> > > this allows the dragent to publish the route for the FIP:
> > > ```
> > > $ openstack bgp speaker list advertised routes bgpspeaker
> > > +------------------+---------------+
> > > | Destination      | Nexthop       |
> > > +------------------+---------------+
> > > | 10.64.254.177/32 | 10.64.245.102 |
> > > +------------------+---------------+
> > > ```
> > > - traffic reaches the hypervisor but (for reasons I don't understand) I
> > > had to add a rule:
> > > ```
> > > $ ip rule
> > > 0: from all lookup local
> > > 32765: from all iif vlan1234 lookup ovn
> > > 32766: from all lookup main
> > > 32767: from all lookup default
> > > $ ip route show table ovn
> > > 10.64.254.177 dev vlan1234 scope link
> > > ```
> > > so that the traffic coming in for the FIP is not immediately discarded
> > > by the hypervisor (it's not an ideal solution, but it is a workaround
> > > that makes my one FIP work!)
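> > > For reference, a rule and route like the ones above can be added with
> > > something like this (the routing table number is arbitrary):
> > > ```
> > > # register the "ovn" table name, then steer traffic from vlan1234 to it
> > > $ echo "200 ovn" >> /etc/iproute2/rt_tables
> > > $ ip rule add iif vlan1234 lookup ovn priority 32765
> > > $ ip route add 10.64.254.177/32 dev vlan1234 table ovn
> > > ```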
> > >
> > > So, all in all, it seems it would be possible to use the
> > > neutron-dynamic-routing agent with some minor modifications (e.g. to
> > > also publish the FIP of the OVN L3 gateway router).
> > >
> > > I am wondering whether I have overlooked anything, and if this kind of
> > > deployment (OVN + neutron-dynamic-routing or similar) is already in
> > > use somewhere. Does it make sense to file an RFE for better integration
> > > between OVN and neutron-dynamic-routing?
> >
> > I have been helping with the effort to integrate FRR with OVN in order
> > to advertise FIPs and provider network IPs into BGP. The OVN BGP Agent
> > is very new, and I'm pretty sure that nobody is using it in production
> > yet. However, the initial implementation is fairly simple and hopefully
> > it will mature quickly.
> >
> > As you discovered, the solution that uses neutron-bgp-dragent and os-ken
> > is not compatible with OVN
>
Pretty much the contrary, it basically worked. There are a few
differences, but the gap seems very tiny (unless I overlooked something
and I'm fundamentally wrong). I don't understand why a new project
would be needed to make it work with OVN.
>
> >, that is why ovn-bgp-agent is being
> > developed. You should be able to try the ovn-bgp-agent with FRR and
> > properly configured routing switches; it functions for the basic use case.
> >
> > The OVN BGP Agent will ensure that FIP and provider network IPs are
> > present in the kernel as /32 or /128 host routes, which are then
> > advertised into the BGP fabric using the FRR BGP daemon. If the default
> > route is received from BGP, it will be installed into the kernel by the
> > FRR zebra daemon, which syncs kernel routes with the FRR BGP routing
> > table. The OVN BGP Agent installs flows for the Neutron network gateways
> > that hand off traffic to the kernel for routing. Since the kernel
> > routing table is used, the agent isn't compatible with the DPDK fast
> > datapath yet.
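> > On the FRR side the bgpd configuration needed for this is small; as a
> > rough sketch (ASNs and the peer address are placeholders, and the exact
> > redistribution setup depends on how the agent exposes the routes):
> > ```
> > router bgp 64999
> >  bgp router-id 10.64.245.102
> >  neighbor 10.64.245.1 remote-as 65000
> >  address-family ipv4 unicast
> >   ! advertise the host routes the agent puts into the kernel
> >   redistribute kernel
> >   redistribute connected
> >  exit-address-family
> > ```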
> >
> > We don't have good documentation for the OVN BGP integration yet. I've
> > only recently been able to make it my primary priority, and some of the
> > other engineers who did the initial proof of concept are moving
> > on to other projects. There will be some discussions at the upcoming
> > OpenStack PTG about this work, but I am hopeful that the missing pieces
> > for your use case will come about in the Yoga cycle.
>
I did not try to run the OVN BGP agent, but I saw your blog posts and I
think they are enough to get started with.
I still don't get why an extra OVN BGP agent would be needed. One
thing I was wondering from the blog posts (and your reply here) is
whether every single compute node would need connectivity to the
physical switches to publish the routes - as the dragent runs on the
controller nodes, you only need to configure connectivity between the
controllers and the physical switches, while in the FRR case you need
to open up much more.
>
Would the developments in the Yoga cycle be focused on the OVN BGP
agent only, meaning there is no interest in improving the
neutron-dynamic-routing project?
>
Thanks for your insightful comments :)
>
> >
> > >
> > > Thanks
> > > Francois


