<div dir="ltr"><div>Hi Luis,</div><div><br></div><div>I have redeployed my lab, and I have the following components: </div><div><br></div><div>rack-1-host-1 - controller<br></div><div>rack-1-host-2 - compute1<br></div><div>rack-2-host-1 - compute2 <br></div><div><br></div><div><br></div><div># I am running ovn-bgp-agent on only the two compute nodes, compute1 and compute2 </div><div>[DEFAULT]<br>debug=False<br>expose_tenant_networks=True<br>driver=ovn_bgp_driver<br>reconcile_interval=120<br>ovsdb_connection=unix:/var/run/openvswitch/db.sock<br></div><div><br></div><div>### Without any VM at present, I can see only the router gateway IP on rack-1-host-2 </div><div><br></div><div>vagrant@rack-1-host-2:~$ ip a show ovn<br>37: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 0a:f7:6e:e0:19:69 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.144/32">172.16.1.144/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 fe80::8f7:6eff:fee0:1969/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>vagrant@rack-2-host-1:~$ ip a show ovn<br>15: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 56:61:6b:29:ac:29 brd ff:ff:ff:ff:ff:ff<br> inet6 fe80::5461:6bff:fe29:ac29/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>### Let's create vm1, which ends up on rack-1-host-2, but it didn't expose vm1's IP (tenant IP); same on rack-2-host-1</div><div><br></div><div>vagrant@rack-1-host-2:~$ ip a show ovn<br>37: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 0a:f7:6e:e0:19:69 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.144/32">172.16.1.144/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 
fe80::8f7:6eff:fee0:1969/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div>vagrant@rack-2-host-1:~$ ip a show ovn<br>15: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 56:61:6b:29:ac:29 brd ff:ff:ff:ff:ff:ff<br> inet6 fe80::5461:6bff:fe29:ac29/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>### Let's attach a floating IP to vm1 and see. Now I can see vm1's IP 10.0.0.17 got exposed on rack-1-host-2; at the same time, nothing on rack-2-host-1 (of course, because no VM is running on it)</div><div><br></div><div>vagrant@rack-1-host-2:~$ ip a show ovn<br>37: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 0a:f7:6e:e0:19:69 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.144/32">172.16.1.144/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://10.0.0.17/32">10.0.0.17/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://172.16.1.148/32">172.16.1.148/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 fe80::8f7:6eff:fee0:1969/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>vagrant@rack-2-host-1:~$ ip a show ovn<br>15: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 56:61:6b:29:ac:29 brd ff:ff:ff:ff:ff:ff<br> inet6 fe80::5461:6bff:fe29:ac29/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>#### Let's spin up vm2, which should end up on the other compute node, rack-2-host-1 (no change yet; vm2's IP wasn't exposed anywhere yet
)</div><div><br></div><div>vagrant@rack-1-host-2:~$ ip a show ovn<br>37: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 0a:f7:6e:e0:19:69 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.144/32">172.16.1.144/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://10.0.0.17/32">10.0.0.17/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://172.16.1.148/32">172.16.1.148/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 fe80::8f7:6eff:fee0:1969/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>vagrant@rack-2-host-1:~$ ip a show ovn<br>15: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 56:61:6b:29:ac:29 brd ff:ff:ff:ff:ff:ff<br> inet6 fe80::5461:6bff:fe29:ac29/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>#### Let's again attach a floating IP, this time to vm2 (so far nothing has changed; technically it should expose the IP on rack-1-host-2)</div><div><br></div><div>vagrant@rack-1-host-2:~$ ip a show ovn<br>37: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 0a:f7:6e:e0:19:69 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.144/32">172.16.1.144/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://10.0.0.17/32">10.0.0.17/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet <a href="http://172.16.1.148/32">172.16.1.148/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 fe80::8f7:6eff:fee0:1969/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>vagrant@rack-2-host-1:~$ ip a show 
ovn<br>15: ovn: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovn-bgp-vrf state UNKNOWN group default qlen 1000<br> link/ether 56:61:6b:29:ac:29 brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://172.16.1.143/32">172.16.1.143/32</a> scope global ovn<br> valid_lft forever preferred_lft forever<br> inet6 fe80::5461:6bff:fe29:ac29/64 scope link<br> valid_lft forever preferred_lft forever<br></div><div><br></div><div><br></div><div>Here are the logs - <a href="https://paste.opendev.org/show/bRThivJE4wvEN92DXJUo/">https://paste.opendev.org/show/bRThivJE4wvEN92DXJUo/</a> </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Aug 25, 2022 at 6:25 AM Luis Tomas Bolivar <<a href="mailto:ltomasbo@redhat.com">ltomasbo@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Aug 25, 2022 at 11:31 AM Satish Patel <<a href="mailto:satish.txt@gmail.com" target="_blank">satish.txt@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Hi Luis,<div><br></div><div>Very interesting. You are saying it will only expose the tenant IP on the gateway port node? Even if we have a DVR setup in the cluster, correct? </div></div></blockquote><div><br></div><div>Almost. The path is the same as in a DVR setup without BGP (with the difference that you can reach the internal IP). 
In a DVR setup, when the VM is on a tenant network, without a FIP, the traffic goes out through the cr-lrp (ovn router gateway port), i.e., the node hosting that port, which connects the router (where the VM's subnet is attached) to the provider network.</div><div><br></div><div>Note this is a limitation due to how OVN is used in OpenStack Neutron, where traffic needs to be injected into the OVN overlay in the node holding the cr-lrp. We are investigating possible ways to overcome this limitation and expose the IP right away in the node hosting the VM.<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br></div><div>Is the gateway node going to expose the IPs for all the other compute nodes? <br></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br></div><div>What if I have multiple gateway nodes? </div></div></blockquote><div><br></div><div>No, each router connected to the provider network will have its own ovn router gateway port, and that can be allocated on any node which has "enable-chassis-as-gw". What is true is that all VMs on tenant networks connected to the same router will be exposed in the same location.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br></div><div>Did you configure that flag on all nodes or just the gateway node? 
</div></div></blockquote><div><br></div><div>I usually deploy with 3 controllers, which are also my "networker" nodes, so those are the ones having the enable-chassis-as-gw flag.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br><div dir="ltr">Sent from my iPhone</div><div dir="ltr"><br><blockquote type="cite">On Aug 25, 2022, at 4:14 AM, Luis Tomas Bolivar <<a href="mailto:ltomasbo@redhat.com" target="_blank">ltomasbo@redhat.com</a>> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><div dir="ltr"><div>I tested it locally and it is exposing the IP properly on the node where the ovn router gateway port is allocated. Could you double-check whether that is the case in your setup too?<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 24, 2022 at 8:58 AM Luis Tomas Bolivar <<a href="mailto:ltomasbo@redhat.com" target="_blank">ltomasbo@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 23, 2022 at 6:04 PM Satish Patel <<a href="mailto:satish.txt@gmail.com" target="_blank">satish.txt@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Folks,<br><div><br></div><div>I am setting up an ovn-bgp-agent lab in "BGP mode" and I found everything working great except exposing tenant networks <a href="https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/" target="_blank">https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/</a> </div><div><br></div><div>Lab Summary:</div><div><br></div><div>1 controller node </div><div>3 compute nodes</div><div><br></div><div>ovn-bgp-agent is running on all compute nodes because I am using "enable_distributed_floating_ip=True" <br></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>ovn-bgp-agent config:</div><div><br></div><div>[DEFAULT]<br>debug=False<br>expose_tenant_networks=True<br>driver=ovn_bgp_driver<br>reconcile_interval=120<br>ovsdb_connection=unix:/var/run/openvswitch/db.sock<br></div><div><br></div><div>I am not seeing my VM's tenant IP getting exposed, but when I attach a FIP it gets exposed on the loopback address. Here is the full trace of debug logs: <a href="https://paste.opendev.org/show/buHiJ90nFgC1JkQxZwVk/" target="_blank">https://paste.opendev.org/show/buHiJ90nFgC1JkQxZwVk/</a> </div></div></blockquote><div><br></div><div>It is not exposed on any node, right? Note that when expose_tenant_networks is enabled, the traffic to the tenant VM is exposed on the node holding the cr-lrp (ovn router gateway port) for the router connecting the tenant network to the provider one.</div><div><br></div><div>The FIP will be exposed on the node where the VM is.</div><div><br></div><div>On the other hand, the error you see there should not happen, so I'll investigate why that is and also double-check whether the expose_tenant_networks flag is broken somehow. <br></div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Thanks!<br></div></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div>LUIS TOMÁS BOLÍVAR<br>Principal Software Engineer<br>Red Hat<br>Madrid, Spain<br><a href="mailto:ltomasbo@redhat.com" target="_blank">ltomasbo@redhat.com</a> <br> <br></div></div></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div>LUIS TOMÁS BOLÍVAR<br>Principal Software Engineer<br>Red Hat<br>Madrid, Spain<br><a href="mailto:ltomasbo@redhat.com" target="_blank">ltomasbo@redhat.com</a> <br> <br></div></div></div></div>
</div></blockquote></div></div></blockquote></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div>LUIS TOMÁS BOLÍVAR<br>Principal Software Engineer<br>Red Hat<br>Madrid, Spain<br><a href="mailto:ltomasbo@redhat.com" target="_blank">ltomasbo@redhat.com</a> <br> <br></div></div></div></div>
</blockquote></div>