Masquerading VM works 99%

Sean Mooney smooney at redhat.com
Wed Feb 16 18:38:03 UTC 2022


On Wed, 2022-02-16 at 18:04 +0000, Derek O keeffe wrote:
>  Hi, Laurent, Slawek,
> Removing the port security did indeed solve the issue for me, so I really appreciate the advice from both of you.
> On your point below, Slawek: "ML2/Linuxbridge doesn't support distributed routing (DVR) at all. What do you mean by "distributed routing" here?"
> We have enabled DVR on the nodes in the following locations:
> plugins/ml2/ml2_conf.ini:enable_distributed_routing = True
> plugins/ml2/linuxbridge_agent.ini:enable_distributed_routing = True
> neutron.conf:router_distributed = True
> 
> We have obviously been mistaken here; we had assumed this was working, as the VMs on each compute can continue working fine if the controller is shut down. Would this be a reason that if we spin up a Neutron router the interface is always down and we cannot bring it up? We're a little caught on the networking side of things.
> Regards,
> Derek
> 

Linux bridge supports VRRP HA routing:
https://docs.openstack.org/neutron/latest/admin/deploy-lb-ha-vrrp.html
but OVS-style DVR, where each compute node does the routing, appears to be unsupported.
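
For reference, enabling HA routers per that guide boils down to something like
this minimal sketch (illustrative values, set on the controller/network nodes):

    # neutron.conf on the controller/network nodes (illustrative values)
    [DEFAULT]
    l3_ha = True
    # number of L3 agents that will host each HA router
    max_l3_agents_per_router = 3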

I thought we added DVR to Linux bridge as part of the VLAN support in Kilo,
or at least that it was proposed at one point, but the docs don't reference it.

Looking at the agent config,
https://docs.openstack.org/neutron/latest/configuration/linuxbridge-agent.html
the "enable_distributed_routing" option does not exist.

https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.router_distributed
https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.enable_dvr
are generic Neutron top-level options, but I think those just set the default values, so they should also not be set in the agent config.

DVR is implemented by the L3 agent, however, and is controlled by
https://docs.openstack.org/neutron/latest/configuration/l3-agent.html#DEFAULT.agent_mode
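
For comparison, an ML2/OVS DVR deployment would set something like the
following in l3_agent.ini; this is a minimal sketch of the OVS case and, per
the above, is not available with Linux bridge:

    # l3_agent.ini on compute nodes (ML2/OVS only, shown for comparison)
    [DEFAULT]
    agent_mode = dvr

    # l3_agent.ini on network/controller nodes (handles SNAT)
    [DEFAULT]
    agent_mode = dvr_snat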

I had thought that if you enabled that and deployed the L3 agent on each of the compute nodes with Linux bridge,
it would still work; after all, the routing is implemented by the kernel, not OVS, when using DVR, so the same namespace approach should work.
But I guess that was never implemented, so your only option with Linux bridge would be to use HA routers, not DVR routers.

The main delta is that for HA routers, all routing happens on the network nodes/controller where the L3 agent is running, rather than
being distributed across all compute nodes.
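
For completeness, creating an HA router explicitly would look something like
this minimal sketch (the router name is illustrative, and the --ha flag
normally requires admin credentials):

    # create a VRRP HA router explicitly (illustrative name, admin-only flag)
    openstack router create --ha router1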




> On Tuesday, 15 February 2022 09:41:54 GMT, Slawek Kaplonski <skaplons at redhat.com> wrote:
>  
>  Hi,
> 
> On Friday, 11 February 2022 20:31:24 CET Laurent Dumont wrote:
> > You might want to look at port-security if you are using an OpenStack VM as
> > more of a router. By default, it will only permit its own MAC address +
> > IP address to exit the interface.
> > 
> > You can fully disable it to see if it's the root cause.
> > 
> >     1. Remove allowed-address-pairs.
> >     2. Remove security-groups.
> >     3. Disable port-security (see the CLI sketch after this list).
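
Those three steps map to roughly the following CLI, a minimal sketch where
<port-id> is a placeholder for the internal port of the external VM:

    # hedged sketch; <port-id> is a placeholder
    openstack port set --no-allowed-address <port-id>
    openstack port set --no-security-group <port-id>
    openstack port set --disable-port-security <port-id>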
> 
> It is very likely that the issue is caused by the port security on the
> internal interface of the external VM (where packets are dropped).
> 
> > 
> > 
> > On Thu, Feb 10, 2022 at 11:17 AM Derek O keeffe <derekokeeffe85 at yahoo.ie> wrote:
> > > Hi all,
> > > 
> > > We have an OpenStack cluster with one controller and 4 computes (Victoria).
> > > We have set it up using VLAN provider networks with the linuxbridge agent,
> > > distributed routing & ML2 (I am only partly on the networking, so there
> > > could be more to that, which I can find out if needed).
> 
> ML2/Linuxbridge doesn't support distributed routing (DVR) at all. What do you 
> mean by "distributed routing" here?
> 
> > > 
> > > So I was tasked with creating two instances: one (let's call it the
> > > external VM) with an external interface 10.55.9.67 and an internal interface
> > > 192.168.1.2. A second instance (let's call it the internal VM) would then be
> > > placed on the 192.168.1.0 network.
> > > 
> > > I configured masquerading on the "external VM" and tried to ping the
> > > outside world from the "internal" VM, following something like this guide:
> > > https://kifarunix.com/configure-ubuntu-20-04-as-linux-router/
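
The masquerading setup described in that guide boils down to something like
this minimal sketch, assuming eth0 is the external interface of the external
VM:

    # on the external VM (sketch; eth0 assumed to be the external interface)
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE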
> > > 
> > > 
> > > Both VMs were instantiated on the same compute host (I've tried it with
> > > them on separate hosts as well).
> > > 
> > > I can see the ping leave using tcpdump along the way, and it makes it all
> > > the way back to the internal interface on the external machine. It just
> > > fails on the last hop to the internal machine. I've tried everything in my
> > > power to find out why this won't work, so I would be grateful for any advice
> > > at all. I have added the trace below to show how I followed the ping
> > > manually, where it went, and where it failed. Thank you in advance.
> > > 
> > > Following the ping from source to destination and back (see the tcpdump
> > > sketch after this trace):
> > > Generated on the private VM
> > > sent to the internal interface on the external vm
> > > sent to the external interface on the external vm
> > > sent to the tap interface on the compute
> > > sent to the physical nic on the compute
> > > sent to the nic on the network device out to the internet
> > > 
> > > received on the nic on the network device from the internet
> > > received on physical nic on the compute
> > > received on tap interface on compute
> > > received on external interface on the external vm
> > > received on the internal interface on the external vm
> > > NEVER gets to the last step on the internal vm
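
Each hop in a trace like this can be watched with something like the following
sketch, where the interface name is a placeholder:

    # follow ICMP on a given hop (placeholder interface name)
    tcpdump -n -i <interface> icmp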
> > > 
> > > Regards,
> > > Derek
> 
> 
> 



