Masquerading VM works 99%

Derek O keeffe derekokeeffe85 at
Wed Feb 16 18:04:20 UTC 2022

Hi Laurent, Slawek,
Removing the port security did indeed solve the issue for me so I really appreciate the advice from both of you.
On your point below, Slawek: "ML2/Linuxbridge doesn't support distributed routing (DVR) at all. What do you mean by 'distributed routing' here?"
We have enabled DVR on the nodes in the following locations:
plugins/ml2/ml2_conf.ini:enable_distributed_routing = True
plugins/ml2/linuxbridge_agent.ini:enable_distributed_routing = True
neutron.conf:router_distributed = True

We have obviously been mistaken here; we had assumed this was working, since the VMs on each compute continue working fine if the controller is shut down. Could this be the reason that, when we spin up a neutron router, its interface is always down and we cannot bring it up? We're a little caught out on the networking side of things.

     On Tuesday 15 February 2022, 09:41:54 GMT, Slawek Kaplonski <skaplons at> wrote:  

On Friday, 11 February 2022 20:31:24 CET Laurent Dumont wrote:
> You might want to look at port-security if you are using an OpenStack VM as
> more of a router. By default, it will permit only its own MAC address and
> IP address to exit the interface.
> You can fully disable it to see if it's the root cause.
>    1. Remove allowed-address-pairs.
>    2. Remove security-groups
>    3. Disable port-security.

It is very likely that the issue is caused by the port security on the 
internal interface of the external vm (where packets are dropped).
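For reference, the three steps Laurent lists can be approximated with the openstack CLI. This is a sketch, not the thread's exact commands; the port ID is a placeholder, and the flag names are those of recent python-openstackclient (verify with `openstack port set --help`):

```shell
# Assumed placeholder; find the real ID with: openstack port list --server <external-vm>
PORT_ID=<internal-port-uuid>

# 1. Remove allowed-address-pairs (if any were set on the port).
openstack port set --no-allowed-address "$PORT_ID"

# 2. Remove all security groups from the port.
openstack port set --no-security-group "$PORT_ID"

# 3. Disable port security entirely, allowing foreign MAC/IP traffic through.
openstack port set --disable-port-security "$PORT_ID"
```

Step 3 is the one that actually lets the internal VM's addresses pass through the external VM's port; steps 1 and 2 must come first because Neutron refuses to disable port security while security groups or address pairs are still attached.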

> On Thu, Feb 10, 2022 at 11:17 AM Derek O keeffe <derekokeeffe85 at>
> wrote:
> > Hi all,
> > 
> > We have an openstack cluster with one controller and 4 computes (Victoria)
> > we have set it up using vlan provider networks with linuxbridge agent,
> > distributed routing & ml2 (I am only partly on the networking so there
> > could be more to that which I can find out if needed)

ML2/Linuxbridge doesn't support distributed routing (DVR) at all. What do you 
mean by "distributed routing" here?

> > 
> > So I was tasked with creating two instances: one (let's call it the
> > external vm) with an external interface and an internal interface.
> > A second instance (let's call it the internal vm) would then be
> > placed on the internal network.
> > 
> > I configured masquerading on the "external vm" and tried to ping the
> > outside world from the "internal" vm as per something like this
> >
> > 571&moderation-hash=b5168c04420557dcdc088994ffa4bdbb#comment-49571
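For context, the masquerading setup on the external vm would be along these lines (a sketch only; the interface names eth0 = external, eth1 = internal are assumptions, not taken from the thread):

```shell
# Enable IPv4 forwarding so the VM routes between its two interfaces.
sysctl -w net.ipv4.ip_forward=1

# NAT traffic from the internal network out of the external interface.
# eth0/eth1 are assumed names; adjust to the actual devices.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Note that even with this in place, Neutron's port security on the external vm's ports will drop packets whose source MAC/IP belongs to the internal vm, which matches the symptom described below.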
> > 
> > 
> > Both VMs were instantiated on the same compute host (I've tried it with
> > them on separate hosts as well).
> > 
> > I can see the ping leave using tcpdumps along the way and it makes it all
> > the way back to the internal interface on the external machine. It just
> > fails on the last hop to the internal machine. I've tried everything in my
> > power to find why this won't work so I would be grateful for any advice at
> > all. I have added the below to show how I followed the ping manually and
> > where it went and when it failed. Thank you in advance.
> > 
> > Following the ping from source to destination and back:
> > Generated on the private VM
> > sent to the internal interface on the external vm
> > sent to the external interface on the external vm
> > sent to the tap interface on the compute
> > sent to the physical nic on the compute
> > sent to the nic on the network device out to the internet
> > 
> > received on nic on the network device from the internet
> > received on physical nic on the compute
> > received on tap interface on compute
> > received on external interface on the external vm
> > received on the internal interface on the external vm
> > NEVER gets to last step on the internal vm
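The hop-by-hop trace above can be reproduced with tcpdump at each point; for example (interface names and the tap device name are assumptions):

```shell
# On the external vm: watch ICMP on its internal interface. With port
# security enabled, the echo reply is visible here but never forwarded on.
tcpdump -n -i eth1 icmp

# On the compute host: watch the tap device backing the internal vm's port
# (the suffix is the first characters of the port UUID).
tcpdump -n -i tapXXXXXXXX icmp
```

Seeing the reply on the external vm's internal interface but not on the internal vm's tap is exactly the drop point a port-security filter would produce.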
> > 
> > Regards,
> > Derek

Slawek Kaplonski
Principal Software Engineer
Red Hat  

More information about the openstack-discuss mailing list