[openstack-dev] [all] devstack changing to neutron by default RSN

Sean Dague sean at dague.net
Fri Aug 5 21:16:10 UTC 2016

On 08/05/2016 04:32 PM, Armando M. wrote:
> On 5 August 2016 at 13:05, Dan Smith <dms at danplanet.com
> <mailto:dms at danplanet.com>> wrote:
>     > I haven't been able to reproduce it either, but it's unclear how packets
>     > would get into a VM on an island since there is no router interface, and
>     > the VM can't respond even if it did get it.
>     >
>     > I do see outbound pings from the connected VM get to eth0, hit the
>     > masquerade rule, and continue on their way.  But those packets get
>     > dropped at my ISP since they're in the 10/8 range, so perhaps something
>     > in the datacenter where this is running is responding?  Grasping at
>     > straws is right until we see the results of Armando's test patch.
>     Right, that's what I was thinking when I said "something with the
>     provider" in my other reply. A provider could potentially always reflect
>     10/8 back at you to eliminate the possibility of ever escaping like
>     that, which would presumably come back, hit the 10.1/20 route that we
>     have and continue on in. I'm not entirely sure why that's not being hit
>     right now (i.e. before this change), but I'm less familiar with the
>     current state of the art than I am this patch.
> Still digging but we have a clean pass in [0]. The multinode setup
> involves br-ex [1,2], I am not quite sure how changing iptables rules
> fiddles with it, if at all.
> [0]
> http://logs.openstack.org/76/351876/1/experimental/gate-tempest-dsvm-neutron-dvr-multinode-full/3a81575/logs/testr_results.html.gz
> [1] https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L1108
> [2] https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L130
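The reflection theory above can be sketched with Python's `ipaddress` module: outbound packets are in 10/8, so if the provider reflects that whole range back at the host, any reflected packet whose address happens to fall inside the 10.1.0.0/20 guest route would match it and continue on in. The 10/8 and 10.1/20 ranges come from the discussion; the specific addresses below are made-up illustrations, not values from any gate run.

```python
import ipaddress

# Ranges from the discussion: RFC 1918 10/8, and the 10.1/20 guest route.
rfc1918_ten = ipaddress.ip_network("10.0.0.0/8")
guest_route = ipaddress.ip_network("10.1.0.0/20")

# Hypothetical addresses for illustration only.
reflected = ipaddress.ip_address("10.1.0.5")    # lands inside the guest route
unrelated = ipaddress.ip_address("10.200.0.5")  # outside the guest route

for addr in (reflected, unrelated):
    # Anything the provider reflects is in 10/8 by construction; it only
    # re-enters the guest network if the 10.1/20 route matches it.
    assert addr in rfc1918_ten
    print(addr, "matches 10.1.0.0/20:", addr in guest_route)
```

This is just the longest-prefix-match logic in miniature: a reflected 10.1.x.y packet matches the more specific 10.1/20 route and heads back toward the guests, which would explain pings appearing to succeed.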

So... interesting relevant data which supports Dan and Brian's theory.

The test in question only runs on neutron configurations. Every failure
of the test is on OVH nodes. Every time that test has run not on OVH
nodes, it's passed. http://goo.gl/Sppc72 (logstash results). After the
last failure on the regular job that we had, Dan said we could add a
'-s' flag to be safe, and it looks like it *fixed* it. But the reality
is that it just ran on internap instead. And then when I updated the
commit message, that ran on rax.

OVH networking is kind of unique in the way they give us a /32
address; it's very possible that other things in their infrastructure
are causing this reflection.

This would also speak to the fact that our gate tests probably never
produced guests which could actually talk to the outside world. We
never test that they do. The masquerade rule opened this up for the
first time in our gate as well.


Sean Dague
