[Openstack-operators] DVR and public IP consumption

Fox, Kevin M Kevin.Fox at pnnl.gov
Thu Jan 28 18:16:24 UTC 2016

Hi Tomas,

Using an external address per tenant router is a feature to a lot of sites, like ours. We want to know for sure, at minimum, which tenant was responsible for bad activity on the external network. Having the external address tied to a tenant router allows you to track bad activity back at least to the IP, and then to the tenant router. You won't be able to tell which of the tenant's VMs performed the bad activity because of the SNAT, but you at least have someone to talk to about it, instead of your local security friends asking you to unplug the whole cloud.

From: Tomas Vondra [vondra at czech-itc.cz]
Sent: Thursday, January 28, 2016 3:15 AM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] DVR and public IP consumption

Robert Starmer <robert at ...> writes:

> I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic NAT, not tied to a floating IP) on a central network node just
like in the non-DVR model.  The extra address that is consumed is allocated
to a FIP specific namespace when a DVR is made responsible for supporting a
tenant's floating IP, and the question then is: Why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since it's
the allocation of a tenant requested floating IP to a tenant VM that
triggers the DVR to implement the FIP namespace function in the first place.
> In both the Paris and Vancouver DVR presentations, the message was: "We add
distributed FIP support at the expense of an _extra_ external address per
device, but the FIP namespace is then shared across all tenants". Given that
there is no "external" interface on the DVR for floating IPs until at least
one tenant allocates one, a new namespace needs to be created to act as the
termination for the tenant's floating IP.  A normal tenant router would have
an address allocated already, because it has a port allocated onto the
external network (this is the address that SNAT overloads for those
non-floating associated machines that lets them communicate with the
Internet at large), but in this case, no such interface exists until the
namespace is created and attached to the external network, so when the
floating IP port is created, an address is simply allocated from the
External (e.g. floating) pool for the interface.  And _then_ the floating IP
is allocated to the namespace as well. The fact that this extra address is
used is a part of the normal port allocation process (and default
port-security anti-spoofing processes) that exist already, and simplifies
the process of moving tenant allocated floating addresses around (the port
state for the floating namespace doesn't change; it keeps its special MAC
and address regardless of whatever else goes on). So don't think of it as a
Floating IP allocated to the DVR; it's just the DVR's local representative
for its port on the external network.  Tenant addresses are then "on top"
of this setup.
> So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference, just adding my own additional layer of "this is how it should be"
on top.
> Robert
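Robert's description of the lazy FIP-namespace allocation can be put into a toy model (the class and attribute names below are purely illustrative, not Neutron code): the first floating IP landing on a compute node pulls one extra address out of the external pool for the namespace gateway port, and every later floating IP on that node reuses it.

```python
# Hedged sketch of DVR's per-node "extra" external address, as described
# above. ExternalPool and ComputeNode are hypothetical illustration names.
from ipaddress import ip_network

class ExternalPool:
    """Hands out addresses from the external (floating) range."""
    def __init__(self, cidr):
        self._hosts = ip_network(cidr).hosts()

    def allocate(self):
        return next(self._hosts)

class ComputeNode:
    """Each node gets one FIP-namespace gateway address, created lazily."""
    def __init__(self, pool):
        self._pool = pool
        self.fip_gateway = None    # the "extra" address per compute node
        self.floating_ips = []     # tenant-requested FIPs hosted on this node

    def associate_fip(self, fip):
        if self.fip_gateway is None:   # first FIP triggers the namespace
            self.fip_gateway = self._pool.allocate()
        self.floating_ips.append(fip)

pool = ExternalPool("203.0.113.0/24")
node = ComputeNode(pool)
node.associate_fip(pool.allocate())  # first FIP: gateway address is created too
node.associate_fip(pool.allocate())  # second FIP: gateway address is reused
```

The point of the sketch is the asymmetry: floating IPs scale with tenant demand, but the gateway cost is a flat one address per compute node, paid on first use.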

Dear Robert,
thanks for clarifying why there always has to be an address in the FIP
namespace. But it still feels like something someone left there from an alpha
phase. If I need an address, I would use a worthless one, like a 169.254
link-local address, not a public IP. There are already link-local addresses in
use in Neutron... somewhere :-).
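For what it's worth, the stdlib confirms the 169.254 point: that range is reserved link-local space (RFC 3927) and is never globally routable, so an address taken from it, unlike one from the public floating pool, costs nothing.

```python
# 169.254.0.0/16 is link-local (RFC 3927); the specific addresses below
# are arbitrary picks for illustration.
import ipaddress

candidate = ipaddress.ip_address("169.254.31.28")  # a link-local address
print(candidate.is_link_local)   # True  - fine to "waste" internally
print(candidate.is_global)       # False - never routed on the Internet
```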

The IP consumption that bothers me more than this is that of the Router
External Interfaces, which are all on the network nodes and do SNAT for
every tenant, separately. I would like the centralized SNAT of DVR to be
more... centralized. The quickest way, I think, would be to allocate these
from a different pool than the Floating IPs and let my datacenter SNAT take
care of them.

To your earlier post, some of the Neutron provider implementations are much
more L3-oriented than the default implementation. But I have, for example,
scratched Contrail from my installation, because the added cost of 2
Juniper or Cisco routers does not balance out the benefits, IMHO. And it is
another complex system besides OpenStack, consisting of about 6 components
written in 4 programming languages, that you have to take care of.

Would Contrail use fewer IP addresses per tenant and node? (They will be
worth their weight in gold soon :-).) What about SDNs more open-source
than Contrail, which use software routers at the edge? Is
anyone using MidoNet or OpenDaylight?

I personally think that DVR, DragonFlow, or the next integrated Neutron
solution is the way to go in OpenStack, not some external plugins. But
DVR, as I am finding out, has its quirks, which could be solved by introducing
a few more configuration options. I like the way it can use L2 and provider
networks to integrate with the rest of the datacenter. No BGP L3VPN tunnels,
which cannot be done in open source.


> On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M
<Kevin.Fox-MIjBx5DB8Ok at public.gmane.org> wrote:
> But there already is a second external address, the FIP address that's
NATing. Is there a double NAT? I'm a little confused.
> Thanks,
> Kevin
> From: Robert Starmer [robert-IRoT69HjcZQ at public.gmane.org]
> Sent: Wednesday, January 27, 2016 3:20 PM
> To: Carl Baldwin
> Cc: OpenStack Operators; Tomas Vondra
> Subject: Re: [Openstack-operators] DVR and public IP consumption
> You can't get rid of the "External" address as it's used to direct return
traffic to the right router node.  DVR as implemented is really just a local
NAT gateway per physical compute node.  The outside of your NAT needs to be
publicly unique,
>  so it needs its own address.  Some SDN solutions can provide a truly
distributed router model, because they globally know the inside state of the
NAT environment, and can forward packets back to the internal source
properly, regardless of which distributed
>  forwarder receives the incoming "external" packets.
> If the number of external addresses consumed is an issue, you may consider
the dual gateway HA model instead of DVR.  This uses classic multi-router
models where one router takes on the task of forwarding packets, and the
other device just acts as a backup.
>  You do still have a software bottleneck at your router, unless you then
also use one of the plugins that supports hardware L3 (last I checked,
Juniper, Arista, Cisco, etc. all provide an L3 plugin that is HA capable),
but you only burn 3 External addresses
>  for the router (and 3 internal network addresses per tenant side
interface if that matters).
> Hope that clarifies a bit,
> Robert
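Robert's dual-gateway numbers lend themselves to the same back-of-the-envelope form. The constants below are read directly from his post (3 external addresses per HA router, 3 internal per tenant-side interface) and may differ in other deployments.

```python
# Hedged sketch of the HA (dual-gateway) model's address bill: a flat
# 3 external addresses per tenant router (roughly a VIP plus one per
# router instance), independent of how many compute nodes exist, and
# 3 internal addresses per tenant-side interface.
def ha_addresses(tenant_routers, tenant_interfaces):
    external = 3 * tenant_routers
    internal = 3 * tenant_interfaces
    return external, internal

ext, internal = ha_addresses(tenant_routers=50, tenant_interfaces=50)
print(ext, internal)  # 150 external, 150 internal
```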
> On Tue, Jan 26, 2016 at 4:14 AM, Carl Baldwin
> <carl <at> ecbaldwin.net> wrote:
> On Thu, Jan 14, 2016 at 2:45 AM, Tomas Vondra
<vondra-l6WB4nJzLFygjssBaH+rSA at public.gmane.org> wrote:
> > Hi!
> > I have just deployed an OpenStack Kilo installation with DVR and expected
> > that it will consume one Public IP per network node as per
> >
> http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/,
> > but it still eats one per virtual Router.
> > What is the correct behavior?
> Regardless of DVR, a Neutron router burns one IP per virtual router
> which it uses to SNAT traffic from instances that do not have floating
> IPs.
> When you use DVR, an additional IP is consumed for each compute host
> running an L3 agent in DVR mode.  There has been some discussion about
> how this can be eliminated but no action has been taken to do this.
> > Otherwise, it works as a DVR should according to documentation. There are
> > router namespaces at both compute and network nodes, snat namespaces at the
> > network nodes and fip namespaces at the compute nodes. Every router has a
> > router_interface_distributed and a router_centralized_snat with private IPs,
> > however the router_gateway has a public IP, which I would like to get rid of
> > to increase density.
> I'm not sure if it is possible to avoid burning these IPs at this
> time.  Maybe someone else can chime in with more detail.
> Carl
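Carl's accounting can be summed up in one line of arithmetic. This is a sketch of the counts stated in the thread, not an official formula: one address per tenant router for its SNAT gateway in either mode, one extra per compute node running an L3 agent in DVR mode, plus one per floating IP.

```python
# Public-IP consumption under the legacy and DVR models, per the thread:
#   routers           - 1 SNAT gateway address per tenant router
#   floating_ips      - 1 address per tenant-requested floating IP
#   dvr_compute_nodes - 1 FIP-namespace gateway per DVR compute node
def public_ips(routers, floating_ips, dvr_compute_nodes=0):
    return routers + floating_ips + dvr_compute_nodes

legacy = public_ips(routers=50, floating_ips=200)
dvr    = public_ips(routers=50, floating_ips=200, dvr_compute_nodes=20)
print(legacy, dvr)  # 250 vs 270: DVR's overhead is one IP per compute node
```

For a cloud with far more routers than compute nodes, the per-node overhead is noise; it is the per-router SNAT gateway (present with or without DVR) that dominates, which matches Tomas's complaint above.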

OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
