[Openstack] DVR and public IP consumption

Matt Kassawara mkassawara at gmail.com
Fri Jan 22 14:05:23 UTC 2016


Do you need project/private networks? If so, do those networks need to
route to provider/public networks? For a number of reasons, most of which
revolve around floating IP addresses and the performance/complexity of
neutron routing, providers often attach VMs directly to provider networks
and support creation of project networks so customers can move data
internally among their VMs. Additionally, you can support project routers
that only route between the project networks belonging to a particular
project; such routers should avoid the performance issues of conventional
centralized routers.
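
For example, a minimal sketch of that layout with the neutron CLI (the
network names, physical network label, and address ranges here are only
illustrative):

  # shared provider network that VMs attach to directly (run as admin)
  neutron net-create provider-net --shared \
      --provider:network_type flat --provider:physical_network physnet1
  neutron subnet-create provider-net 203.0.113.0/24 --name provider-subnet

  # project networks plus a router with no external gateway, so it only
  # routes between that project's own networks
  neutron net-create project-net-a
  neutron subnet-create project-net-a 10.0.1.0/24 --name project-subnet-a
  neutron net-create project-net-b
  neutron subnet-create project-net-b 10.0.2.0/24 --name project-subnet-b
  neutron router-create project-router
  neutron router-interface-add project-router project-subnet-a
  neutron router-interface-add project-router project-subnet-b

Because the router never gets an external gateway (no router-gateway-set),
it consumes no public address at all.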

On Fri, Jan 22, 2016 at 6:25 AM, Tom Verdaat <tom at server.biz> wrote:

> Similar situation here: public provider, hence using the provider
> networking deployment scenario
>
> In the future I'm hoping some developers will dismantle the network node
> concept completely for DVR and allow us to move all those components
> (SNAT, DHCP, VPN, metadata) to compute nodes for true linear scalability...
>
> The floating IP usage will also fix itself when we switch to IPv6, which
> doesn't really have a scarcity problem. Unfortunately this is somewhat
> outside of our control as hosting providers (lagging ISPs are the key
> factor here) and will take a while!
>
> Disabling SNAT when using a floating IP might work in theory, but there
> are a lot of issues. It's no problem when you only have one instance, or
> when every instance has a floating IP, but I don't think it will work if
> you have multiple instances on one virtual network and some have to use
> the floating IP assigned to another instance as their SNAT gateway. It
> could work in theory, I guess, and would be a cool feature if engineered
> properly. I wouldn't want to go down this road using a hack.
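>
> For reference, turning SNAT off on an existing router gateway is done
> with something like this (router and network names are placeholders):
>
>   neutron router-gateway-set --disable-snat my-router ext-net
>   neutron floatingip-create ext-net
>   neutron floatingip-associate <floatingip-id> <instance-port-id>
>
> Instances holding a floating IP keep working; instances without one lose
> outbound connectivity, which is exactly the multi-instance problem
> described above.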
>
> Guess we're just going to have to accept for now that using DVR will cost
> us a couple of extra IP's...
>
>
>
>
> 2016-01-22 11:29 GMT+01:00 Tomas Vondra <vondra at czech-itc.cz>:
>
>>
>> Tomas Vondra <vondra at ...> writes:
>>
>> >
>> > James Denton <james.denton <at> ...> writes:
>> >
>>
>> To not let the discussion die:
>> There is a parallel discussion on the Operators list about a very similar
>> topic - how to recycle the IP address of the router External Gateway if
>> SNAT
>> is turned off. I would consider doing that if it saved half of my IP
>> addresses. Unfortunately, the router's External IP is still allocated in
>> that case. The Midonet mailing list mentions a hack in Neutron that could
>> recycle it for Floating IPs. Does anyone know what that hack is?
>> https://lists.midonet.org/pipermail/midonet-dev/2015-January/000314.html
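>>
>> To see where the addresses actually go, you can list the gateway and
>> floating IP ports on the external network - one port per router plus one
>> per floating IP (the filter syntax may differ a bit between client
>> versions):
>>
>>   neutron port-list -c id -c device_owner -c fixed_ips \
>>       -- --device_owner=network:router_gateway
>>   neutron port-list -c id -c device_owner -c fixed_ips \
>>       -- --device_owner=network:floatingip
>>
>> Even with enable_snat set to false, the network:router_gateway port keeps
>> its address, which is exactly the waste discussed here.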
>>
>> I cannot really use the topology with one shared router, because I am a
>> public service provider and cannot dictate to users which subnets to use.
>> I also cannot use a private address space with 1:1 NAT for Floating IPs,
>> because the customers would not understand that. They don't understand
>> the Floating IP concept as it is :-).
>>
>> And as a second idea: if the SNAT namespace has to have an IP address,
>> and I have tens of them on one network node, could these be allocated
>> from a private IP pool rather than the public one? The datacenter SNAT
>> would take care of OpenStack's SNAT needs while retaining visibility into
>> which tenant accessed what site. The Floating IPs are in a totally
>> different qrouter namespace on the compute nodes, so nothing would
>> prevent that - except the Neutron API, which will probably not let me
>> attach Floating IPs if the router gateway is not in the same subnet.
>> Tomas
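>>
>> If anyone wants to try it, the experiment would look roughly like this
>> (addresses and names are invented, and I have not tested it - as noted
>> above, the API may well refuse floating IPs when the gateway address
>> comes from a different subnet):
>>
>>   # second subnet on the same external network, from private space,
>>   # intended only for router gateway / SNAT ports
>>   neutron subnet-create ext-net 192.168.100.0/24 --name snat-pool \
>>       --disable-dhcp
>>
>> Neutron picks the subnet for new gateway ports itself, so whether a
>> gateway actually lands in the private pool is the open question.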
>>
>>
>> Tomas Vondra <vondra at ...> writes:
>>
>> >
>> > James Denton <james.denton <at> ...> writes:
>> >
>> > >
>> > >
>> > >
>> > > Hi,
>> > >
>> > > >> You cannot get around each tenant gateway router consuming an extra
>> > public IP address itself as far as I know.
>> > >
>> > > Almost. With DVR, a FIP namespace is created on compute nodes, with
>> > > one FIP namespace per external network. The FIP namespace owns an IP
>> > > address from the external provider network, and all tenant routers
>> > > connected to the same external network on the same node connect to the
>> > > respective FIP namespace via veth pair. It is possible that all compute
>> > > nodes could each have a FIP namespace connected to the same external
>> > > network, which would certainly reduce the number of IPs available, but
>> > > it beats having to give each tenant router an IP. There is some
>> > > NAT/routing/Proxy ARP magic that goes into making this config work.
>> > > Assaf's blog is a great resource for that info.
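>> > >
>> > > You can see this on a compute node with plain ip netns; the namespace
>> > > names below follow the standard DVR naming (fip-<external net UUID>,
>> > > qrouter-<router UUID>):
>> > >
>> > >   ip netns list | grep fip-
>> > >   ip netns exec fip-<external-net-uuid> ip addr show
>> > >
>> > > The fip namespace holds the single address that node draws from the
>> > > external network, while the floating IPs of local instances show up on
>> > > the rfp interfaces inside the qrouter namespaces.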
>> > >
>> > > James
>> >
>> > Very well, I don't really understand the point of taking a public
>> > address on the compute node for the FIP namespace, when the Floating IPs
>> > are created in the qrouter namespaces and these are bridged to the real
>> > network using Open vSwitch. But I can live with that.
>> >
>> > But anyway - my router entries in "neutron router-list" look like this:
>> >
>> >   id                    : ba8c8b17-5649-474b-ac81-4960c2358611
>> >   name                  : admin-router
>> >   external_gateway_info : {"network_id": "5e9b25cf-ee67-48ac-be9b-79cd274fd25d",
>> >                            "enable_snat": true,
>> >                            "external_fixed_ips": [{"subnet_id": "9ff34ad0-dfa2-44df-99b4-dc1a97bdb603",
>> >                            "ip_address": "<X.X.X.X public IP>"}]}
>> >   distributed           : True
>> >   ha                    : False
>> >
>> > The public IP is a pingable IP that resides on the network node in a
>> > SNAT namespace. There is one such namespace per virtual router. Is there
>> > any magic to reduce the number of these?
>> > Vondra
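>> >
>> > PS: on the network node those show up as one snat-<router UUID> namespace
>> > per router, each holding the pingable gateway address, e.g. (UUID taken
>> > from the router-list output above):
>> >
>> >   ip netns list | grep snat-
>> >   ip netns exec snat-ba8c8b17-5649-474b-ac81-4960c2358611 ip addr show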
>> >
>> > >
>> > >
>> > >
>> > > From: Tom Verdaat <tom <at> server.biz>
>> > > Date: Wednesday, January 20, 2016 at 9:02 AM
>> > > To: "openstack <at> lists.openstack.org" <openstack <at> lists.openstack.org>
>> > > Subject: Re: [Openstack] DVR and public IP consumption
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > Hi Tomas,
>> > >
>> > > Actually the networking nodes, and in a DVR scenario the compute nodes,
>> > > don't need a public IP assigned to the node itself. All they need is a
>> > > networking interface connected to the "public" network. Only tenant
>> > > routers set as a gateway consume one public IP address each as overhead.
>> > > You cannot get around each tenant gateway router consuming an extra
>> > > public IP address itself as far as I know.
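>> > >
>> > > Concretely, the node just needs its external-facing NIC plugged into
>> > > the provider bridge, with no address configured on it in the host's own
>> > > network config. A typical ML2/OVS arrangement (bridge, interface and
>> > > physical network names are only examples):
>> > >
>> > >   # on the network/compute node
>> > >   ovs-vsctl add-br br-ex
>> > >   ovs-vsctl add-port br-ex eth2
>> > >
>> > >   # in the OVS agent configuration
>> > >   bridge_mappings = external:br-ex
>> > >
>> > > Any addresses from the public subnet are then owned by neutron-managed
>> > > ports inside namespaces, not by the node itself.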
>> > >
>> > > Does that answer your question?
>> > >
>> > > Cheers,
>> > >
>> > > Tom
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > 2016-01-20 13:48 GMT+01:00 Tomas Vondra
>> > > <vondra <at> czech-itc.cz>:
>> > > Hi!
>> > > I have just deployed an OpenStack Kilo installation with DVR and
>> > > expected that it will consume one Public IP per network node as per
>> > > http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/,
>> > > but it still eats one per virtual Router.
>> > > What is the correct behavior?
>> > > Otherwise, it works as a DVR should according to documentation. There
>> > > are router namespaces at both compute and network nodes, snat
>> > > namespaces at the network nodes and fip namespaces at the compute
>> > > nodes. Every router has a router_interface_distributed and a
>> > > router_centralized_snat with private IPs; however, the router_gateway
>> > > has a public IP, which I would like to get rid of to increase density.
>> > > Thanks
>>
>>
>>