[openstack-dev] Fwd: [Neutron][DVR] Neutron distributed SNAT
Kevin Benton
blak111 at gmail.com
Thu Feb 19 02:12:43 UTC 2015
If I understand correctly, southbound traffic would hairpin through
whichever L3 agent the upstream router happened to pick out of the ECMP
group, since the router has no idea which hypervisor hosts the
destination. Northbound traffic, on the other hand, could egress
directly (assuming an l3 agent is running on each compute node,
DVR-style).
If we went down this route, we would need a dynamic routing protocol
running between the agents and the upstream router. We would also have
to tweak our addressing scheme a bit so that each l3 agent has its own
address to use for its BGP session (or whatever routing protocol we
choose), since the gateway address would be shared amongst them.
Did I get what you were proposing correctly?
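For concreteness, here is a rough sketch of what each l3 agent might
run -- purely illustrative, nothing like this exists in Neutron today.
It assumes an ExaBGP-style speaker that reads "announce route ..."
commands from a helper process's stdout, and the addresses are made up:

# Illustrative sketch only.  Each l3 agent advertises the *shared* SNAT
# address to the upstream router, using its own per-agent peering
# address as the next hop.  With every agent doing the same, the router
# ends up with one equal-cost route per agent (ECMP).
import sys
import time

SNAT_ADDRESS = "203.0.113.10"     # shared SNAT/gateway IP (example)
PEERING_ADDRESS = "192.0.2.11"    # this agent's own address (example)


def main():
    sys.stdout.write("announce route %s/32 next-hop %s\n"
                     % (SNAT_ADDRESS, PEERING_ADDRESS))
    sys.stdout.flush()
    # Keep running; if the whole node (and with it the BGP session)
    # goes away, the upstream router drops this route.
    while True:
        time.sleep(60)


if __name__ == "__main__":
    main()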
On Wed, Feb 18, 2015 at 5:28 PM, Angus Lees <gus at inodes.org> wrote:
> On Mon Feb 16 2015 at 9:37:22 PM Kevin Benton <blak111 at gmail.com> wrote:
>
>> > It's basically very much like floating IPs, only you're handing out a
>> > sub-slice of a floating-IP to each machine - if you like.
>>
>> This requires participation of the upstream router (L4 policy routing
>> pointing to next hops that distinguish each L3 agent) or intervention on
>> the switches between the router and the L3 agents (a few OpenFlow rules
>> would make this simple). Both approaches need to adapt to L3 agent
>> changes, so static configuration is not adequate. Unfortunately, both of
>> these are outside of Neutron's control, so I don't see an easy way to
>> push this state in a generic fashion.
>>
>
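(As an aside, the "sub-slice of a floating IP" part is easy to picture
on the hypervisor side -- something like the rule below per compute
node. This is purely illustrative, using the standard iptables SNAT
port-range syntax and made-up addresses; the hard part is the steering
of the return traffic, which is what the rest of this thread is about.)

# Illustrative sketch only: give each hypervisor a disjoint source-port
# slice of the shared SNAT address.  The port range on --to-source is
# only valid together with -p tcp or -p udp, hence one rule per protocol.
SNAT_ADDRESS = "203.0.113.10"      # shared SNAT IP (example)
TENANT_CIDR = "10.0.0.0/24"        # example tenant subnet


def snat_rules(node_index, slice_size=4096, base_port=32768):
    """Return the iptables commands for this node's port slice."""
    lo = base_port + node_index * slice_size
    hi = lo + slice_size - 1
    return [
        "iptables -t nat -A POSTROUTING -p %s -s %s "
        "-j SNAT --to-source %s:%d-%d"
        % (proto, TENANT_CIDR, SNAT_ADDRESS, lo, hi)
        for proto in ("tcp", "udp")
    ]

# snat_rules(0) yields rules using ports 32768-36863,
# snat_rules(1) uses 36864-40959, and so on.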
> (Just to continue this thought experiment)
>
> The L3 agents that forward ingress traffic to the right hypervisor only
> need to know which [IP+port range] has been assigned to which
> hypervisor. That information is fairly static, so the forwarders are
> effectively stateless and can be trivially replicated to provide the
> desired ingress capacity and reliability.
>
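(Right -- the forwarders' only state is a small, slowly-changing table
like the toy sketch below; the addresses and port slices are made up and
match the SNAT example earlier in this mail.)

# Toy sketch of the forwarders' only state: which source-port slice of
# the shared SNAT address belongs to which hypervisor.
PORT_SLICE_MAP = {
    (32768, 36863): "192.0.2.11",   # compute-1 tunnel endpoint (example)
    (36864, 40959): "192.0.2.12",   # compute-2 tunnel endpoint (example)
    (40960, 45055): "192.0.2.13",   # compute-3 tunnel endpoint (example)
}


def next_hop_for(dst_port):
    """Pick the hypervisor that owns this return flow's destination port."""
    for (lo, hi), hypervisor in PORT_SLICE_MAP.items():
        if lo <= dst_port <= hi:
            return hypervisor
    return None  # not one of our SNAT'd flows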
> When I've built similar systems in the past, the easy way to interface
> with the rest of the provider network was to use whatever dynamic
> routing protocol was already in use, and just advertise multiple ECMP
> routes for the SNAT source IPs from the forwarders (ideally advertising
> from the forwarders themselves, so the advertisements stop if a
> forwarder loses connectivity). All the "cleverness" then happens on the
> forwarding hosts (we could call them "L3 agents"). It's simple and works
> well, but I agree we have no precedent for this in Neutron at present.
>
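(That last point is the nice property: extending the speaker sketch from
the top of this mail, tying the announcement to a health check is only a
few lines. Again just a sketch -- check_datapath() is a made-up
placeholder for whatever liveness test we would actually use.)

# Sketch only: couple the ECMP announcement to the forwarder's own
# health, so a forwarder that loses its datapath withdraws its route and
# the upstream router stops sending it southbound traffic.
import sys
import time

SNAT_ADDRESS = "203.0.113.10"
PEERING_ADDRESS = "192.0.2.11"


def check_datapath():
    # Hypothetical placeholder: a real agent would verify it can reach
    # the overlay/tunnel network here (ping a peer, check OVS, etc.).
    return True


def run():
    announced = False
    while True:
        healthy = check_datapath()
        if healthy and not announced:
            sys.stdout.write("announce route %s/32 next-hop %s\n"
                             % (SNAT_ADDRESS, PEERING_ADDRESS))
            announced = True
        elif not healthy and announced:
            sys.stdout.write("withdraw route %s/32 next-hop %s\n"
                             % (SNAT_ADDRESS, PEERING_ADDRESS))
            announced = False
        sys.stdout.flush()
        time.sleep(5)


if __name__ == "__main__":
    run()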
>> On Mon, Feb 16, 2015 at 12:33 AM, Robert Collins
>> <robertc at robertcollins.net> wrote:
>>>
>>> Or a pool of SNAT addresses ~= to the size of the hypervisor count.
>>
>>
> Oh yeah. If we can afford to assign a unique SNAT address per hypervisor
> then we're done - at that point it really is just like floating-ips.
>
> - Gus
>
--
Kevin Benton