[openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

Stephen Balukoff sbalukoff at bluebox.net
Wed Dec 10 10:50:24 UTC 2014


Hi Keshava,

For the purposes of Octavia, it's going to be service VMs (or containers or
what have you). However, service VM or tenant VM, the concept is roughly the
same: we need some kind of layer-3 routing capability which works something
like Neutron floating IPs (though not just a static NAT in this case) but
which can distribute traffic to a set of back-end VMs running on a Neutron
network according to some predictable algorithm (probably a distributed
hash).
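To make the "predictable algorithm" idea concrete, here's a minimal sketch of one common choice, a consistent-hash ring mapping flows to amphorae. This is purely illustrative; the names ("amphora-1", the vnode count) are made up, and nothing here is part of Octavia's actual design:

```python
import hashlib

class HashRing:
    """Minimal consistent-hash ring: maps flow keys to back-end amphorae.

    Each backend gets several virtual nodes on the ring, so adding or
    removing one backend only remaps a small share of the flows.
    """

    def __init__(self, backends, vnodes=100):
        self.ring = sorted(
            (self._hash(f"{b}:{i}"), b)
            for b in backends
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def pick(self, flow_key):
        """Return the backend owning the first ring position >= hash(flow)."""
        h = self._hash(flow_key)
        for pos, backend in self.ring:
            if pos >= h:
                return backend
        return self.ring[0][1]  # wrap around to the start of the ring

ring = HashRing(["amphora-1", "amphora-2", "amphora-3"])
backend = ring.pick("10.0.0.5:43210->203.0.113.10:443")
```

The property that matters for ACTIVE-ACTIVE is that any router holding the same ring picks the same amphora for a given flow, with no shared state.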

The idea behind ACTIVE-ACTIVE is that you have many service VMs (we call
them amphorae) which service the same "public" IP in some way -- this allows
for horizontal scaling of services which need it (i.e., anything which does
TLS termination with a significant amount of load).

Does this make sense to you?

Thanks,
Stephen


On Mon, Dec 8, 2014 at 9:56 PM, A, Keshava <keshava.a at hp.com> wrote:

>  Stephen,
>
>
>
> Interesting to know what is “ACTIVE-ACTIVE topology of load balancing VMs”.
>
> What is the scenario: is it a Service-VM (of NFV) or a Tenant VM?
>
> Curious to know the background of these thoughts.
>
>
>
> keshava
>
>
>
>
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Tuesday, December 09, 2014 7:18 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [Neutron] [RFC] Floating IP idea
> solicitation and collaboration
>
>
>
> For what it's worth, I know that the Octavia project will need something
> which can do more advanced layer-3 networking in order to deliver an
> ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
> That's still a "down the road" feature for us, but it would be great to be
> able to do more advanced layer-3 networking in earlier releases of Octavia
> as well. (Without this, we might have to go through back doors to get
> Neutron to do what we need it to, and I'd rather avoid that.)
>
>
>
> I'm definitely up for learning more about your proposal for this project,
> though I've not had any practical experience with Ryu yet. I would also
> like to see whether it's possible to do the sort of advanced layer-3
> networking you've described without using OVS. (We have found that OVS
> tends to be not quite mature / stable enough for our needs and have moved
> most of our clouds to use ML2 / standard Linux bridging.)
>
>
>
> Carl:  I'll also take a look at the two gerrit reviews you've linked. Is
> this week's L3 meeting not happening then? (And man-- I wish it were an
> hour or two later in the day. Coming at y'all from PST timezone here.)
>
>
>
> Stephen
>
>
>
> On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
>
> Ryan,
>
> I'll be traveling around the time of the L3 meeting this week.  My
> flight leaves 40 minutes after the meeting and I might have trouble
> attending.  It might be best to put it off a week or plan another
> time -- maybe Friday -- when we could discuss it in IRC or in a
> Hangout.
>
> Carl
>
>
> On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
> <ryan.clevenger at rackspace.com> wrote:
> > Thanks for getting back, Carl. I think we may be able to make this
> > week's meeting. Jason Kölker is the engineer doing all of the lifting
> > on this side. Let me get with him to review what you all have so far
> > and check our availability.
> >
> > ________________________________________
> >
> > Ryan Clevenger
> > Manager, Cloud Engineering - US
> > m: 678.548.7261
> > e: ryan.clevenger at rackspace.com
> >
> > ________________________________
> > From: Carl Baldwin [carl at ecbaldwin.net]
> > Sent: Sunday, December 07, 2014 4:04 PM
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea
> > solicitation and collaboration
> >
> > Ryan,
> >
> > I have been working with the L3 sub-team in this direction.  Progress
> > has been slow because of other priorities, but we have made some.  I
> > have written a blueprint detailing some changes needed to the code to
> > enable the flexibility to one day run floating IPs on an L3-routed
> > network [1].  Jaime has been working on one that integrates Ryu (or
> > other speakers) with Neutron [2].  DVR was also a step in this
> > direction.
> >
> > I'd like to invite you to the L3 weekly meeting [3] to discuss
> > further.  I'm very happy to see interest in this area and to have
> > someone new to collaborate with.
> >
> > Carl
> >
> > [1] https://review.openstack.org/#/c/88619/
> > [2] https://review.openstack.org/#/c/125401/
> > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
> >
> > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" <ryan.clevenger at rackspace.com>
> > wrote:
> >>
> >> Hi,
> >>
> >> At Rackspace, we have a need to create a higher-level networking
> >> service, primarily for the purpose of creating a Floating IP solution
> >> in our environment. The current solutions for Floating IPs, being tied
> >> to plugin implementations, do not meet our needs at scale for the
> >> following reasons:
> >>
> >> 1. Limited endpoint H/A, mainly targeting failover only and not
> >> multi-active endpoints.
> >> 2. Lack of noisy-neighbor and DDoS mitigation.
> >> 3. IP fragmentation (with cells, public connectivity is terminated
> >> inside each cell, leading to fragmentation and IP stranding when cell
> >> CPU/memory use doesn't line up with allocated IP blocks; abstracting
> >> public connectivity away from nova installations allows for much more
> >> efficient use of those precious IPv4 blocks).
> >> 4. Diversity in transit (multiple encapsulation and transit types on a
> >> per-floating-IP basis).
> >>
> >> We realize that network infrastructures are often unique and such a
> >> solution would likely diverge from provider to provider. However, we
> >> would love to collaborate with the community to see if such a project
> >> could be built that would meet the needs of providers at scale. We
> >> believe that, at its core, this solution would boil down to
> >> terminating north<->south traffic temporarily at a massively
> >> horizontally scalable centralized core and then encapsulating traffic
> >> east<->west to a specific host based on the association set up via the
> >> current L3 router extension's 'floatingips' resource.
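As a rough model of the association lookup the proposal describes, the sketch below maps a floating IP to the hypervisor, fixed IP, and encapsulation used for the east<->west hop. All names and addresses are hypothetical; in the real system this data would come from the L3 extension's 'floatingips' resource rather than a static dict:

```python
# Hypothetical snapshot of floatingip -> backend associations that the
# FLIP layer would consult (stand-in for the Neutron L3 extension's
# 'floatingips' resource).
ASSOCIATIONS = {
    "203.0.113.10": {"fixed_ip": "10.0.0.5", "host": "hv-42", "encap": "vxlan"},
}

def eastwest_target(public_dst):
    """Given north->south traffic to a floating IP, return where to
    tunnel it east<->west: (hypervisor host, tenant fixed IP, encap type).
    Returns None when there is no association (drop, or fall through to
    default routing)."""
    assoc = ASSOCIATIONS.get(public_dst)
    if assoc is None:
        return None
    return (assoc["host"], assoc["fixed_ip"], assoc["encap"])
```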
> >>
> >> Our current idea involves using Open vSwitch for header rewriting and
> >> tunnel encapsulation, combined with a set of Ryu applications for
> >> management:
> >>
> >> https://i.imgur.com/bivSdcC.png
> >>
> >> The Ryu application uses Ryu's BGP support to announce up to the
> >> Public Routing layer individual floating IPs (/32s or /128s), which
> >> are then summarized and announced to the rest of the datacenter. If a
> >> particular floating IP is experiencing unusually large traffic (DDoS,
> >> slashdot effect, etc.), the Ryu application could change the
> >> announcements up to the Public layer to shift that traffic to
> >> dedicated hosts set up for that purpose. It also announces a single
> >> /32 "Tunnel Endpoint" IP downstream to the TunnelNet Routing system,
> >> which provides transit to and from the cells and their hypervisors.
> >> Since traffic from either direction can then end up on any of the
> >> FLIP hosts, a simple flow table to modify the MAC and IP in either
> >> the SRC or DST fields (depending on traffic direction) allows the
> >> system to be completely stateless. We have proven this out (with
> >> static routing and flows) to work reliably in a small lab setup.
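The stateless rewrite described above can be sketched as two flow-table-style rules, here in plain Python standing in for OVS flows. Addresses are illustrative, and the MAC rewrite the proposal also mentions is elided for brevity:

```python
FLIP_MAP = {"203.0.113.10": "10.0.0.5"}          # floating -> fixed
FIXED_MAP = {v: k for k, v in FLIP_MAP.items()}  # fixed -> floating

def rewrite(pkt):
    """Stateless NAT as two direction-dependent header rewrites.

    Inbound  (DST is a floating IP): rewrite DST to the tenant fixed IP.
    Outbound (SRC is a fixed IP):    rewrite SRC to the floating IP.
    No connection tracking is involved, which is why any FLIP host can
    handle either direction of any flow.
    """
    pkt = dict(pkt)  # leave the caller's packet untouched
    if pkt["dst"] in FLIP_MAP:       # north -> south
        pkt["dst"] = FLIP_MAP[pkt["dst"]]
    elif pkt["src"] in FIXED_MAP:    # south -> north
        pkt["src"] = FIXED_MAP[pkt["src"]]
    return pkt
```

Because both directions are pure functions of the header plus a shared static map, the FLIP hosts need no flow-state synchronization, which is what makes the horizontal scaling trivial.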
> >>
> >> On the hypervisor side, we currently plumb networks into separate OVS
> >> bridges. Another Ryu application would control the bridge that handles
> >> overlay networking to selectively divert traffic destined for the
> >> default gateway up to the FLIP NAT systems, taking into account any
> >> configured logical routing, while letting local L2 traffic pass out
> >> into the existing overlay fabric undisturbed.
> >>
> >> Adding support for L2VPN EVPN
> >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
> >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03)
> >> to the Ryu BGP speaker will allow the hypervisor-side Ryu application
> >> to advertise reachability information up to the FLIP system, taking
> >> into account VM failover, live-migration, and supported encapsulation
> >> types. We believe that decoupling tunnel-endpoint discovery from the
> >> control plane (Nova/Neutron) will provide for a more robust solution
> >> as well as allow for use outside of OpenStack if desired.
> >>
> >> ________________________________________
> >>
> >> Ryan Clevenger
> >> Manager, Cloud Engineering - US
> >> m: 678.548.7261
> >> e: ryan.clevenger at rackspace.com
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
>
>
>
>
>
>
> --
>
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

