[openstack-dev] [Neutron][LBaaS] Updated Use Cases Assessment and Questions

Stephen Balukoff sbalukoff at bluebox.net
Fri May 2 01:13:26 UTC 2014


Hi Trevor,

Some of these use cases are mine, I will try to clarify the ones that are
in-line:


On Thu, May 1, 2014 at 9:20 AM, Trevor Vardeman <
trevor.vardeman at rackspace.com> wrote:

>
> Use-Case 10:  I assumed this was referring to the source-IP that
> accesses the Load Balancer.  As far as I know the X-Forwarded-For header
> includes this.  To satisfy this use-case, was there some expectation to
> retrieve this information through an API request?  Also, with the
> trusted-proxy evaluation, is that being handled by the pool member, or
> was this in reference to an "access list" so-to-speak defined on the
> load balancer?
>

Actually, this would be the source IP of the load balancer itself.  That is
to say, any client on the internet can insert an X-Forwarded-For header
which, with the right server configuration, may cause an application to
attribute that client's actions to some other IP on the internet. To close
this potential security hole, a lot of web application software will only
trust the X-Forwarded-For header if the request comes from a trusted proxy.
So, in order for the back-end application to know which IPs constitute this
group of "trusted proxies" (and therefore, which requests it can trust the
X-Forwarded-For header in), the application needs some way to learn which
IPs the trusted proxies will use to originate requests. (More info on how
this works is here: http://en.wikipedia.org/wiki/X-Forwarded-For)

In the case of LBaaS, there are a couple ways to handle this problem:

   1. Provide an API interface that a user can query to get the list of
   possible source IPs for a given load balancer configuration. This is
   somewhat problematic because the list might change without notice, so the
   back-end application would have to re-check it with some regularity.
   2. Make sure that the load balancer also originates requests to the
   back-end from the VIP IP(s). This works pretty well for medium-sized
   deployments, but may break down in an active-active topology (i.e. each
   load balancer originating requests would then need a unique source IP.)
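To make the trusted-proxy logic concrete, here's a minimal Python sketch of what a back-end application does with the header. The `TRUSTED_PROXIES` set and the addresses are hypothetical -- in practice the set would be populated from option 1 (an API query) or option 2 (the VIP IPs):

```python
# Sketch: resolve the real client IP from X-Forwarded-For, trusting the
# header only when the direct peer is a known proxy. Addresses are
# illustrative, not from any real deployment.
import ipaddress

TRUSTED_PROXIES = {ipaddress.ip_address("10.0.0.5")}  # e.g. the LB's source IP

def real_client_ip(peer_ip, xff_header):
    """Return the client IP the back-end application should trust.

    peer_ip: IP of the host that connected to us directly.
    xff_header: raw X-Forwarded-For value, or None if absent.
    """
    peer = ipaddress.ip_address(peer_ip)
    if peer not in TRUSTED_PROXIES or not xff_header:
        # Not from a trusted proxy: ignore the header entirely, since
        # any internet client can forge X-Forwarded-For.
        return str(peer)
    # The right-most entry was appended by the trusted proxy itself.
    return xff_header.split(",")[-1].strip()
```

So a request arriving directly from the load balancer yields the client IP carried in the header, while a forged header from an untrusted peer is simply ignored.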

Does that clear things up a bit?



> Use-Case 20:  I do not believe much of this is handled within the LBaaS
> API, but with a different service that provides auto-scaling
> functionality.  Especially the "on-the-fly" updating of properties.
> This also becomes incredibly difficult when considering TCP session
> persistence when the possible pool member could be removed at any
> automated time.
>

This is an example of how one might handle SSH load balancing to an array
of back-end servers. It's somewhat contrived in that these were the
parameters that a potential client inquired about with us, but that we
couldn't at that time deliver in our load balancing infrastructure.

Is anyone else doing this kind of (rather convoluted) load balancing? If
not, obviously feel free to strike this one down as unnecessary in the
upcoming survey. :)


> Use-Case 25:  I think this one is referring to the functionality of a
> "draining" status for a pool member; the pool member will not receive
> any new connections, and will not force any active connection closed.
> Is that the right way to understand that use-case?
>

This was meant to be more of a "continuous deployment" or "rolling
deployment" use case.


> Use-Case 26:  Is this functionally wanting something like an "error
> page" to come up during the maintenance window?  Also, to accept only
> connections from a specific set of IPs only during the maintenance
> window, one would manually have to create an access list for the load
> balancer during the time for testing, and then either modify or remove
> it after maintenance is complete.  Does this sound like an accurate
> understanding/solution?
>

Correct-- we've seen this a number of times from our customers:  They want
a 'maintenance page' to show up for anyone connecting to the service except
their own people during a maintenance window. Letting their own people hit
the site is actually really important, because they need to verify that the
deployment went well and the site is ready for production traffic before
they open up the flood gates again. If they make the site generally
accessible too early (i.e. there was still a problem that testing by their
own people would have caught), they risk introducing bad data into their
database that's impossible to root out afterward.

Just denying connections from the general public (i.e. dropping packets or
returning 'connection refused' as a firewall would) is not acceptable to
these customers in such scenarios (i.e. it's unprofessional to not show a
maintenance page.)
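The behaviour described above can be sketched in a few lines of Python. This is a hypothetical illustration, not LBaaS API code: the `ALLOWED_TESTERS` networks stand in for the customer's access list, and `backend` stands in for whatever actually serves the site:

```python
# Sketch of the maintenance-window behaviour: clients on the customer's
# own networks reach the real site; everyone else gets a friendly
# maintenance page instead of a dropped connection or a refused socket.
import ipaddress

ALLOWED_TESTERS = [ipaddress.ip_network("192.0.2.0/24")]  # customer's office range

MAINTENANCE_PAGE = (503, "We'll be right back -- scheduled maintenance in progress.")

def route_request(client_ip, maintenance_on, backend):
    """Return (status, body): the backend's response, or the maintenance page."""
    if maintenance_on:
        ip = ipaddress.ip_address(client_ip)
        if not any(ip in net for net in ALLOWED_TESTERS):
            return MAINTENANCE_PAGE
    return backend()
```

The key point is that the untrusted client still gets a well-formed HTTP response (a 503 with a page body), rather than the firewall-style silent drop the paragraph above rules out.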


> Use-Case 37:  I'm not entirely sure what this one would mean.  I know I
> included it in the section that sounded more like features, but I was
> still curious what this one referred to.  Does this have to do with the
> desire for auto-scaling?  When a pool member gains a certain threshold
> of connections another pool member is created or chosen to handle the
> next connection(s) as they come?
>

Well, this one didn't come from me, but I think I know what it means:  By
'backup servers' I think they're probably talking about how that
terminology is used in haproxy. In haproxy, any member of a pool marked as
a 'backup' will not be used unless all other members of the pool (which
aren't marked backup) are unavailable. This could be used instead of
showing an 'error 503' page, or to serve a reduced-functionality version of
the site or whatever. In other words, this is an option to more gracefully
handle site overload situations.
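For reference, the haproxy semantics described above look like this in a backend section (server names and addresses are illustrative only):

```
# haproxy 'backup' sketch: web1/web2 serve normal traffic; sorry1 only
# receives connections once both non-backup members fail their checks.
backend www_pool
    balance roundrobin
    server web1   10.0.1.11:80 check
    server web2   10.0.1.12:80 check
    server sorry1 10.0.1.99:80 check backup
```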


>
> Please feel free to correct me anywhere I've blundered here, and if my
> proposed "solution" is inaccurate or not easily understood, I'd be more
> than happy to explain in further detail.  Thanks for any help you can
> offer!
>
> -Trevor Vardeman
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807