[openstack-dev] [Octavia] Object Model and DB Structure

Stephen Balukoff sbalukoff at bluebox.net
Sat Aug 16 18:42:30 UTC 2014


Hi Brandon,

Responses in-line:

On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Comments in-line
>
> On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
> > Hi folks,
> >
> >
> > I'm OK with going with no shareable child entities (Listeners, Pools,
> > Members, TLS-related objects, L7-related objects, etc.). This will
> > simplify a lot of things (like status reporting), and we can probably
> > safely work under the assumption that any user who has a use case in
> > which a shared entity is useful is probably also technically savvy
> > enough to not only be able to manage consistency problems themselves,
> > but is also likely to want to have that level of control.
> >
> >
> > Also, an haproxy instance should map to a single listener. This makes
> > management of the configuration template simpler and the behavior of a
> > single haproxy instance more predictable. Also, when it comes to
> > configuration updates (as will happen, say, when a new member gets
> > added to a pool), it's less risky and error prone to restart the
> > haproxy instance for just the affected listener, and not for all
> > listeners on the Octavia VM. The only down-sides I see are that we
> > consume slightly more memory, we don't have the advantage of a shared
> > SSL session cache (probably doesn't matter for 99.99% of sites using
> > TLS anyway), and certain types of persistence wouldn't carry over
> > between different listeners if they're implemented poorly by the
> > user. :/  (In other words, negligible down-sides to this.)
>
> This is fine by me for now, but I think this might be something we can
> revisit later after we have the advantage of hindsight.  Maybe a
> configurable option.
>

Sounds good, as long as we agree on a path forward. In the meantime, is
there anything I'm missing that would be a significant advantage of having
multiple Listeners configured in a single haproxy instance? (Or rather,
where a single haproxy instance maps to a loadbalancer object?)
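
(For what it's worth, here's a rough sketch of the per-listener reload
behavior I have in mind. The one-config-per-listener directory layout and
the reload_listener() helper are purely illustrative -- nothing here is a
design we've agreed on -- but it shows why a member change only has to
soft-reload the one haproxy process for the affected listener, while every
other listener on the Octavia VM keeps serving untouched.)

    import os
    import subprocess

    def reload_listener(listener_id):
        """Soft-reload only the haproxy process for one listener.

        Assumes one config file and one pidfile per listener under an
        illustrative /var/lib/octavia/<listener_id>/ directory.
        """
        base = '/var/lib/octavia/%s' % listener_id
        cfg = os.path.join(base, 'haproxy.cfg')
        pidfile = os.path.join(base, 'haproxy.pid')

        cmd = ['haproxy', '-f', cfg, '-p', pidfile]
        if os.path.exists(pidfile):
            with open(pidfile) as f:
                old_pids = f.read().split()
            # -sf asks the old process(es) to finish their connections
            # and hand the listening sockets over to the new process.
            cmd.extend(['-sf'] + old_pids)
        subprocess.check_call(cmd)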


> I have no problem with this. However, one thing I often do think about
> is that it's not really ever going to be load balancing anything with
> just a load balancer and listener.  It has to have a pool and members as
> well.  So having ACTIVE on the load balancer and listener, and still not
> really load balancing anything is a bit odd.  Which is why I'm in favor
> of only doing creates by specifying the entire tree in one call
> (loadbalancer->listeners->pool->members).  Feel free to disagree with me
> on this because I know this is not something everyone likes.  I'm sure I am
> forgetting something that makes this a hard thing to do.  But if this
> were the case, then I think only having the provisioning status on the
> load balancer makes sense again.  The reason I am advocating for the
> provisioning status on the load balancer is that it is still simpler,
> and there is only one place to look to see if everything was successful
> or if there was an issue.
>

Actually, there is one case where it makes sense to have an ACTIVE Listener
when that listener has no pools or members:  Probably the 2nd or 3rd most
common type of "load balancing" service we deploy is just an HTTP listener
on port 80 that redirects all requests to the HTTPS listener on port 443.
While this can be done using a (small) pool of back-end servers responding
to the port 80 requests, there's really no reason not to have the haproxy
instance do this redirect directly for sites that want all access to happen
over SSL. (For users that want them, we also insert HSTS headers when we do
this... but I digress. ;) )

Anyway, my point is that there is a common production use case that calls
for a listener with no pools or members.
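
(To make that concrete, a redirect-only listener boils down to nothing more
than a frontend with no backend at all. The snippet below is purely
illustrative -- it is not Octavia's actual config template -- but it shows
the sort of haproxy fragment such a listener would render to:)

    # Illustrative only: render an haproxy frontend for a listener that
    # just 301s everything on port 80 over to HTTPS. Any HSTS header
    # would be added on the TLS listener, not here.
    REDIRECT_FRONTEND = (
        "frontend %(listener_id)s\n"
        "    bind %(vip_address)s:80\n"
        "    mode http\n"
        "    # No default_backend: every request gets a 301 to HTTPS.\n"
        "    redirect scheme https code 301\n"
    )

    def render_redirect_listener(listener_id, vip_address):
        return REDIRECT_FRONTEND % {'listener_id': listener_id,
                                    'vip_address': vip_address}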


>
> Again though, I am entirely fine with what you've proposed, because it
> works great with having to create a load balancer first, then a listener,
> and so forth.  It would also work fine with a single create call as
> well.
>

We should probably create more formal API documentation, eh. :)  (Let me
pull up my drafts from 5 months ago...)
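
As a strawman for those drafts, a single-call create along the lines
Brandon describes might take a body shaped something like the dict below.
Every field name here is a placeholder of my own, not an agreed-upon
interface; the point is just that the whole tree nests under the load
balancer:

    # Hypothetical one-shot create body -- field names are placeholders.
    single_call_create = {
        'load_balancer': {
            'name': 'web-lb',
            'vip': {'subnet_id': 'SUBNET-UUID'},
            'listeners': [{
                'protocol': 'HTTP',
                'protocol_port': 80,
                'default_pool': {
                    'protocol': 'HTTP',
                    'lb_algorithm': 'ROUND_ROBIN',
                    'members': [
                        {'address': '10.0.0.10', 'protocol_port': 80},
                        {'address': '10.0.0.11', 'protocol_port': 80},
                    ],
                },
            }],
        },
    }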


> >
> > I don't think that these kinds of status are useful / appropriate for
> > Pool, Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule
> > objects, as ultimately this boils down to configuration lines in an
> > haproxy config somewhere, and really the Listener status is what will
> > be affected when things are changed.
>
> Total agreement on this.
> >
> > I'm basically in agreement with Brandon on his points with operational
> > status, though I would like to see these broken out into their various
> > meanings for the different object types. I also think some object
> > types won't need an operational status (eg. L7 Policies,
> > healthmonitors, etc.) since these essentially boil down to lines in an
> > haproxy configuration file.
>
> Yeah, I was thinking there could be more descriptive status names for the
> load balancer and listener statuses.  I was thinking the load balancer
> could have PENDING_VIP_CREATE/UPDATE/DELETE, but then that'd be painting
> us into a corner.  Something more general is needed.  With that in mind,
> the generic PENDING_CREATE/UPDATE/DELETE is adequate as long as the docs
> clearly explain what they mean for each object.
>

Right. Let's get this documented. :) Or rather-- let's get drafts of this
documentation going in gerrit so people can give specific feedback.  (I'm
happy to work on this, so long as I'm not a blocker on anything else-- I
want to make sure anyone who wants to put time into the Octavia project
knows how they can be useful, eh. It's a major pet peeve of mine to find
out after the fact that somebody was waiting on something from me, and that
this was blocking them from being productive.)
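
(To give those drafts something concrete to start from, the provisioning
statuses discussed in this thread could be captured as simply as the
constants below; only ERROR is my own addition, implied by "if there was an
issue" but not actually named above:)

    # Draft provisioning statuses for load balancers and listeners. The
    # docs should spell out what each value means for each object type.
    PENDING_CREATE = 'PENDING_CREATE'
    PENDING_UPDATE = 'PENDING_UPDATE'
    PENDING_DELETE = 'PENDING_DELETE'
    ACTIVE = 'ACTIVE'
    ERROR = 'ERROR'  # assumption: not named in the thread, but implied

    PROVISIONING_STATUSES = (PENDING_CREATE, PENDING_UPDATE,
                             PENDING_DELETE, ACTIVE, ERROR)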

Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807