[openstack-dev] [Octavia] Object Model and DB Structure

Stephen Balukoff sbalukoff at bluebox.net
Tue Aug 19 00:49:41 UTC 2014


Hi German,


On Mon, Aug 18, 2014 at 3:10 PM, Eichberger, German <
german.eichberger at hp.com> wrote:

>  No, I mean VIP in the original sense, more akin to a Floating IP…
>
>
I think that's what I was describing below. But in any case, yes-- the
model we are describing should accommodate that.



>
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Monday, August 18, 2014 2:43 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Octavia] Object Model and DB Structure
>
>
>
> German--
>
>
>
> By 'VIP' do you mean something roughly equivalent to 'loadbalancer' in the
> Neutron LBaaS object model (as we've discussed in the past)?  That is to
> say, is this thingy a parent object to the Listener in the hierarchy? If
> so, then what we're describing definitely accommodates that.
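>
> (i.e., roughly this hierarchy, with nothing shared between trees:
>
>     loadbalancer (owns the VIP)
>         listener(s)
>             pool
>                 member(s)
>
> plus the healthmonitor, TLS, and L7 objects as further children.)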
>
>
>
> (And yes, we commonly see deployments with listeners on port 80 and port
> 443 on the same virtual IP address.)
>
>
>
> Stephen
>
>
>
> On Mon, Aug 18, 2014 at 2:16 PM, Eichberger, German <
> german.eichberger at hp.com> wrote:
>
> Hi Steven,
>
>
>
> In my example we don’t share anything except the VIP :) So my motivation
> is to see if we can have two listeners share the same VIP. Hope that makes sense.
>
>
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Monday, August 18, 2014 1:39 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [Octavia] Object Model and DB Structure
>
>
>
> Yes, I'm advocating keeping each listener in a separate haproxy
> configuration (and separate running instance). This includes the example I
> mentioned: One that listens on port 80 for HTTP requests and redirects
> everything to the HTTPS listener on port 443.  (The port 80 listener is a
> simple configuration with no pool or members, and it doesn't take much to
> have it run on the same host as the port 443 listener.)
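>
> (For concreteness, the port 80 listener's entire haproxy config is roughly
> the following -- the address, hostname, and timeouts are placeholders, not
> a real template:
>
>     # standalone haproxy instance for the port 80 listener
>     global
>         daemon
>     defaults
>         mode http
>         timeout connect 5s
>         timeout client  30s
>         timeout server  30s
>     frontend listener_http
>         bind 203.0.113.10:80
>         # no pool / members: answer every request with a redirect
>         redirect prefix https://www.example.com code 301
>
> The port 443 listener gets its own config file and its own haproxy process
> on the same Octavia VM.)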
>
>
>
> I've not explored haproxy's new redirect scheme capabilities in 1.5 yet.
> Though I doubt it would have a significant impact on the operational model
> where each listener is a separate haproxy configuration and instance.
>
>
>
> German: Are you saying that the port 80 listener and port 443 listener
> would have the exact same back-end configuration? If so, then what we're
> discussing here, with no sharing of child entities, would mean that the
> customer has to set up and manage these duplicate pools and members. If
> that's not acceptable, now is the time to register that opinion, eh!
>
>
>
> Stephen
>
>
>
> On Mon, Aug 18, 2014 at 11:37 AM, Brandon Logan <
> brandon.logan at rackspace.com> wrote:
>
> Hi German,
> I don't think it is a requirement that those two frontend sections (or
> listen sections) have to live in the same config.  I thought if they
> were listening on the same IP but different ports they could be in two
> different haproxy instances.  I could be wrong though.
>
> Thanks,
> Brandon
>
>
> On Mon, 2014-08-18 at 17:21 +0000, Eichberger, German wrote:
> > Hi,
> >
> > My 2 cents for the multiple listeners per load balancer discussion: We
> > have customers who like to have a listener on port 80 and one on port 443
> > on the same VIP (we had to patch libra to allow two "listeners" in one
> > single haproxy) - so having that would be great.
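> >
> > (Sketch of what that looks like in a single haproxy config -- names and
> > addresses made up:
> >
> >     frontend listener_http
> >         bind 203.0.113.10:80
> >         default_backend pool_http
> >     frontend listener_https
> >         bind 203.0.113.10:443
> >         default_backend pool_https
> >     backend pool_http
> >         server web1 10.0.0.11:80
> >     backend pool_https
> >         server web1 10.0.0.11:443
> >
> > i.e. two listeners sharing one VIP inside one haproxy process.)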
> >
> > I like the proposed status :-)
> >
> > Thanks,
> > German
> >
> > -----Original Message-----
> > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> > Sent: Sunday, August 17, 2014 8:57 PM
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure
> >
> > Oh hello again!
> >
> > You know the drill!
> >
> > On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
> > > Hi Brandon,
> > >
> > >
> > > Responses in-line:
> > >
> > > On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
> > > <brandon.logan at rackspace.com> wrote:
> > >         Comments in-line
> > >
> > >         On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
> > >         > Hi folks,
> > >         >
> > >         >
> > >         > I'm OK with going with no shareable child entities
> > >         (Listeners, Pools,
> > >         > Members, TLS-related objects, L7-related objects, etc.).
> > >         This will
> > >         > simplify a lot of things (like status reporting), and we can
> > >         probably
> > >         > safely work under the assumption that any user who has a use
> > >         case in
> > >         > which a shared entity is useful is probably also technically
> > >         savvy
> > >         > enough to not only be able to manage consistency problems
> > >         themselves,
> > >         > but is also likely to want to have that level of control.
> > >         >
> > >         >
> > >         > Also, an haproxy instance should map to a single listener.
> > >         This makes
> > >         > management of the configuration template simpler and the
> > >         behavior of a
> > >         > single haproxy instance more predictable. Also, when it
> > >         comes to
> > >         > configuration updates (as will happen, say, when a new
> > >         member gets
> > >         > added to a pool), it's less risky and error prone to restart
> > >         the
> > >         > haproxy instance for just the affected listener, and not for
> > >         all
> > >         > listeners on the Octavia VM. The only down-sides I see are
> > >         that we
> > >         > consume slightly more memory, we don't have the advantage of
> > >         a shared
> > >         > SSL session cache (probably doesn't matter for 99.99% of
> > >         sites using
> > >         > TLS anyway), and certain types of persistence wouldn't carry
> > >         over
> > >         > between different listeners if they're implemented poorly by
> > >         the
> > >         > user. :/  (In other words, negligible down-sides to this.)
> > >
> > >
> > >         This is fine by me for now, but I think this might be
> > >         something we can
> > >         revisit later after we have the advantage of hindsight.  Maybe
> > >         a
> > >         configurable option.
> > >
> > >
> > > Sounds good, as long as we agree on a path forward. In the mean time,
> > > is there anything I'm missing which would be a significant advantage
> > > of having multiple Listeners configured in a single haproxy instance?
> > > (Or rather, where a single haproxy instance maps to a loadbalancer
> > > object?)
> >
> > No particular reason as of now.  I just feel like that could be something
> > that could hinder a particular feature or even performance in the future.
> > It's not rooted in any fact or past experience.
> >
> > >
> > >         I have no problem with this. However, one thing I often do
> > >         think about
> > >         is that it's not really ever going to be load balancing
> > >         anything with
> > >         just a load balancer and listener.  It has to have a pool and
> > >         members as
> > >         well.  So having ACTIVE on the load balancer and listener, and
> > >         still not
> > >         really load balancing anything is a bit odd.  Which is why I'm
> > >         in favor
> > >         of only doing creates by specifying the entire tree in one
> > >         call
> > >         (loadbalancer->listeners->pool->members).  Feel free to
> > >         disagree with me
> > >         on this because I know this is not something everyone likes.  I'm
> > >         sure I am
> > >         forgetting something that makes this a hard thing to do.  But
> > >         if this
> > >         were the case, then I think only having the provisioning
> > >         status on the
> > >         load balancer makes sense again.  The reason I am advocating
> > >         for the
> > >         provisioning status on the load balancer is because it is
> > >         still simpler, and there is only one place to look to see
> > >         whether everything was successful or there was an issue.
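> > >
> > >         (Concretely, I'm picturing a single request that carries the
> > >         whole tree -- field names below are illustrative only, not a
> > >         proposal:
> > >
> > >         POST /loadbalancers
> > >         {
> > >             "vip_address": "203.0.113.10",
> > >             "listeners": [{
> > >                 "protocol": "HTTP",
> > >                 "port": 80,
> > >                 "pool": {
> > >                     "members": [
> > >                         {"address": "10.0.0.11", "port": 80},
> > >                         {"address": "10.0.0.12", "port": 80}
> > >                     ]
> > >                 }
> > >             }]
> > >         }
> > >
> > >         so the provisioning status only needs to live on the load
> > >         balancer at the root of that tree.)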
> > >
> > >
> > > Actually, there is one case where it makes sense to have an ACTIVE
> > > Listener when that listener has no pools or members:  Probably the 2nd
> > > or 3rd most common type of "load balancing" service we deploy is just
> > > an HTTP listener on port 80 that redirects all requests to the HTTPS
> > > listener on port 443. While this can be done using a (small) pool of
> > > back-end servers responding to the port 80 requests, there's really no
> > > point in not having the haproxy instance do this redirect directly for
> > > sites that want all access to happen over SSL. (For users that want
> > > them we also insert HSTS headers when we do this... but I digress. ;)
> > > )
> > >
> > >
> > > Anyway, my point is that there is a common production use case that
> > > calls for a listener with no pools or members.
> >
> > Yeah we do HTTPS redirect too (or HTTP redirect as I would call it... I
> > could digress myself).  I don't think it's common for our customers, but it
> > obviously should still be supported.  Also, wouldn't that break the
> > one-listener-per-instance rule? Also also, I think haproxy 1.5 has a
> > "redirect scheme" option that might do away with the extra frontend
> > section.  I could be wrong though.
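> >
> > (If I'm reading the 1.5 docs right, it'd be a single frontend bound to
> > both ports with something like
> >
> >     redirect scheme https code 301 if !{ ssl_fc }
> >
> > instead of a separate port 80 frontend -- but again, untested on my end.)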
> >
> > >
> > >
> > >         Again though, what you've proposed I am entirely fine with
> > >         because it
> > >         works great with having to create a load balancer first, then
> > >         listener,
> > >         and so forth.  It would also work fine with a single create
> > >         call as
> > >         well.
> > >
> > >
> > > We should probably create more formal API documentation, eh. :)  (Let
> > > me pull up my drafts from 5 months ago...)
> >
> > What I'm hoping the API will look like is a bit different from those
> > drafts, though still similar.  So they're probably a good starting point.
> > Then again, the neutron lbaas api google doc is probably a good one too.
> >
> > >
> > >         >
> > >         > I don't think that these kinds of status are useful /
> > >         appropriate for
> > >         > Pool, Member, Healthmonitor, TLS certificate id, or L7
> > >         Policy / Rule
> > >         > objects, as ultimately this boils down to configuration
> > >         lines in an
> > >         > haproxy config somewhere, and really the Listener status is
> > >         what will
> > >         > be affected when things are changed.
> > >
> > >
> > >         Total agreement on this.
> > >         >
> > >         > I'm basically in agreement with Brandon on his points with
> > >         operational
> > >         > status, though I would like to see these broken out into
> > >         their various
> > >         > meanings for the different object types. I also think some
> > >         object
> > >         > types won't need an operational status (eg. L7 Policies,
> > >         > healthmonitors, etc.) since these essentially boil down to
> > >         lines in an
> > >         > haproxy configuration file.
> > >
> > >
> > >         Yeah, I was thinking there could be more descriptive status names for
> > >         the load
> > >         balancer and listener statuses.  I was thinking load balancer
> > >         could have
> > >         PENDING_VIP_CREATE/UPDATE/DELETE, but then that'd be painting
> > >         us into a
> > >         corner.  Something more general is needed.  With that in mind,
> > >         the generic PENDING_CREATE/UPDATE/DELETE is adequate as long as
> > >         the docs explain clearly what they mean for each object.
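> > >
> > >         (So just the one generic set for every object that carries a
> > >         provisioning status -- rough sketch, constant names illustrative:
> > >
> > >             PENDING_CREATE = 'PENDING_CREATE'  # accepted, not yet deployed
> > >             PENDING_UPDATE = 'PENDING_UPDATE'  # config change in flight
> > >             PENDING_DELETE = 'PENDING_DELETE'  # teardown in flight
> > >             ACTIVE = 'ACTIVE'                  # deployed and serving
> > >
> > >         with the per-object meaning spelled out in the docs.)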
> > >
> > >
> > > Right. Let's get this documented. :) Or rather-- let's get drafts of
> > > this documentation going in gerrit so people can give specific
> > > feedback.  (I'm happy to work on this, so long as I'm not a blocker on
> > > anything else-- I want to make sure anyone who wants to put time into
> > > the Octavia project knows how they can be useful, eh. It's a major pet
> > > peeve of mine to find out after the fact that somebody was waiting on
> > > something for me, and that this was a blocker for them being
> > > productive.)
> >
> > I like your documentation skills and attention to detail, so please take
> > it on if you don't mind, unless someone else wants something to do.
> >
> > >
> > > Stephen
> > >
> > >
> > >
> > > --
> > > Stephen Balukoff
> > > Blue Box Group, LLC
> > > (800)613-4305 x807
>
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
>
>
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

