[openstack-dev] [Octavia] Object Model and DB Structure

Stephen Balukoff sbalukoff at bluebox.net
Sat Aug 16 00:18:46 UTC 2014


Hi folks,

I'm OK with going with no shareable child entities (Listeners, Pools,
Members, TLS-related objects, L7-related objects, etc.). This will simplify
a lot of things (like status reporting), and we can probably safely assume
that any user with a use case for shared entities is technically savvy
enough to manage the consistency problems themselves, and probably wants
that level of control anyway.

Also, an haproxy instance should map to a single listener. This makes
management of the configuration template simpler and the behavior of a
single haproxy instance more predictable. And when it comes to
configuration updates (as will happen, say, when a new member gets added to
a pool), it's less risky and error-prone to restart the haproxy instance
for just the affected listener rather than all listeners on the Octavia
VM. The only downsides I see are that we consume slightly more memory, we
don't get a shared SSL session cache (which probably doesn't matter for
99.99% of sites using TLS anyway), and certain types of persistence
wouldn't carry over between listeners if they're implemented poorly by the
user. :/  (In other words, negligible downsides.)

Other upsides: This allows us to set "global" haproxy settings differently
per listener as appropriate. (ex. It might make sense to have one of the
several forms of keepalive enabled for the TERMINATED_HTTPS listener for
performance reasons, but to disable keepalive for the HTTP listener for
different performance reasons.)
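
To illustrate (this is just a hand-written sketch of the idea, not an
actual Octavia-rendered config -- the addresses, cert path, and timeouts
are made up), the two listeners' configs would live in separate files for
separate haproxy processes and could differ like so:

    # Sketch only; backend sections omitted.

    # Listener 1: TERMINATED_HTTPS, keepalive enabled
    defaults
        mode http
        option http-keep-alive
        timeout http-keep-alive 10s

    frontend listener_terminated_https
        bind 203.0.113.10:443 ssl crt /etc/octavia/certs/example.pem
        default_backend pool_https

    # Listener 2: HTTP, keepalive disabled
    defaults
        mode http
        option httpclose

    frontend listener_http
        bind 203.0.113.10:80
        default_backend pool_http

Since each listener gets its own process, tuning or reloading one never
touches the other.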

I do want to note, though, that this also affects the discussion on statuses:

On the statuses: If we're using a separate haproxy instance per listener,
I think the loadbalancer and listener objects each have different (and
appropriate) status needs. Specifically, here's what I'm thinking regarding
the statuses and what they mean:

Loadbalancer:
  PENDING_CREATE: VIP address is being assigned (reserved, or put on a
port) in Neutron, or is being allocated on Octavia VMs.
  ACTIVE: VIP address is up and running on at least one Octavia VM (ex. a
ping check would succeed, assuming no blocking firewall rules)
  PENDING_DELETE: VIP address is being removed from Octavia VM(s) and
reservation in Neutron released
 (Is there any need for a PENDING_UPDATE status for a loadbalancer?
Shouldn't the vip_address be immutable after it's created?)

Listener:
 PENDING_CREATE: A new Listener haproxy configuration is being created on
Octavia VM(s)
 PENDING_UPDATE: An existing Listener haproxy configuration is being
updated on Octavia VM(s)
 PENDING_DELETE: Listener haproxy configuration is about to be deleted off
associated Octavia VM(s)
 ACTIVE: haproxy Listener is up and running (ex. responds to TCP SYN check).

I don't think these kinds of statuses are useful or appropriate for Pool,
Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule objects, as
ultimately these boil down to configuration lines in an haproxy config
somewhere, and the Listener status is really what will be affected when
things are changed.

I'm basically in agreement with Brandon on his points about operational
status, though I would like to see these broken out into their various
meanings for the different object types. I also think some object types
won't need an operational status (e.g. L7 Policies, healthmonitors, etc.)
since these essentially boil down to lines in an haproxy configuration file.
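
To make the split concrete, here's a rough Python-ish sketch of where each
kind of status would live (illustrative only -- the class and attribute
names are made up, not actual Octavia code):

    # Illustrative sketch only -- names are made up.

    # Provisioning status lives only on the root object and on listeners.
    LB_PROVISIONING = ('PENDING_CREATE', 'ACTIVE', 'PENDING_DELETE')
    LISTENER_PROVISIONING = ('PENDING_CREATE', 'PENDING_UPDATE',
                             'PENDING_DELETE', 'ACTIVE')

    # Operating status lives on pools and members (per Brandon's list).
    OPERATING = ('ONLINE', 'DEGRADED', 'OFFLINE')

    class LoadBalancer(object):
        def __init__(self, vip_address):
            self.vip_address = vip_address       # immutable after create?
            self.provisioning_status = 'PENDING_CREATE'
            self.listeners = []                  # 1:M, not shareable

    class Listener(object):
        def __init__(self, protocol, port):
            self.protocol = protocol             # HTTP, TERMINATED_HTTPS, ...
            self.port = port
            self.provisioning_status = 'PENDING_CREATE'
            self.default_pool = None             # pools/members carry only
                                                 # an operating_status

    class Pool(object):
        def __init__(self):
            self.operating_status = 'ONLINE'
            self.members = []                    # each member also has only
                                                 # an operating_status

Healthmonitors, L7 policies/rules, TLS certificate ids, etc. would just
hang off these without any status fields of their own.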

Does this make sense?

Stephen



On Fri, Aug 15, 2014 at 3:10 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Yeah, need details on that.  Maybe he's talking about having haproxy
> listen on many IPs and ports, each one being a separate frontend
> section in the haproxy config, with each mapped to its own
> default_backend.
>
> Even if that is the case, the load balancer + listener would still make
> up one of those frontends, so the mapping would still be correct.
> Though, maybe a different structure would make more sense if that is the
> case.
>
> Also, I've created a WIP review of the initial database structure:
> https://review.openstack.org/#/c/114671/
>
> Added my own comments so everyone please look at that.  Stephen, if you
> could comment on what German mentioned that'd be great.
>
> Have a good weekend!
>
> -Brandon
>
> On Fri, 2014-08-15 at 20:34 +0000, Eichberger, German wrote:
> > --Basically no shareable entities.
> > +1
> >
> > That will make me insanely happy :-)
> >
> > Regarding Listeners: I was assuming that a LoadBalancer would map to an
> haproxy instance - and a listener would be part of that haproxy. But I
> heard Stephen say that this is not so clear cut. So maybe listeners map to
> haproxy instances...
> >
> > German
> >
> > -----Original Message-----
> > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> > Sent: Thursday, August 14, 2014 10:17 PM
> > To: openstack-dev at lists.openstack.org
> > Subject: [openstack-dev] [Octavia] Object Model and DB Structure
> >
> > So I've been assuming that the Octavia object model would be an exact
> copy of the neutron lbaas one with additional information for Octavia.
> > However, after thinking about it I'm not sure this is the right way to
> go because the object model in neutron lbaas may change in the future, and
> Octavia can't just change its object model when neutron lbaas/openstack
> lbaas changes its object model.  So if there are any lessons learned we
> would like to apply to Octavia's object model, now is the time.
> >
> > Entity name changes are also on the table if people don't really like
> some of the names.  Even adding new entities or removing entities if there
> are good reasons isn't out of the question.
> >
> > Anyway here are a few of my suggestions.  Please add on to this if you
> want.  Also, just flat out tell me I'm wrong on some of these suggestions
> if you feel that way.
> >
> > A few improvements I'd suggest (using the current entity names):
> > -A real root object that is the only top level object (loadbalancer).
> > --This would be 1:M relationship with Listeners, but Listeners would
> only be children of loadbalancers.
> > --Pools, Members, and Health Monitors would follow the same workflow.
> > --Basically no shareable entities.
> >
> > -Provisioning status only on the root object (loadbalancer).
> > --PENDING_CREATE, PENDING_UPDATE, PENDING_DELETE, ACTIVE (No need for a
> DEFERRED status! YAY!)
> > --Also maybe a DELETED status.
> >
> > -Operating status on other entities
> > --ACTIVE or ONLINE, DEGRADED, INACTIVE or OFFLINE
> > --Pools and Members
> > --Listeners have been mentioned but I'd like to hear more details on that.
> >
> > -Adding a status_description field, or something similar.  Would only
> exist on the loadbalancer entity if loadbalancer is the only top level object.
> >
> > Thanks,
> > Brandon
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807