[openstack-dev] [Octavia] Object Model and DB Structure
Brandon Logan
brandon.logan at RACKSPACE.COM
Sat Aug 16 04:43:01 UTC 2014
Comments in-line
On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
> Hi folks,
>
>
> I'm OK with going with no shareable child entities (Listeners, Pools,
> Members, TLS-related objects, L7-related objects, etc.). This will
> simplify a lot of things (like status reporting), and we can probably
> safely work under the assumption that any user who has a use case in
> which a shared entity is useful is probably also technically savvy
> enough to not only be able to manage consistency problems themselves,
> but is also likely to want to have that level of control.
>
>
> Also, an haproxy instance should map to a single listener. This makes
> management of the configuration template simpler and the behavior of a
> single haproxy instance more predictable. Also, when it comes to
> configuration updates (as will happen, say, when a new member gets
> added to a pool), it's less risky and error prone to restart the
> haproxy instance for just the affected listener, and not for all
> listeners on the Octavia VM. The only down-sides I see are that we
> consume slightly more memory, we don't have the advantage of a shared
> SSL session cache (probably doesn't matter for 99.99% of sites using
> TLS anyway), and certain types of persistence wouldn't carry over
> between different listeners if they're implemented poorly by the
> user. :/ (In other words, negligible down-sides to this.)
This is fine by me for now, but I think this might be something we can
revisit later once we have the advantage of hindsight. Maybe it could
become a configurable option.
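To make the one-haproxy-per-listener idea concrete, here is a minimal sketch (hypothetical function and field names, not actual Octavia code) of rendering a standalone config for a single listener. Because each listener owns its own process and config file, a member change only requires reloading that one listener's haproxy:

```python
def render_listener_config(listener):
    """Render a standalone haproxy config for one listener.

    One haproxy process per listener means "global" settings
    (e.g. keepalive behavior) can differ per listener.
    """
    mode = "tcp" if listener["protocol"] == "TCP" else "http"
    lines = [
        "global",
        "    daemon",
        "",
        "defaults",
        "    mode %s" % mode,
        "    timeout connect 5s",
        "    timeout client 30s",
        "    timeout server 30s",
        "",
        "frontend %s" % listener["id"],
        "    bind %s:%d" % (listener["vip"], listener["port"]),
        "    default_backend %s-pool" % listener["id"],
        "",
        "backend %s-pool" % listener["id"],
    ]
    for i, member in enumerate(listener["members"]):
        lines.append("    server member%d %s:%d"
                     % (i, member["address"], member["port"]))
    return "\n".join(lines) + "\n"


def reload_command(cfg_path, pid_path):
    # Build (but don't run) the command to soft-reload just this
    # listener's process; -sf hands over from the old PIDs, leaving
    # every other listener on the Octavia VM untouched.
    return ["haproxy", "-f", cfg_path, "-p", pid_path, "-sf"]
```

The sketch only builds strings and a command list; actually writing the file and execing haproxy is left out on purpose.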
> Other upsides: This allows us to set different "global" haproxy
> settings differently per listener as appropriate. (ex. It might make
> sense to have one of the several forms of keepalive enabled for the
> TERMINATED_HTTPS listener for performance reasons, but disable
> keepalive for the HTTP listener for different performance reasons.)
>
>
> I do want to note though, that this also affects the discussion on
> statuses:
>
>
> On the statuses: If we're using a separate haproxy instance per
> listener, I think that probably both the loadbalancer and listener
> objects have different needs here that are appropriate. Specifically,
> this is what I'm thinking, regarding the statuses and what they mean:
>
>
> Loadbalancer:
> PENDING_CREATE: VIP address is being assigned (reserved, or put on a
> port) in Neutron, or is being allocated on Octavia VMs.
> ACTIVE: VIP address is up and running on at least one Octavia VM
> (ex. a ping check would succeed, assuming no blocking firewall rules)
> PENDING_DELETE: VIP address is being removed from Octavia VM(s) and
> reservation in Neutron released
> (Is there any need for a PENDING_UPDATE status for a loadbalancer?
> Shouldn't the vip_address be immutable after it's created?)
>
>
> Listener:
> PENDING_CREATE: A new Listener haproxy configuration is being created
> on Octavia VM(s)
> PENDING_UPDATE: An existing Listener haproxy configuration is being
> updated on Octavia VM(s)
> PENDING_DELETE: Listener haproxy configuration is about to be deleted
> off associated Octavia VM(s)
> ACTIVE: haproxy Listener is up and running (ex. responds to TCP SYN
> check).
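The lifecycles above can be sketched as simple transition tables (an illustration of the proposal, not an agreed-upon state machine); note the load balancer has no PENDING_UPDATE row because the vip_address would be immutable:

```python
# Allowed provisioning-status transitions per object type, as proposed
# above. None = object does not exist yet. Transition sets are
# illustrative guesses, not a settled design.
LOADBALANCER_TRANSITIONS = {
    None: {"PENDING_CREATE"},
    "PENDING_CREATE": {"ACTIVE"},
    "ACTIVE": {"PENDING_DELETE"},  # no PENDING_UPDATE: VIP is immutable
    "PENDING_DELETE": set(),
}

LISTENER_TRANSITIONS = {
    None: {"PENDING_CREATE"},
    "PENDING_CREATE": {"ACTIVE"},
    "ACTIVE": {"PENDING_UPDATE", "PENDING_DELETE"},
    "PENDING_UPDATE": {"ACTIVE"},
    "PENDING_DELETE": set(),
}


def can_transition(table, current, new):
    # True if the proposed status change is legal for this object type.
    return new in table.get(current, set())
```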
I have no problem with this. However, one thing I often think about
is that it's not really ever going to be load balancing anything with
just a load balancer and a listener. It has to have a pool and members as
well. So having ACTIVE on the load balancer and listener while still not
really load balancing anything is a bit odd. That is why I'm in favor
of only doing creates by specifying the entire tree in one call
(loadbalancer->listeners->pool->members). Feel free to disagree with me
on this because I know it's not something everyone likes. I'm sure I am
forgetting something that makes this a hard thing to do. But if this
were the case, then I think only having the provisioning status on the
load balancer makes sense again. The reason I am advocating for the
provisioning status on the load balancer is that it is still simpler,
and there is only one place to look to see whether everything succeeded
or something went wrong.
Again though, I am entirely fine with what you've proposed, because it
works great with having to create a load balancer first, then a listener,
and so forth. It would also work fine with a single create call.
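A hypothetical single-call create body for the entire tree might look like the following (field names are guesses modeled on the neutron lbaas API, not a settled format). With a root-only provisioning status, ACTIVE would then describe the whole tree:

```python
# Hypothetical request body for a one-shot create of
# loadbalancer -> listeners -> pool -> members.
single_create = {
    "loadbalancer": {
        "vip_address": "203.0.113.10",
        "listeners": [
            {
                "protocol": "HTTP",
                "protocol_port": 80,
                "pool": {
                    "lb_algorithm": "ROUND_ROBIN",
                    "members": [
                        {"address": "10.0.0.5", "protocol_port": 8080},
                        {"address": "10.0.0.6", "protocol_port": 8080},
                    ],
                },
            }
        ],
    }
}


def is_load_balancing(body):
    """ACTIVE on the root is only meaningful if traffic can actually
    flow: at least one listener whose pool has at least one member."""
    return any(
        listener.get("pool", {}).get("members")
        for listener in body["loadbalancer"].get("listeners", [])
    )
```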
>
> I don't think that these kinds of status are useful / appropriate for
> Pool, Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule
> objects, as ultimately this boils down to configuration lines in an
> haproxy config somewhere, and really the Listener status is what will
> be affected when things are changed.
Total agreement on this.
>
> I'm basically in agreement with Brandon on his points with operational
> status, though I would like to see these broken out into their various
> meanings for the different object types. I also think some object
> types won't need an operational status (eg. L7 Policies,
> healthmonitors, etc.) since these essentially boil down to lines in an
> haproxy configuration file.
Yeah, I was thinking there could be more descriptive status names for the
load balancer and listener statuses. I was thinking the load balancer
could have PENDING_VIP_CREATE/UPDATE/DELETE, but that would be painting
us into a corner; something more general is needed. With that in mind,
the generic PENDING_CREATE/UPDATE/DELETE is adequate as long as the docs
clearly explain what they mean for each object.
>
> Does this make sense?
Indeed.
>
> Stephen
>
>
>
>
> On Fri, Aug 15, 2014 at 3:10 PM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
> Yeah, we need details on that. Maybe he's talking about having
> haproxy listen on many IPs and ports, each one being a separate
> frontend section in the haproxy config, with each mapped to its
> own default_backend.
>
> Even if that is the case, the load balancer + listener would
> still make up one of those frontends, so the mapping would
> still be correct.
> Though, maybe a different structure would make more sense if
> that is the
> case.
>
> Also, I've created a WIP review of the initial database
> structure:
> https://review.openstack.org/#/c/114671/
>
> Added my own comments so everyone please look at that.
> Stephen, if you
> could comment on what German mentioned that'd be great.
>
> Have a good weekend!
>
> -Brandon
>
> On Fri, 2014-08-15 at 20:34 +0000, Eichberger, German wrote:
> > --Basically no shareable entities.
> > +1
> >
> > That will make me insanely happy :-)
> >
> > Regarding Listeners: I was assuming that a LoadBalancer
> would map to an haproxy instance, and a listener would be
> part of that haproxy. But I heard Stephen say that this is not
> so clear cut. So maybe listeners map to haproxy instances...
> >
> > German
> >
> > -----Original Message-----
> > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> > Sent: Thursday, August 14, 2014 10:17 PM
> > To: openstack-dev at lists.openstack.org
> > Subject: [openstack-dev] [Octavia] Object Model and DB
> Structure
> >
> > So I've been assuming that the Octavia object model would be
> an exact copy of the neutron lbaas one with additional
> information for Octavia.
> > However, after thinking about it I'm not sure this is the
> right way to go, because the object model in neutron lbaas may
> change in the future, and Octavia can't just change its
> object model whenever neutron lbaas/openstack lbaas changes its
> object model. So if there are any lessons learned we would
> like to apply to Octavia's object model, now is the time.
> >
> > Entity name changes are also on the table if people don't
> really like some of the names. Even adding new entities or
> removing entities if there are good reasons isn't out of the
> question.
> >
> > Anyway, here are a few of my suggestions. Please add on to
> this if you want. Also, just flat out tell me I'm wrong on
> some of these suggestions if you feel that way.
> >
> > A few improvements I'd suggest (using the current entity
> names):
> > -A real root object that is the only top level object
> (loadbalancer).
> > --This would be 1:M relationship with Listeners, but
> Listeners would only be children of loadbalancers.
> > --Pools, Members, and Health Monitors would follow the same
> workflow.
> > --Basically no shareable entities.
> >
> > -Provisioning status only on the root object (loadbalancer).
> > --PENDING_CREATE, PENDING_UPDATE, PENDING_DELETE, ACTIVE (No
> need for a DEFERRED status! YAY!)
> > --Also maybe a DELETED status.
> >
> > -Operating status on other entities
> > --ACTIVE or ONLINE, DEGRADED, INACTIVE or OFFLINE
> > --Pools and Members
> > --Listeners have been mentioned but I'd like to hear more
> details on that.
> >
> > -Adding a status_description field, or something similar. It
> would only exist on the loadbalancer entity if loadbalancer is
> the only top level object.
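The strictly hierarchical model in these bullets (one root object, 1:M down the tree, nothing shared) could look roughly like this in code. These are illustrative classes only, not the proposed schema; the actual WIP DB structure is in the review linked earlier in the thread:

```python
class LoadBalancer:
    """Single top-level object; owns its listeners exclusively."""
    def __init__(self, vip_address):
        self.vip_address = vip_address
        # Provisioning status lives only on the root object.
        self.provisioning_status = "PENDING_CREATE"
        self.status_description = None
        self.listeners = []


class Listener:
    def __init__(self, loadbalancer, protocol, port):
        self.loadbalancer = loadbalancer  # exactly one parent, never shared
        loadbalancer.listeners.append(self)
        self.protocol = protocol
        self.port = port
        self.operating_status = "OFFLINE"
        self.pool = None


class Pool:
    def __init__(self, listener, lb_algorithm):
        self.listener = listener  # tied to a single listener
        listener.pool = self
        self.lb_algorithm = lb_algorithm
        self.operating_status = "OFFLINE"
        self.members = []


class Member:
    def __init__(self, pool, address, port):
        self.pool = pool  # tied to a single pool
        pool.members.append(self)
        self.address = address
        self.port = port
        self.operating_status = "OFFLINE"
```

Because every constructor requires its parent, sharing a child between two parents is impossible by construction, which is the point of the "no shareable entities" rule.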
> >
> > Thanks,
> > Brandon
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807