[openstack-dev] [Neutron][LBaaS] Object Model discussion

Stephen Balukoff sbalukoff at bluebox.net
Thu Feb 20 04:46:58 UTC 2014


Hi guys!

This is a great discussion, and I'm glad y'all have been participating in
it thus far, eh! Thanks also for your patience digesting my mile-long posts.

My comments are in-line:


On Wed, Feb 19, 2014 at 3:47 PM, Youcef Laribi <Youcef.Laribi at citrix.com> wrote:

>  Hi guys,
>
>
>
> I have been catching up on this interesting thread around the object
> model, so sorry in advance to jump in late in this debate, and if I missed
> some of the subtleties of the points being made so far.
>
>
>
> I tend to agree with Sam that the original intention of the current object
> model was never tied to a physical deployment. We seem to be confusing the
> tenant-facing object model which is completely logical (albeit with some
> “properties” or “qualities” that a tenant can express) from the
> deployment/implementation aspects of such a logical model (things like
> cluster/HA, one vs. multiple backends, virtual appliance vs. OS process,
> etc). We discussed in the past the need for an Admin API (separate from
> the tenant API) where a cloud administrator (as opposed to a tenant) could
> manage the deployment aspects, and could construct different offerings that
> can be exposed to a tenant, but in the absence of such an admin API (which
> would necessarily be very technology-specific), this responsibility is
> currently shouldered by the drivers.
>

Looking at the original object model (though I wasn't here when it was
designed), I suspect the original intent was to duplicate the
functionality of one major cloud provider's load balancing service and to
keep things as simple as possible. Keeping things as simple as they can be
is of course a desirable goal, but unfortunately the current object model
is too simplistic to support a lot of really desirable features that cloud
tenants are asking for. (Hence the addition of L7 and SSL necessitating a
model change, for example.)

I'm still of the opinion that HA at least should be one of these features--
and although it does speak to topology considerations, it should still be
doable in a purely logical way for the generic case. And I feel pretty
strongly that intelligence around core features (of which I'd say HA
capability is one-- I know of no commercial load balancer solution that
doesn't support HA in some form) should not be delegated solely to drivers.
In addition to intelligence around HA, the lack of visibility into the
components that do the actual load balancing is going to complicate other
features as well, like auto-provisioning of load balancing appliances or
pseudo-appliances, statistics and monitoring, and scaling. And again, the
more of these features we delegate to drivers, the more clients are likely
to experience vendor lock-in due to specific driver implementations being
different.

Maybe we should revisit the discussion around the need for an Admin API?
I'm not convinced that all admin API features would be tied to any specific
technology. :/  Most active-standby HA configurations, for example, use
some form of floating IP to achieve this (in fact, I can't think of any
that don't right now). And although specific implementations of how this is
done will vary, a 'floating IP' is a common feature here.
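
To make this a bit more concrete, here's a rough sketch (in Python, with
entirely made-up names-- this is not a proposed API) of how active-standby
HA could live in the logical model without exposing devices to the tenant:

    # Sketch only: hypothetical objects illustrating HA as a logical
    # property. The floating VIP is the only address the tenant sees;
    # which appliance currently answers on it is a driver/operator concern.
    from dataclasses import dataclass, field

    @dataclass
    class HAConfig:
        topology: str = "active_standby"   # or "single", "active_active"
        floating_vip_address: str = ""

    @dataclass
    class ApplianceInstance:
        role: str            # "active" or "standby", managed by the backend
        management_ip: str   # visible to a cloud admin API, not the tenant

    @dataclass
    class LoadBalancer:
        name: str
        ha: HAConfig = field(default_factory=HAConfig)
        # Hidden from tenants; a cloud admin API could expose these:
        instances: list = field(default_factory=list)

The point being: the tenant asks for a topology and gets a floating IP;
everything below that line stays operational.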


> IMO a tenant should only care about whether VIPs/Pools are grouped
> together to the extent that the provider allows the tenant to express such
> a preference. Some providers will allow their tenants to express such a
> preference (e.g. because it might impact cost), and others might not as it
> wouldn’t make sense in their implementation.
>

Remind me to tell you about the futility of telling a client what he or she
should want sometime. :)

In all seriousness, though, we should come to a decision as to whether we
allow a tenant to make such decisions, and if so, exactly how far we let
them trespass onto operational / implementation concerns. Keep in mind that
what we decide here also directly impacts a tenant's ability to deploy load
balancing on a specific vendor's appliance. (Which, I've been led to
believe, is a feature some tenants are going to demand.)

I've heard some talk of a concept of 'flavors' which might solve this
problem, but I've not seen enough detail about this to be able to register
an opinion on it. In the absence of a better idea, I'm still plugging for
that whole "cluster + loadbalancer" concept alluded to in my #4.1 diagram
in this e-mail thread. :)
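
For what it's worth, here's the sort of thing I imagine when people say
'flavors'-- purely hypothetical structure and field names, just to have
something concrete to argue about:

    # Hypothetical sketch: a flavor as an operator-curated bundle of
    # capabilities that gets matched against drivers/providers.
    FLAVORS = {
        "basic": {
            "ha": False,
            "l7_rules": False,
            "ssl_termination": False,
            "providers": ["haproxy-namespace"],
        },
        "premium-ha": {
            "ha": True,
            "l7_rules": True,
            "ssl_termination": True,
            "providers": ["vendor-appliance-a", "vendor-appliance-b"],
        },
    }

    def eligible_providers(flavor_name, required_features):
        """Return providers whose flavor advertises every required feature."""
        flavor = FLAVORS[flavor_name]
        if all(flavor.get(feature) for feature in required_features):
            return flavor["providers"]
        return []

If something like that were fleshed out, it might well cover the
"cluster + loadbalancer" use case too.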


> Also, the mapping between pool and backend is not necessarily 1:1, and is
> not necessarily established when the pool is created, as this is purely a
> driver implementation decision (I know that current implementations are
> like this, but another driver can choose a different approach). A driver
> could, for example, delay mapping a pool to a backend until a full LB
> configuration is completed (when the pool has members and a VIP is attached
> to the pool). A driver can also move these resources around between
> backends if it finds it put them in a non-optimal backend initially. As
> long as the logical model is realized and remains consistent from the
> tenant's point of view, implementations should be free to achieve that goal
> in any way they see fit.
>

I agree partially, though there are technological considerations which will
influence this, of course. (e.g. Sometimes it's going to be really important
that the 'http' and 'https' listeners be collocated on the same back-end
because they have to share the same IP address.)
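
To illustrate that constraint (sketch only, hypothetical structures):

    # Listeners that share a VIP (same IP, different ports/protocols)
    # form a single scheduling unit and must land on the same backend.
    from collections import defaultdict

    listeners = [
        {"name": "web-http",  "protocol": "HTTP",  "port": 80,
         "vip": "203.0.113.10"},
        {"name": "web-https", "protocol": "HTTPS", "port": 443,
         "vip": "203.0.113.10"},
    ]

    def colocation_groups(listeners):
        """Group listeners by VIP; each group must be scheduled together."""
        groups = defaultdict(list)
        for listener in listeners:
            groups[listener["vip"]].append(listener["name"])
        return dict(groups)

    # colocation_groups(listeners) ->
    #     {'203.0.113.10': ['web-http', 'web-https']}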

I would also argue that specifying a given HA topology is something tenants
should generally be allowed to do. (Though not necessarily which specific
devices get used, assuming there is a fleet of capable devices deployed.)
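
Incidentally, Youcef's "delay mapping a pool to a backend" idea above makes
sense to me. A rough sketch of what I imagine a driver doing (again, a
hypothetical interface, not the actual Neutron driver API):

    # Sketch: a driver that defers picking a backend until the logical
    # configuration is complete (VIP attached to a pool with members).
    class DeferredMappingDriver:
        def __init__(self, scheduler):
            self.scheduler = scheduler
            self.pending = {}      # lb_id -> accumulated logical config

        def update(self, lb_id, config):
            self.pending[lb_id] = config
            if self._is_complete(config):
                # Only now do we consume capacity on a backend.
                backend = self.scheduler.pick_backend(config)
                backend.deploy(config)
                del self.pending[lb_id]

        @staticmethod
        def _is_complete(config):
            return bool(config.get("vip")
                        and config.get("pool")
                        and config["pool"].get("members"))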


> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>
> Sent: Wednesday, February 19, 2014 8:23 AM
> To: Samuel Bercovici
> Cc: OpenStack Development Mailing List; Mark McClain; Salvatore
> Orlando; sbalukoff at bluebox.net; Youcef Laribi; Avishay Balderman
> Subject: Re: [Neutron][LBaaS] Object Model discussion
>
>
>
> Hi Sam,
>
>
>
> My comments inline:
>
>
>
> On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici <SamuelB at radware.com>
> wrote:
>
> Hi,
>
>
>
> I think we're mixing different aspects of operations, and trying to solve a
> non-“problem”.
>
> Not really. The advanced features we're trying to introduce are incompatible
> with both the object model and the API.
>

+1


>
>  From APIs/Operations we are mixing the following models:
>
> 1.       Logical model (which as far as I understand is the topic of this
> discussion) - tenants define what they need logically: vip->default_pool,
> l7 association, ssl, etc.
>
>  That's correct. A tenant may or may not care about how things are grouped
> on the backend. We need to support both cases.
>

+1

Also, I would argue that specific HA topologies (though not necessarily the
devices they map to) are still part of that 'logical model'.

>    2.       Physical model – operators / vendors install and specify how
> the backend gets implemented.
>
I talked about the benefits of the cloud from the perspective of the
operator / vendor in a previous post. In any case, to realize many of these
benefits the cloud needs to be aware of physical components, though
specific intelligence about these will generally be hidden from tenants.

(Maybe I should have been talking about the need for a "cloud
administrator's" API or interface?)


>  3.       Deploying 1 on 2 – this is currently the driver’s
> responsibility. We can consider making it better but this should not impact
> the logical model.
>
>  I think grouping vips and pools is an important part of the logical model,
> even if some users may not care about it.
>

+1

Just because some users don't care doesn't mean all users won't care.
Again, in my experience, users don't care until they do (i.e. a specific
feature becomes important to their business needs that was previously
unimportant). So, I'm generally in favor of allowing for the simplest
workflow possible for users (and following the principle of least surprise
when picking defaults on their behalf), but doing this in a model which
also supports advanced features without having to resort to annoying hacks
(like delegating major features to drivers. :P )


>
>
>  I think this is not a “problem”.
>
> In a logical model, a pool which is part of an L7 policy is a logical object
> which could be placed at any backend, alongside any existing vip<->pool, and
> accordingly configure the backend that those vip<->pool pairs are deployed on.
>
>   That's not how it currently works - that's why we're trying to address
> it. Having pools shareable between backends at least requires moving the
> 'instance' role from the pool to some other entity, and that also changes a
> number of API aspects.
>

>
>  If the same pool that was part of an l7 association will also be
> connected to a vip as a default pool, then by all means this new vip<->pool
> pair can be instantiated into some back end.
>
> The proposal to not allow this (e.g. only allow pools that are connected to
> the same lb-instance to be used for l7 association) brings the physical
> model into the logical model.
>
>  So the proposal tries to address 2 issues:
>
> 1) in many cases it is desirable to know about grouping of logical objects
> on the backend
>

...and occasionally dictate that grouping from the API.


>   2) currently the physical model is implied when working with pools,
> because the pool is the root object and corresponds to a backend with a
> 1:1 mapping
>
>
>
>
>
> I think that the current logical model is fine with the exception that the
> two-way reference between vip and pool (vip<->pool) should be modified
> with only vip pointing to a pool (vip->pool), which allows reusing the pool
> with multiple vips.
>
>  Reusing pools by vips is not as simple as it seems.
>
> If those vips belong to 1 backend (which by itself requires the tenant to
> know about the backend) - that's no problem, but if they don't, then:
>
> 1) what 'status' attribute of the pool would mean?
>

Presumably if the pool is down for one back-end, it's down for all the
back-ends. But you're right that there might be cases (e.g. a network
connectivity problem) where a pool is down for one back-end but not
another. Maybe the question of pool status only makes sense in the context
of a specific listener?
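
In other words, something like this (a sketch with made-up names, not a
proposed schema):

    # If a pool can be reached through several listeners (possibly on
    # different backends), status arguably belongs on the association,
    # not on the pool itself.
    associations = [
        {"listener": "http-listener",  "pool": "pool-1", "status": "ACTIVE"},
        {"listener": "https-listener", "pool": "pool-1", "status": "DOWN"},
    ]

    def pool_status(pool_id):
        """A pool is ACTIVE only if it's up behind every listener."""
        statuses = [a["status"] for a in associations
                    if a["pool"] == pool_id]
        return ("ACTIVE" if statuses and
                all(s == "ACTIVE" for s in statuses) else "DEGRADED")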



>   2) how health monitors for the pool will be deployed? and what their
> statuses would mean?
>

To be frank, I don't understand why healthmonitors aren't just extra
attributes of the pool object. (Does anyone know of any case where having
multiple healthmonitors per pool makes sense?) Also, isn't asking the
status of a healthmonitor functionally equivalent to asking the status of
the pool?
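
That is, something like this (sketch only; field names are made up and not
a proposed schema):

    # Health monitor settings folded into the pool, instead of N
    # separate healthmonitor objects associated with it.
    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        lb_method: str = "ROUND_ROBIN"
        monitor_type: str = "HTTP"      # PING | TCP | HTTP | HTTPS
        monitor_url_path: str = "/"
        monitor_interval_s: int = 5
        monitor_timeout_s: int = 3
        monitor_max_retries: int = 3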


>   3) what pool statistics would mean?
>

Aah, this is trickier. Again, maybe this only makes sense in the context
of a specific listener? But yes, the problem of gathering aggregate
statistics for a single pool spread across many back-ends becomes a lot
more complicated.
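
Just to spell out where it gets messy (a sketch; the counter names are
made up):

    # Aggregating one pool's statistics across several backends: every
    # per-backend counter has to be collected and summed, and a single
    # unreachable backend turns the total into an underestimate.
    def aggregate_pool_stats(per_backend_stats):
        totals = {"bytes_in": 0, "bytes_out": 0, "active_connections": 0}
        incomplete = False
        for stats in per_backend_stats:
            if stats is None:          # backend didn't report
                incomplete = True
                continue
            for key in totals:
                totals[key] += stats.get(key, 0)
        totals["incomplete"] = incomplete
        return totals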


>   4) If the same pool is used on
>
>
>
> To be able to preserve the existing meaningful healthmonitors, members,
> and statistics APIs, we will need to create associations for everything,
> or just change the API in a backward-incompatible way.
>
> My opinion is that it makes sense to limit such an ability (reusing pools
> by vips deployed on different backends) in favor of simpler code; IMO it's
> not really a big deal. A pool is lightweight enough not to share it as an
> object.
>

Agreed--  not having re-usable pools just means the tenant needs to do more
housekeeping in a larger environment.

Honestly...  even among our largest clients, we rarely see them re-using
pools, or having pools associated with multiple listeners. So, this might
be a moot point for the 90% use case anyway.

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807