[openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

Eugene Nikanorov enikanorov at mirantis.com
Wed Mar 26 17:57:46 UTC 2014


Let's discuss it on weekly LBaaS meeting tomorrow.

Thanks,
Eugene.


On Wed, Mar 26, 2014 at 7:03 PM, Susanne Balle <sleipnir012 at gmail.com>wrote:

> Jorge: I agree with you around "ensuring different drivers support the
> API contract" and the no vendor lock-in.
>
> All: How do we move this forward? It sounds like we have agreement that
> this is worth investigating.
>
> How do we move forward with the investigation and how to best architect
> this? Is this a topic for tomorrow's LBaaS weekly meeting? or should I
> schedule a hang-out meeting for us to discuss?
>
> Susanne
>
>
>
>
> On Tue, Mar 25, 2014 at 6:16 PM, Jorge Miramontes <
> jorge.miramontes at rackspace.com> wrote:
>
>>   Hey Susanne,
>>
>>  I think it makes sense to group drivers by each LB software. For
>> example, there would be a driver for HAProxy, one for Citrix's Netscalar,
>> one for Riverbed's Stingray, etc. One important aspect about Openstack that
>> I don't want us to forget though is that a tenant should be able to move
>> between cloud providers at their own will (no vendor lock-in). The API
>> contract is what allows this. The challenging aspect is ensuring different
>> drivers support the API contract in the same way. What components should
>> drivers share is also and interesting conversation to be had.
>>
>>  Cheers,
>> --Jorge
>>
>>   From: Susanne Balle <sleipnir012 at gmail.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Date: Tuesday, March 25, 2014 6:59 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org>
>>
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
>> "managed services"
>>
>>   John, Brandon,
>>
>> I agree that we cannot have a multitude of drivers doing the same thing
>> or close to because then we end-up in the same situation as we are today
>> where we have duplicate effort and technical debt.
>>
>>  The goal would be here to be able to built a framework around the
>> drivers that would allow for resiliency, failover, etc...
>>
>>  If the differentiators are in higher level APIs then we can have one a
>> single driver (in the best case) for each software LB e.g. HA proxy, nginx,
>> etc.
>>
>>  Thoughts?
>>
>>  Susanne
>>
>>
>> On Mon, Mar 24, 2014 at 11:26 PM, John Dewey <john at dewey.ws> wrote:
>>
>>> I have a similar concern.  The underlying driver may support different
>>> functionality, but the differentiators need exposed through the top level
>>> API.
>>>
>>>  I see the SSL work is well underway, and I am in the process of
>>> defining L7 scripting requirements.  However, I will definitely need L7
>>> scripting prior to the API being defined.
>>> Is this where vendor extensions come into play?  I kinda like the route
>>> the Ironic guy safe taking with a "vendor passthru" API.
>>>
>>>  John
>>>
>>> On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:
>>>
>>>   Creating a separate driver for every new need brings up a concern I
>>> have had.  If we are to implement a separate driver for every need then the
>>> permutations are endless and may cause a lot drivers and technical debt.
>>>  If someone wants an ha-haproxy driver then great.  What if they want it to
>>> be scalable and/or HA, is there supposed to be scalable-ha-haproxy,
>>> scalable-haproxy, and ha-haproxy drivers?  Then what if instead of doing
>>> spinning up processes on the host machine we want a nova VM or a container
>>> to house it?  As you can see the permutations will begin to grow
>>> exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
>>> worrying too much about it because hopefully most cloud operators will use
>>> the same driver that addresses those basic needs, but worst case scenarios
>>> we have a ton of drivers that do a lot of similar things but are just
>>> different enough to warrant a separate driver.
>>>  ------------------------------
>>> *From:* Susanne Balle [sleipnir012 at gmail.com]
>>> *Sent:* Monday, March 24, 2014 4:59 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra
>>> and "managed services"
>>>
>>>   Eugene,
>>>
>>>  Thanks for your comments,
>>>
>>>  See inline:
>>>
>>>  Susanne
>>>
>>>
>>>  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov <
>>> enikanorov at mirantis.com> wrote:
>>>
>>>  Hi Susanne,
>>>
>>>  a couple of comments inline:
>>>
>>>
>>>
>>>
>>> We would like to discuss adding the concept of "managed services" to the
>>> Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
>>> proxy. The latter could be a second approach for some of the software
>>> load-balancers e.g. HA proxy since I am not sure that it makes sense to
>>> deploy Libra within Devstack on a single VM.
>>>
>>>
>>>
>>> Currently users would have to deal with HA, resiliency, monitoring and
>>> managing their load-balancers themselves.  As a service provider we are
>>> taking a more managed service approach allowing our customers to consider
>>> the LB as a black box and the service manages the resiliency, HA,
>>> monitoring, etc. for them.
>>>
>>>
>>>
>>>   As far as I understand these two abstracts, you're talking about
>>> making LBaaS API more high-level than it is right now.
>>> I think that was not on our roadmap because another project (Heat) is
>>> taking care of more abstracted service.
>>> The LBaaS goal is to provide vendor-agnostic management of load
>>> balancing capabilities and quite fine-grained level.
>>> Any higher level APIs/tools can be built on top of that, but are out of
>>> LBaaS scope.
>>>
>>>
>>>  [Susanne] Yes. Libra currently has some internal APIs that get
>>> triggered when an action needs to happen. We would like similar
>>> functionality in Neutron LBaaS so the user doesn't have to manage the
>>> load-balancers but can consider them as black-boxes. Would it make sense to
>>> maybe consider integrating Neutron LBaaS with heat to support some of these
>>> use cases?
>>>
>>>
>>>
>>>
>>> We like where Neutron LBaaS is going with regards to L7 policies and SSL
>>> termination support which Libra is not currently supporting and want to
>>> take advantage of the best in each project.
>>>
>>> We have a draft on how we could make Neutron LBaaS take advantage of
>>> Libra in the back-end.
>>>
>>> The details are available at:
>>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft
>>>
>>>
>>>  I looked at the proposal briefly, it makes sense to me. Also it seems
>>> to be the simplest way of integrating LBaaS and Libra - create a Libra
>>> driver for LBaaS.
>>>
>>>
>>>  [Susanne] Yes that would be the short team solution to get us where we
>>> need to be. But We do not want to continue to enhance Libra. We would like
>>> move to Neutron LBaaS and not have duplicate efforts.
>>>
>>>
>>>
>>>
>>>  While this would allow us to fill a gap short term we would like to
>>> discuss the longer term strategy since we believe that everybody would
>>> benefit from having such "managed services" artifacts built directly into
>>> Neutron LBaaS.
>>>
>>>  I'm not sure about building it directly into LBaaS, although we can
>>> discuss it.
>>>
>>>
>>>  [Susanne] The idea behind the "managed services" aspect/extensions
>>> would be reusable for other software LB.
>>>
>>>
>>>   For instance, HA is definitely on roadmap and everybody seems to
>>> agree that HA should not require user/tenant to do any specific
>>> configuration other than choosing HA capability of LBaaS service. So as far
>>> as I see it, requirements for HA in LBaaS look very similar to requirements
>>> for HA in Libra.
>>>
>>>
>>>  [Susanne] Yes. Libra works well for us in the public cloud but we
>>> would like to move to Neutron LBaaS and not have duplicate efforts: Libra
>>> and Neutron LBaaS. We were hoping to be able to take the best of Libra and
>>> add it to Neutron LBaaS and help shape Neutron LBaaS to fit a wider
>>> spectrum of customers/users.
>>>
>>>
>>>
>>> There are blueprints on high-availability for the HA proxy software
>>> load-balancer and we would like to suggest implementations that fit our
>>> needs as services providers.
>>>
>>>
>>>
>>> One example where the managed service approach for the HA proxy load
>>> balancer is different from the current Neutron LBaaS roadmap is around HA
>>> and resiliency. The 2 LB HA setup proposed (
>>> https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
>>> appropriate for service providers in that users would have to pay for the
>>> extra load-balancer even though it is not being actively used.
>>>
>>>  One important idea of the HA is that its implementation is
>>> vendor-specific, so each vendor or cloud provider can implement it in the
>>> way that suits their needs. So I don't see why particular HA solution for
>>> haproxy should be considered as a common among other vendors/providers.
>>>
>>>
>>>  [Susanne] Are you saying that we should create a driver that would be
>>> a peer to the current loadbalancer/ ha-proxy driver? So for example
>>>  loadbalancer/managed-ha-proxy (please don't get hung-up on the name I
>>> picked) would be a driver we would implement to get our interaction with a
>>> pool of stand-by load-and preconfigured load balancers instead of the 2 LB
>>> HA servers? And it would be part of the Neutron LBaaS branch?
>>>
>>>
>>>
>>> I am assuming that blueprints need to be approved before the feature is
>>> accepted into a release. Then the feature is implemented and accepted by
>>> the core members into the main repo. What the process would we have to
>>> follow if we wanted to get such a driver into Neutron LBaaS? It is hard to
>>> imagine that even thought it would be a "vendor-specific ha-proxy" driver
>>> that people in the Neutron LBaaS team wouldn't want to have a say
>>> around how it is architected.
>>>
>>>
>>>   An alternative approach is to implement resiliency using a pool of
>>> stand-by load-and preconfigured load balancers own by e.g. LBaaS tenant and
>>> assign load-balancers from the pool to tenants environments. We currently
>>> are using this approach in the public cloud with Libra and it takes
>>> approximately 80 seconds for the service to decide that a load-balancer has
>>> failed, swap the floating ip and update the db, etc. and have a new LB
>>> running.
>>>
>>>
>>>
>>>   That for sure can be implemented. I only would recommend to implement
>>> such kind of management system out of Neutron/LBaaS tree, e.g. to only have
>>> client within Libra driver that will communicate with the management
>>> backend.
>>>
>>>
>>>  [Susanne] Again this would only be a short term solution since as we
>>> move forward and want to contribute new features it would result in
>>> duplication of efforts because the features might need to be done in Libra
>>> and not Neutron LBaaS.
>>>
>>>  In the longer term I would like to discuss how we make Neutron LBaaS
>>> have features that are a little friendlier towards service providers' use
>>> cases. It is very important to us that services like the LBaaS service is
>>> viewed as a managed service e.g. black-box to our customers.
>>>
>>>
>>>
>>>  Thanks,
>>> Eugene.
>>>
>>>
>>>
>>> Regards Susanne
>>>
>>> -------------------------------------------
>>>
>>> Susanne M. Balle
>>> Hewlett-Packard
>>> HP Cloud Services
>>>
>>> Please consider the environment before printing this email.
>>>
>>>
>>>
>>>  _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>     _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140326/96dd0b1d/attachment.html>


More information about the OpenStack-dev mailing list