[openstack-dev] OpenStack-dev Digest, Vol 24, Issue 56
ms sh
mssh2001 at live.com
Fri Apr 18 11:26:56 UTC 2014
发自我的 iPad
在 2014-4-18,上午10:35,openstack-dev-request at lists.openstack.org 写道:
> Send OpenStack-dev mailing list submissions to
> openstack-dev at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> or, via email, send a message with subject or body 'help' to
> openstack-dev-request at lists.openstack.org
>
> You can reach the person managing the list at
> openstack-dev-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-dev digest..."
>
>
> Today's Topics:
>
> 1. oslo removal of use_tpool conf option (Chris Behrens)
> 2. Re: [Neutron][LBaas] "Single call" API discussion (Brandon Logan)
> 3. Re: [Neutron][LBaas] "Single call" API discussion (Carlos Garza)
> 4. [Neutron][LBaaS] HA functionality discussion (Stephen Balukoff)
> 5. Re: oslo removal of use_tpool conf option (Michael Still)
> 6. Re: oslo removal of use_tpool conf option (Joshua Harlow)
> 7. Re: [neutron] [group-based-policy] Moving the meeting time
> (Sumit Naiksatam)
> 8. Re: [heat][mistral] Mistral agenda item for Heat community
> meeting on Apr 17 (Zane Bitter)
> 9. Re: oslo removal of use_tpool conf option (Chris Behrens)
> 10. Re: [Neutron][LBaaS] Requirements and API revision progress
> (Stephen Balukoff)
> 11. Re: [heat] [heat-templates] [qa] [tempest] Questions about
> images (Mike Spreitzer)
> 12. Re: [Neutron][LBaaS] HA functionality discussion (Susanne Balle)
> 13. Re: oslo removal of use_tpool conf option (Davanum Srinivas)
> 14. Re: [Neutron][LBaaS] HA functionality discussion (Carlos Garza)
> 15. Re: [Neutron][LBaaS] Requirements and API revision progress
> (Brandon Logan)
> 16. Re: [Neutron][LBaas] "Single call" API discussion
> (Stephen Balukoff)
> 17. Re: oslo removal of use_tpool conf option (Joshua Harlow)
> 18. [QA] [Tempest] Spreadsheet for Nova API tests (Kenichi Oomichi)
> 19. Re: [Neutron][LBaaS] Requirements and API revision progress
> (Stephen Balukoff)
> 20. Re: [Neutron][LBaas] "Single call" API discussion (Brandon Logan)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 17 Apr 2014 15:20:36 -0700
> From: Chris Behrens <cbehrens at codestud.com>
> To: OpenStack Development Mailing List
> <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID: <B5B1E92B-4B67-4A32-952D-3ECF5D243F0B at codestud.com>
> Content-Type: text/plain; charset="windows-1252"
>
>
> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>
> In December, oslo apparently removed the code for ?use_tpool? which allows you to run DB calls in Threads because it was ?eventlet specific?. I noticed this when a review was posted to nova to add the option within nova itself:
>
> https://review.openstack.org/#/c/59760/
>
> I objected to this and asked (more demanded) for this to be added back into oslo. It was not. What I did not realize when I was reviewing this nova patch, was that nova had already synced oslo?s change. And now we?ve released Icehouse with a conf option missing that existed in Havana. Whatever projects were using oslo?s DB API code has had this option disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t know.
>
> Some sort of process broke down here. nova uses oslo. And oslo removed something nova uses without deprecating or merging an alternative into nova first. How I believe this should have worked:
>
> 1) All projects using oslo?s DB API code should have merged an alternative first.
> 2) Remove code from oslo.
> 3) Then sync oslo.
>
> What do we do now? I guess we?ll have to back port the removed code into nova. I don?t know about other projects.
>
> NOTE: Very few people are probably using this, because it doesn?t work without a patched eventlet. However, Rackspace happens to be one that does. And anyone waiting on a new eventlet to be released such that they could use this with Icehouse is currently out of luck.
>
> - Chris
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/895b6691/attachment-0001.html>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 17 Apr 2014 17:46:34 -0500
> From: Brandon Logan <brandon.logan at rackspace.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaas] "Single call" API
> discussion
> Message-ID: <535059CA.8060503 at rackspace.com>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Stephen,
> I have responded to your questions below.
>
> On 04/17/2014 01:02 PM, Stephen Balukoff wrote:
>> Howdy folks!
>>
>> Based on this morning's IRC meeting, it seems to me there's some
>> contention and confusion over the need for "single call" functionality
>> for load balanced services in the new API being discussed. This is
>> what I understand:
>>
>> * Those advocating "single call" are arguing that this simplifies the
>> API for users, and that it more closely reflects the users' experience
>> with other load balancing products. They don't want to see this
>> functionality necessarily delegated to an orchestration layer (Heat),
>> because coordinating how this works across two OpenStack projects is
>> unlikely to see success (ie. it's hard enough making progress with
>> just one project). I get the impression that people advocating for
>> this feel that their current users would not likely make the leap to
>> Neutron LBaaS unless some kind of functionality or workflow is
>> preserved that is no more complicated than what they currently have to do.
> Another reason, which I've mentioned many times before and keeps getting
> ignored, is because the more primitives you add the longer it will take
> to provision a load balancer. Even if we relied on the orchestration
> layer to build out all the primitives, it still will take much more time
> to provision a load balancer than a single create call provided by the
> API. Each request and response has an inherent time to process. Many
> primitives will also have an inherent build time. Combine this in an
> environment that becomes more and more dense, build times will become
> very unfriendly to end users whether they are using the API directly,
> going through a UI, or going through an orchestration layer. This
> industry is always trying to improve build/provisioning times and there
> are no reasons why we shouldn't try to achieve the same goal.
>>
>> * Those (mostly) against the idea are interested in seeing the API
>> provide primitives and delegating "higher level" single-call stuff to
>> Heat or some other orchestration layer. There was also the implication
>> that if "single-call" is supported, it ought to support both simple
>> and advanced set-ups in that single call. Further, I sense concern
>> that if there are multiple ways to accomplish the same thing supported
>> in the API, this redundancy breeds complication as more features are
>> added, and in developing test coverage. And existing Neutron APIs tend
>> to expose only primitives. I get the impression that people against
>> the idea could be convinced if more compelling reasons were
>> illustrated for supporting single-call, perhaps other than "we don't
>> want to change the way it's done in our environment right now."
> I completely disagree with "we dont want to change the way it's done in
> our environment right now". Our proposal has changed the way our
> current API works right now. We do not have the notion of primitives in
> our current API and our proposal included the ability to construct a
> load balancer with primitives individually. We kept that in so that
> those operators and users who do like constructing a load balancer that
> way can continue doing so. What we are asking for is to keep our users
> happy when we do deploy this in a production environment and maintain a
> single create load balancer API call.
>>
>> I've mostly stayed out of this debate because our solution as used by
>> our customers presently isn't "single-call" and I don't really
>> understand the requirements around this.
>>
>> So! I would love it if some of you could fill me in on this,
>> especially since I'm working on a revision of the proposed API.
>> Specifically, what I'm looking for is answers to the following questions:
>>
>> 1. Could you please explain what you understand single-call API
>> functionality to be?
> Single-call API functionality is a call that supports adding multiple
> features to an entity (load balancer in this case) in one API request.
> Whether this supports all features of a load balancer or a subset is up
> for debate. I prefer all features to be supported. Yes it adds
> complexity, but complexity is always introduced by improving the end
> user experience and I hope a good user experience is a goal.
>>
>> 2. Could you describe the simplest use case that uses single-call API
>> in your environment right now? Please be very specific-- ideally, a
>> couple examples of specific CLI commands a user might run, or API
>> (along with specific configuration data) would be great.
> http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Create_Load_Balancer-d1e1635.html
>
> This page has many different ways to configure a load balancer with one
> call. It ranges from a simple load balancer to a load balancer with a
> much more complicated configuration. Generally, if any of those
> features are allowed on a load balancer then it is supported through the
> single call.
>>
>> 3. Could you describe the most complicated use case that your
>> single-call API supports? Again, please be very specific here.
> Same data can be derived from the link above.
>>
>> 4. What percentage of your customer base are used to using single-call
>> functionality, and what percentage are used to manipulating primitives?
> 100% but just like others it is the only way to create a load balancer
> in our API. So this data doesn't mean much.
>
> Oh! One other question:
>
> 5. Should "single-call" stuff work for the lifecycle of a load
> balancing service? That is to say, should "delete" functionality also
> clean up all primitives associated with the service?
>
> How we were thinking was that it would just "detach" the primitives from
> the load balancer but keep them available for association with another
> load balancer. A user would only be able to actually delete a primitive
> if it went through the root primitive resource (i.e. /pools, /vips).
> However, this is definitely up for debate and there are pros and cons to
> doing it both ways. If the system completely deletes the primitives on
> the deletion of the load balancer, then the system has to handle when
> one of those primitives is being shared with another load balancer.
>
>>
>> Thanks!
>> Stephen
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/73772b91/attachment-0001.html>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 17 Apr 2014 22:46:47 +0000
> From: Carlos Garza <carlos.garza at rackspace.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaas] "Single call" API
> discussion
> Message-ID: <4F3BBF78-CE8B-4F24-BCEB-2A90BE4C3C5E at rackspace.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> On Apr 17, 2014, at 2:11 PM, Stephen Balukoff <sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>>
> wrote:
>
> Oh! One other question:
>
> 5. Should "single-call" stuff work for the lifecycle of a load balancing service? That is to say, should "delete" functionality also clean up all primitives associated with the service?
>
>
> We were advocating leaving the primitives behind for the user to delete out of respect for shared objects.
> The proposal mentions this too.
>
>
> On Thu, Apr 17, 2014 at 11:44 AM, Stephen Balukoff <sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>> wrote:
> Hi Sri,
>
> Yes, the meeting minutes & etc. are all available here, usually a few minutes after the meeting is over: http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/
>
> (You are also, of course, welcome to join!)
>
> Stephen
>
>
> On Thu, Apr 17, 2014 at 11:34 AM, Sri <sri.networkid at gmail.com<mailto:sri.networkid at gmail.com>> wrote:
> hello Stephen,
>
>
> I am interested in LBaaS and want to know if we post the weekly meeting's
> chat transcripts online?
> or may be update an etherpad?
>
>
> Can you please share the links?
>
> thanks,
> SriD
>
>
>
> --
> View this message in context: http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
> Sent from the Developer mailing list archive at Nabble.com<http://Nabble.com>.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807<tel:%28800%29613-4305%20x807>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/6927889b/attachment-0001.html>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 17 Apr 2014 15:49:07 -0700
> From: Stephen Balukoff <sbalukoff at bluebox.net>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Neutron][LBaaS] HA functionality discussion
> Message-ID:
> <CAAGw+Zrmnv8_74K6KHGa48VU_TJHtETKVFh1khDxni0wpVqdxg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Heyas, y'all!
>
> So, given both the prioritization and usage info on HA functionality for
> Neutron LBaaS here:
> https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
>
> It's clear that:
>
> A. HA seems to be a top priority for most operators
> B. Almost all load balancer functionality deployed is done so in an
> Active/Standby HA configuration
>
> I know there's been some round-about discussion about this on the list in
> the past (which usually got stymied in "implementation details"
> disagreements), but it seems to me that with so many players putting a high
> priority on HA functionality, this is something we need to discuss and
> address.
>
> This is also apropos, as we're talking about doing a major revision of the
> API, and it probably makes sense to seriously consider if or how HA-related
> stuff should make it into the API. I'm of the opinion that almost all the
> HA stuff should be hidden from the user/tenant, but that the admin/operator
> at the very least is going to need to have some visibility into HA-related
> functionality. The hope here is to discover what things make sense to have
> as a "least common denominator" and what will have to be hidden behind a
> driver-specific implementation.
>
> I certainly have a pretty good idea how HA stuff works at our organization,
> but I have almost no visibility into how this is done elsewhere, leastwise
> not enough detail to know what makes sense to write API controls for.
>
> So! Since gathering data about actual usage seems to have worked pretty
> well before, I'd like to try that again. Yes, I'm going to be asking about
> implementation details, but this is with the hope of discovering any "least
> common denominator" factors which make sense to build API around.
>
> For the purposes of this document, when I say "load balancer devices" I
> mean either physical or virtual appliances, or software executing on a host
> somewhere that actually does the load balancing. It need not directly
> correspond with anything physical... but probably does. :P
>
> And... all of these questions are meant to be interpreted from the
> perspective of the cloud operator.
>
> Here's what I'm looking to learn from those of you who are allowed to share
> this data:
>
> 1. Are your load balancer devices shared between customers / tenants, not
> shared, or some of both?
>
> 1a. If shared, what is your strategy to avoid or deal with collisions of
> customer rfc1918 address space on back-end networks? (For example, I know
> of no load balancer device that can balance traffic for both customer A and
> customer B if both are using the 10.0.0.0/24 subnet for their back-end
> networks containing the nodes to be balanced, unless an extra layer of
> NATing is happening somewhere.)
>
> 2. What kinds of metrics do you use in determining load balancing capacity?
>
> 3. Do you operate with a pool of unused load balancer device capacity
> (which a cloud OS would need to keep track of), or do you spin up new
> capacity (in the form of virtual servers, presumably) on the fly?
>
> 3a. If you're operating with a availability pool, can you describe how new
> load balancer devices are added to your availability pool? Specifically,
> are there any steps in the process that must be manually performed (ie. so
> no API could help with this)?
>
> 4. How are new devices 'registered' with the cloud OS? How are they removed
> or replaced?
>
> 5. What kind of visibility do you (or would you) allow your user base to
> see into the HA-related aspects of your load balancing services?
>
> 6. What kind of functionality and visibility do you need into the
> operations of your load balancer devices in order to maintain your
> services, troubleshoot, etc.? Specifically, are you managing the
> infrastructure outside the purview of the cloud OS? Are there certain
> aspects which would be easier to manage if done within the purview of the
> cloud OS?
>
> 7. What kind of network topology is used when deploying load balancing
> functionality? (ie. do your load balancer devices live inside or outside
> customer firewalls, directly on tenant networks? Are you using layer-3
> routing? etc.)
>
> 8. Is there any other data you can share which would be useful in
> considering features of the API that only cloud operators would be able to
> perform?
>
>
> And since we're one of these operators, here are my responses:
>
> 1. We have both shared load balancer devices and private load balancer
> devices.
>
> 1a. Our shared load balancers live outside customer firewalls, and we use
> IPv6 to reach individual servers behind the firewalls "directly." We have
> followed a careful deployment strategy across all our networks so that IPv6
> addresses between tenants do not overlap.
>
> 2. The most useful ones for us are "number of appliances deployed" and
> "number and type of load balancing services deployed" though we also pay
> attention to:
> * Load average per "active" appliance
> * Per appliance number and type of load balancing services deployed
> * Per appliance bandwidth consumption
> * Per appliance connections / sec
> * Per appliance SSL connections / sec
>
> Since our devices are software appliances running on linux we also track
> OS-level metrics as well, though these aren't used directly in the load
> balancing features in our cloud OS.
>
> 3. We operate with an availability pool that our current cloud OS pays
> attention to.
>
> 3a. Since the devices we use correspond to physical hardware this must of
> course be rack-and-stacked by a datacenter technician, who also does
> initial configuration of these devices.
>
> 4. All of our load balancers are deployed in an active / standby
> configuration. Two machines which make up an active / standby pair are
> registered with the cloud OS as a single unit that we call a "load balancer
> cluster." Our availability pool consists of a whole bunch of these load
> balancer clusters. (The devices themselves are registered individually at
> the time the cluster object is created in our database.) There are a couple
> manual steps in this process (currently handled by the datacenter techs who
> do the racking and stacking), but these could be automated via API. In
> fact, as we move to virtual appliances with these, we expect the entire
> process to become automated via API (first cluster primitive is created,
> and then "load balancer device objects" get attached to it, then the
> cluster gets added to our availability pool.)
>
> Removal of a "cluster" object is handled by first evacuating any customer
> services off the cluster, then destroying the load balancer device objects,
> then the cluster object. Replacement of a single load balancer device
> entails removing the dead device, adding the new one, synchronizing
> configuration data to it, and starting services.
>
> 5. At the present time, all our load balancing services are deployed in an
> active / standby HA configuration, so the user has no choice or visibility
> into any HA details. As we move to Neutron LBaaS, we would like to give
> users the option of deploying non-HA load balancing capacity. Therefore,
> the only visibility we want the user to get is:
>
> * Choose whether a given load balancing service should be deployed in an HA
> configuration ("flavor" functionality could handle this)
> * See whether a running load balancing service is deployed in an HA
> configuration (and see the "hint" for which physical or virtual device(s)
> it's deployed on)
> * Give a "hint" as to which device(s) a new load balancing service should
> be deployed on (ie. for customers looking to deploy a bunch of test / QA /
> etc. environments on the same device(s) to reduce costs).
>
> Note that the "hint" above corresponds to the "load balancing cluster"
> alluded to above, not necessarily any specific physical or virtual device.
> This means we retain the ability to switch out the underlying hardware
> powering a given service at any time.
>
> Users may also see usage data, of course, but that's more of a generic
> stats / billing function (which doesn't have to do with HA at all, really).
>
> 6. We need to see the status of all our load balancing devices, including
> availability, current role (active or standby), and all the metrics listed
> under 2 above. Some of this data is used for creating trend graphs and
> business metrics, so being able to query the current metrics at any time
> via API is important. It would also be very handy to query specific device
> info (like revision of software on it, etc.) Our current cloud OS does all
> this for us, and having Neutron LBaaS provide visibility into all of this
> as well would be ideal. We do almost no management of our load balancing
> services outside the purview of our current cloud OS.
>
> 7. Shared load balancers must live outside customer firewalls, private load
> balancers typically live within customer firewalls (sometimes in a DMZ). In
> any case, we use layer-3 routing (distributed using routing protocols on
> our core networking gear and static routes on customer firewalls) to route
> requests for "service IPs" to the "highly available routing IPs" which live
> on the load balancers themselves. (When a fail-over happens, at a low
> level, what's really going on is the "highly available routing IPs" shift
> from the active to standby load balancer.)
>
> We have contemplated using layer-2 topology (ie. directly connected on the
> same vlan / broadcast domain) and are building a version of our appliance
> which can operate in this way, potentially reducing the reliance on layer-3
> routes (and making things more friendly for the OpenStack environment,
> which we understand probably isn't ready for layer-3 routing just yet).
>
> 8. I wrote this survey, so none come to mind for me. :)
>
> Stephen
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/ff709b4c/attachment-0001.html>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 18 Apr 2014 09:13:55 +1000
> From: Michael Still <mikal at stillhq.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID:
> <CAEd1pt5OKzRLt0mtJ5H5oENTJwJ1CX5otnzZrW6zUOVbHKScAQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> It looks to me like this was removed in oslo in commit
> a33989e7a2737af757648099cc1af6c642b6e016, which was synced with nova
> in 605749ca12af969ac122008b4fa14904df68caf7 (however, I can't see the
> change being listed in the commit message for nova, which I assume is
> a process failure). That change merged into nova on March 6.
>
> I think the only option we're left with for icehouse is a backport fix for this.
>
> Michael
>
> On Fri, Apr 18, 2014 at 8:20 AM, Chris Behrens <cbehrens at codestud.com> wrote:
>>
>> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>>
>> In December, oslo apparently removed the code for ?use_tpool? which allows
>> you to run DB calls in Threads because it was ?eventlet specific?. I noticed
>> this when a review was posted to nova to add the option within nova itself:
>>
>> https://review.openstack.org/#/c/59760/
>>
>> I objected to this and asked (more demanded) for this to be added back into
>> oslo. It was not. What I did not realize when I was reviewing this nova
>> patch, was that nova had already synced oslo?s change. And now we?ve
>> released Icehouse with a conf option missing that existed in Havana.
>> Whatever projects were using oslo?s DB API code has had this option
>> disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t
>> know.
>>
>> Some sort of process broke down here. nova uses oslo. And oslo removed
>> something nova uses without deprecating or merging an alternative into nova
>> first. How I believe this should have worked:
>>
>> 1) All projects using oslo?s DB API code should have merged an alternative
>> first.
>> 2) Remove code from oslo.
>> 3) Then sync oslo.
>>
>> What do we do now? I guess we?ll have to back port the removed code into
>> nova. I don?t know about other projects.
>>
>> NOTE: Very few people are probably using this, because it doesn?t work
>> without a patched eventlet. However, Rackspace happens to be one that does.
>> And anyone waiting on a new eventlet to be released such that they could use
>> this with Icehouse is currently out of luck.
>>
>> - Chris
>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Rackspace Australia
>
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 17 Apr 2014 23:26:16 +0000
> From: Joshua Harlow <harlowja at yahoo-inc.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>, Chris Behrens
> <cbehrens at codestud.com>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID: <CF75AEA6.5C808%harlowja at yahoo-inc.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Just an honest question (no negativity intended I swear!).
>
> If a configuration option exists and only works with a patched eventlet why is that option an option to begin with? (I understand the reason for the patch, don't get me wrong).
>
> Most users would not be able to use such a configuration since they do not have this patched eventlet (I assume a newer version of eventlet someday in the future will have this patch integrated in it?) so although I understand the frustration around this I don't understand why it would be an option in the first place. An aside, if the only way to use this option is via a non-standard eventlet then how is this option tested in the community, aka outside of said company?
>
> An example:
>
> If yahoo has some patched kernel A that requires an XYZ config turned on in openstack and the only way to take advantage of kernel A is with XYZ config 'on', then it seems like that?s a yahoo only patch that is not testable and useable for others, even if patched kernel A is somewhere on github it's still imho not something that should be a option in the community (anyone can throw stuff up on github and then say I need XYZ config to use it).
>
> To me non-standard patches that require XYZ config in openstack shouldn't be part of the standard openstack, no matter the company. If patch A is in the mainline kernel (or other mainline library), then sure it's fair game.
>
> -Josh
>
> From: Chris Behrens <cbehrens at codestud.com<mailto:cbehrens at codestud.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Date: Thursday, April 17, 2014 at 3:20 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: [openstack-dev] oslo removal of use_tpool conf option
>
>
> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>
> In December, oslo apparently removed the code for ?use_tpool? which allows you to run DB calls in Threads because it was ?eventlet specific?. I noticed this when a review was posted to nova to add the option within nova itself:
>
> https://review.openstack.org/#/c/59760/
>
> I objected to this and asked (more demanded) for this to be added back into oslo. It was not. What I did not realize when I was reviewing this nova patch, was that nova had already synced oslo?s change. And now we?ve released Icehouse with a conf option missing that existed in Havana. Whatever projects were using oslo?s DB API code has had this option disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t know.
>
> Some sort of process broke down here. nova uses oslo. And oslo removed something nova uses without deprecating or merging an alternative into nova first. How I believe this should have worked:
>
> 1) All projects using oslo?s DB API code should have merged an alternative first.
> 2) Remove code from oslo.
> 3) Then sync oslo.
>
> What do we do now? I guess we?ll have to back port the removed code into nova. I don?t know about other projects.
>
> NOTE: Very few people are probably using this, because it doesn?t work without a patched eventlet. However, Rackspace happens to be one that does. And anyone waiting on a new eventlet to be released such that they could use this with Icehouse is currently out of luck.
>
> - Chris
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/19fdd177/attachment-0001.html>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 17 Apr 2014 16:30:28 -0700
> From: Sumit Naiksatam <sumitnaiksatam at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] [group-based-policy] Moving the
> meeting time
> Message-ID:
> <CAMWrLvhHBs0EMo4oVfnWV-qxR4f7MnqDa5VCy9zVboM_ni4a_Q at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> We realized it the hard way that we didn't have the entire one hour
> slot on -meeting-alt at 17:30 on Thursdays.
>
> So, we have to move this meeting again. Based on the opinion of those
> present in today's meeting, there was a consensus for the following
> time:
> Thursdays at 1800 UTC on #openstack-meeting-3
>
> See you there next week.
>
> Thanks,
> ~Sumit.
>
> On Mon, Apr 14, 2014 at 3:09 PM, Kyle Mestery <mestery at noironetworks.com> wrote:
>> Stephen:
>>
>> 1730 UTC is available. I've moved the meeting to that time, so
>> starting this week the meeting will be at the new time.
>>
>> https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting
>>
>> Thanks!
>> Kyle
>>
>>
>> On Thu, Apr 10, 2014 at 3:23 PM, Stephen Wong <s3wong at midokura.com> wrote:
>>> Hi Kyle,
>>>
>>> Is 1730UTC available on that channel? If so, and if it is OK with
>>> everyone, it would be great to have it at 1730 UTC instead (10:30am PDT /
>>> 1:30pm EDT, which would also be at the same time on a different day of the
>>> week as the advanced service meeting).
>>>
>>> Thanks,
>>> - Stephen
>>>
>>>
>>>
>>> On Thu, Apr 10, 2014 at 11:10 AM, Kyle Mestery <mestery at noironetworks.com>
>>> wrote:
>>>>
>>>> Per our meeting last week, I'd like to propose moving the weekly
>>>> Neutron GBP meeting to 1800UTC (11AM PDT / 2PM EDT) on Thursdays in
>>>> #openstack-meeting-3. If you're not ok with this timeslot, please
>>>> reply on this thread. If I don't hear any dissenters, I'll officially
>>>> move the meeting on the wiki and reply here in a few days.
>>>>
>>>> Thanks!
>>>> Kyle
>>>>
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ------------------------------
>
> Message: 8
> Date: Thu, 17 Apr 2014 19:44:12 -0400
> From: Zane Bitter <zbitter at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [heat][mistral] Mistral agenda item for
> Heat community meeting on Apr 17
> Message-ID: <5350674C.5040702 at redhat.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> On 17/04/14 00:34, Renat Akhmerov wrote:
>> Ooh, I confused the day of meeting :(. My apologies, I?m in a completely different timezone (for me it?s in the middle of night) so I strongly believed it was on a different day. I?ll be there next time.
>
> Yeah, it's really unfortunate that it falls right at midnight UTC,
> because it makes the dates really confusing :/ It's technically correct
> though, so it's hard to know what to do to make it less confusing.
>
> We ended up pretty short on time anyway, so it worked out well ;) I just
> added it to the agenda for next week, and hopefully that meeting time
> should be marginally more convenient for you anyway.
>
> cheers,
> Zane.
>
>
>
> ------------------------------
>
> Message: 9
> Date: Thu, 17 Apr 2014 16:59:39 -0700
> From: Chris Behrens <cbehrens at codestud.com>
> To: Joshua Harlow <harlowja at yahoo-inc.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID: <F5EDD13D-46CF-4DFD-8F54-1D7313E5E902 at codestud.com>
> Content-Type: text/plain; charset="windows-1252"
>
>
> On Apr 17, 2014, at 4:26 PM, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
>
>> Just an honest question (no negativity intended I swear!).
>>
>> If a configuration option exists and only works with a patched eventlet why is that option an option to begin with? (I understand the reason for the patch, don't get me wrong).
>
> Right, it?s a valid question. This feature has existed one way or another in nova for quite a while. Initially the implementation in nova was wrong. I did not know that eventlet was also broken at the time, although I discovered it in the process of fixing nova?s code. I chose to leave the feature because it?s something that we absolutely need long term, unless you really want to live with DB calls blocking the whole process. I know I don?t. Unfortunately the bug in eventlet is out of our control. (I made an attempt at fixing it, but it?s not 100%. Eventlet folks currently have an alternative up that may or may not work? but certainly is not in a release yet.) We have an outstanding bug on our side to track this, also.
>
> The below is comparing apples/oranges for me.
>
> - Chris
>
>
>> Most users would not be able to use such a configuration since they do not have this patched eventlet (I assume a newer version of eventlet someday in the future will have this patch integrated in it?) so although I understand the frustration around this I don't understand why it would be an option in the first place. An aside, if the only way to use this option is via a non-standard eventlet then how is this option tested in the community, aka outside of said company?
>>
>> An example:
>>
>> If yahoo has some patched kernel A that requires an XYZ config turned on in openstack and the only way to take advantage of kernel A is with XYZ config 'on', then it seems like that?s a yahoo only patch that is not testable and useable for others, even if patched kernel A is somewhere on github it's still imho not something that should be a option in the community (anyone can throw stuff up on github and then say I need XYZ config to use it).
>>
>> To me non-standard patches that require XYZ config in openstack shouldn't be part of the standard openstack, no matter the company. If patch A is in the mainline kernel (or other mainline library), then sure it's fair game.
>>
>> -Josh
>>
>> From: Chris Behrens <cbehrens at codestud.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>> Date: Thursday, April 17, 2014 at 3:20 PM
>> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>> Subject: [openstack-dev] oslo removal of use_tpool conf option
>>
>>>
>>> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>>>
>>> In December, oslo apparently removed the code for ?use_tpool? which allows you to run DB calls in Threads because it was ?eventlet specific?. I noticed this when a review was posted to nova to add the option within nova itself:
>>>
>>> https://review.openstack.org/#/c/59760/
>>>
>>> I objected to this and asked (more demanded) for this to be added back into oslo. It was not. What I did not realize when I was reviewing this nova patch, was that nova had already synced oslo?s change. And now we?ve released Icehouse with a conf option missing that existed in Havana. Whatever projects were using oslo?s DB API code has had this option disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t know.
>>>
>>> Some sort of process broke down here. nova uses oslo. And oslo removed something nova uses without deprecating or merging an alternative into nova first. How I believe this should have worked:
>>>
>>> 1) All projects using oslo?s DB API code should have merged an alternative first.
>>> 2) Remove code from oslo.
>>> 3) Then sync oslo.
>>>
>>> What do we do now? I guess we?ll have to back port the removed code into nova. I don?t know about other projects.
>>>
>>> NOTE: Very few people are probably using this, because it doesn?t work without a patched eventlet. However, Rackspace happens to be one that does. And anyone waiting on a new eventlet to be released such that they could use this with Icehouse is currently out of luck.
>>>
>>> - Chris
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/cfd6d548/attachment-0001.html>
>
> ------------------------------
>
> Message: 10
> Date: Thu, 17 Apr 2014 17:03:31 -0700
> From: Stephen Balukoff <sbalukoff at bluebox.net>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements and API
> revision progress
> Message-ID:
> <CAAGw+ZqFe1MSub3j7kXHyPkrpM6pKRJotv96kELPOPZZ7sPGPw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Brandon!
>
> Per the meeting this morning, I seem to recall you were looking to have me
> elaborate on why the term 'load balancer' as used in your API proposal is
> significantly different from the term 'load balancer' as used in the
> glossary at: https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary
>
> As promised, here's my elaboration on that:
>
> The glossary above states: "An object that represent a logical load
> balancer that may have multiple resources such as Vips, Pools,
> Members, etc.Loadbalancer
> is a root object in the meaning described above." and references the
> diagram here:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#Loadbalancer_instance_solution
>
> On that diagram, it's clear that VIPs, & etc. are subordinate objects to a
> load balancer. What's more, attributes like 'protocol' and 'port' are not
> part of the load balancer object in that diagram (they're part of a 'VIP'
> in one proposed version, and part of a 'Listener' in the others).
>
> In your proposal, you state "only one port and one protocol per load
> balancer," and then later (on page 9 under "GET /vips") you show that a vip
> may have many load balancers associated with it. So clearly, "load
> balancer" the way you're using it is subordinate to a VIP. So in the
> glossary, it sounds like the object which has a single port and protocol
> associated with it that is subordinate to a VIP: A listener.
>
> Now, I don't really care if y'all decide to re-define "load balancer" from
> what is in the glossary so long as you do define it clearly in the
> proposal. (If we go with your proposal, it would then make sense to update
> the glossary accordingly.) Mostly, I'm just trying to avoid confusion
> because it's exactly these kinds of misunderstandings which have stymied
> discussion and progress in the past, eh.
>
> Also-- I can guess where the confusion comes from: I'm guessing most
> customers refer to "a service which listens on a tcp or udp port,
> understands a specific protocol, and forwards data from the connecting
> client to some back-end server which actually services the request" as a
> "load balancer." It's entirely possible that in the glossary and in
> previous discussions we've been mis-using the term (like we have with VIP).
> Personally, I suspect it's an overloaded term that in use in our industry
> means different things depending on context (and is probably often mis-used
> by people who don't understand what load balancing actually is). Again, I
> care less about what specific terms we decide on so long as we define them
> so that everyone can be on the same page and know what we're talking about.
> :)
>
> Stephen
>
>
>
> On Wed, Apr 16, 2014 at 7:17 PM, Brandon Logan
> <brandon.logan at rackspace.com>wrote:
>
>> You say 'only one port and protocol per load balancer', yet I don't know
>> how this works. Could you define what a 'load balancer' is in this case?
>> (port and protocol are attributes that I would associate with a TCP or UDP
>> listener of some kind.) Are you using 'load balancer' to mean 'listener'
>> in this case (contrary to previous discussion of this on this list and the
>> one defined here https://wiki.openstack.org/wiki/Neutron/LBaaS
>> /Glossary#Loadbalancer )?
>>
>>
>> Yes, it could be considered as a Listener according to that
>> documentation. The way to have a "listener" using the same VIP but listen
>> on two different ports is something we call VIP sharing. You would assign
>> a VIP to one load balancer that uses one port, and then assign that same
>> VIP to another load balancer but that load balancer is using a different
>> port than the first one. How the backend implements it is an
>> implementation detail (redudant, I know). In the case of HaProxy it would
>> just add the second port to the same config that the first load balancer
>> was using. In other drivers it might be different.
>
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/2f0ea5bf/attachment-0001.html>
>
> ------------------------------
>
> Message: 11
> Date: Thu, 17 Apr 2014 20:32:49 -0400
> From: Mike Spreitzer <mspreitz at us.ibm.com>
> To: "OpenStack Development Mailing List \(not for usage questions\)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
> Questions about images
> Message-ID:
> <OF4508D3A6.06E8B95B-ON85257CBE.0002B349-85257CBE.000300B6 at us.ibm.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Steven Dake <sdake at redhat.com> wrote on 04/16/2014 03:31:16 PM:
>
>> ...
>> Fedora 19 shipped in the Fedora cloud images does *NOT* include
>> heat-cfntools. The heat-cfntools package was added only in Fedora
>> 20 qcow2 images. Fedora 19 must be custom made which those
>> prebuilt-jeos-images are. They worked for me last time I fired up an
> image.
>>
>> Regards
>> -steve
>
> When I look at http://fedorapeople.org/groups/heat/prebuilt-jeos-images/
> today I see that F19-x86_64-cfntools.qcow2 is dated Feb 5, 2014. Wherever
> I download it, its MD5 hash is b8fa3cb4e044d4e2439229b55982225c. Have you
> succeeded with an image having that hash?
>
> Thanks,
> Mike
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/f86bf13a/attachment-0001.html>
>
> ------------------------------
>
> Message: 12
> Date: Thu, 17 Apr 2014 20:49:47 -0400
> From: Susanne Balle <sleipnir012 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] HA functionality
> discussion
> Message-ID:
> <CADBYD+w-DLdqkdRQiv_-yb0wKW+no05RjLV8=RLppJKUBQqXdQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I agree that the HA should be hidden to the user/tenant. IMHO a tenant
> should just use a load-balancer as a ?managed? black box where the service
> is resilient in itself.
>
>
>
> Our current Libra/LBaaS implementation in the HP public cloud uses a pool
> of standby LB to replace failing tenant?s LB. Our LBaaS service is
> monitoring itself and replacing LB when they fail. This is via a set of
> Admin API server.
>
>
>
> http://libra.readthedocs.org/en/latest/admin_api/index.html
>
> The Admin server spawns several scheduled threads to run tasks such as
> building new devices for the pool, monitoring load balancer devices and
> maintaining IP addresses.
>
>
>
> http://libra.readthedocs.org/en/latest/pool_mgm/about.html
>
>
> Susanne
>
>
> On Thu, Apr 17, 2014 at 6:49 PM, Stephen Balukoff <sbalukoff at bluebox.net>wrote:
>
>> Heyas, y'all!
>>
>> So, given both the prioritization and usage info on HA functionality for
>> Neutron LBaaS here:
>> https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
>>
>> It's clear that:
>>
>> A. HA seems to be a top priority for most operators
>> B. Almost all load balancer functionality deployed is done so in an
>> Active/Standby HA configuration
>>
>> I know there's been some round-about discussion about this on the list in
>> the past (which usually got stymied in "implementation details"
>> disagreements), but it seems to me that with so many players putting a high
>> priority on HA functionality, this is something we need to discuss and
>> address.
>>
>> This is also apropos, as we're talking about doing a major revision of the
>> API, and it probably makes sense to seriously consider if or how HA-related
>> stuff should make it into the API. I'm of the opinion that almost all the
>> HA stuff should be hidden from the user/tenant, but that the admin/operator
>> at the very least is going to need to have some visibility into HA-related
>> functionality. The hope here is to discover what things make sense to have
>> as a "least common denominator" and what will have to be hidden behind a
>> driver-specific implementation.
>
>> I certainly have a pretty good idea how HA stuff works at our
>> organization, but I have almost no visibility into how this is done
>> elsewhere, leastwise not enough detail to know what makes sense to write
>> API controls for.
>>
>> So! Since gathering data about actual usage seems to have worked pretty
>> well before, I'd like to try that again. Yes, I'm going to be asking about
>> implementation details, but this is with the hope of discovering any "least
>> common denominator" factors which make sense to build API around.
>>
>> For the purposes of this document, when I say "load balancer devices" I
>> mean either physical or virtual appliances, or software executing on a host
>> somewhere that actually does the load balancing. It need not directly
>> correspond with anything physical... but probably does. :P
>>
>> And... all of these questions are meant to be interpreted from the
>> perspective of the cloud operator.
>>
>> Here's what I'm looking to learn from those of you who are allowed to
>> share this data:
>>
>> 1. Are your load balancer devices shared between customers / tenants, not
>> shared, or some of both?
>>
>> 1a. If shared, what is your strategy to avoid or deal with collisions of
>> customer rfc1918 address space on back-end networks? (For example, I know
>> of no load balancer device that can balance traffic for both customer A and
>> customer B if both are using the 10.0.0.0/24 subnet for their back-end
>> networks containing the nodes to be balanced, unless an extra layer of
>> NATing is happening somewhere.)
>>
>> 2. What kinds of metrics do you use in determining load balancing capacity?
>>
>> 3. Do you operate with a pool of unused load balancer device capacity
>> (which a cloud OS would need to keep track of), or do you spin up new
>> capacity (in the form of virtual servers, presumably) on the fly?
>>
>> 3a. If you're operating with a availability pool, can you describe how new
>> load balancer devices are added to your availability pool? Specifically,
>> are there any steps in the process that must be manually performed (ie. so
>> no API could help with this)?
>>
>> 4. How are new devices 'registered' with the cloud OS? How are they
>> removed or replaced?
>>
>> 5. What kind of visibility do you (or would you) allow your user base to
>> see into the HA-related aspects of your load balancing services?
>>
>> 6. What kind of functionality and visibility do you need into the
>> operations of your load balancer devices in order to maintain your
>> services, troubleshoot, etc.? Specifically, are you managing the
>> infrastructure outside the purview of the cloud OS? Are there certain
>> aspects which would be easier to manage if done within the purview of the
>> cloud OS?
>>
>> 7. What kind of network topology is used when deploying load balancing
>> functionality? (ie. do your load balancer devices live inside or outside
>> customer firewalls, directly on tenant networks? Are you using layer-3
>> routing? etc.)
>>
>> 8. Is there any other data you can share which would be useful in
>> considering features of the API that only cloud operators would be able to
>> perform?
>>
>>
>> And since we're one of these operators, here are my responses:
>>
>> 1. We have both shared load balancer devices and private load balancer
>> devices.
>>
>> 1a. Our shared load balancers live outside customer firewalls, and we use
>> IPv6 to reach individual servers behind the firewalls "directly." We have
>> followed a careful deployment strategy across all our networks so that IPv6
>> addresses between tenants do not overlap.
>>
>> 2. The most useful ones for us are "number of appliances deployed" and
>> "number and type of load balancing services deployed" though we also pay
>> attention to:
>> * Load average per "active" appliance
>> * Per appliance number and type of load balancing services deployed
>> * Per appliance bandwidth consumption
>> * Per appliance connections / sec
>> * Per appliance SSL connections / sec
>>
>> Since our devices are software appliances running on linux we also track
>> OS-level metrics as well, though these aren't used directly in the load
>> balancing features in our cloud OS.
>>
>> 3. We operate with an availability pool that our current cloud OS pays
>> attention to.
>>
>> 3a. Since the devices we use correspond to physical hardware this must of
>> course be rack-and-stacked by a datacenter technician, who also does
>> initial configuration of these devices.
>>
>> 4. All of our load balancers are deployed in an active / standby
>> configuration. Two machines which make up an active / standby pair are
>> registered with the cloud OS as a single unit that we call a "load balancer
>> cluster." Our availability pool consists of a whole bunch of these load
>> balancer clusters. (The devices themselves are registered individually at
>> the time the cluster object is created in our database.) There are a couple
>> manual steps in this process (currently handled by the datacenter techs who
>> do the racking and stacking), but these could be automated via API. In
>> fact, as we move to virtual appliances with these, we expect the entire
>> process to become automated via API (first cluster primitive is created,
>> and then "load balancer device objects" get attached to it, then the
>> cluster gets added to our availability pool.)
>>
>> Removal of a "cluster" object is handled by first evacuating any customer
>> services off the cluster, then destroying the load balancer device objects,
>> then the cluster object. Replacement of a single load balancer device
>> entails removing the dead device, adding the new one, synchronizing
>> configuration data to it, and starting services.
>>
>> 5. At the present time, all our load balancing services are deployed in an
>> active / standby HA configuration, so the user has no choice or visibility
>> into any HA details. As we move to Neutron LBaaS, we would like to give
>> users the option of deploying non-HA load balancing capacity. Therefore,
>> the only visibility we want the user to get is:
>>
>> * Choose whether a given load balancing service should be deployed in an
>> HA configuration ("flavor" functionality could handle this)
>> * See whether a running load balancing service is deployed in an HA
>> configuration (and see the "hint" for which physical or virtual device(s)
>> it's deployed on)
>> * Give a "hint" as to which device(s) a new load balancing service should
>> be deployed on (ie. for customers looking to deploy a bunch of test / QA /
>> etc. environments on the same device(s) to reduce costs).
>>
>> Note that the "hint" above corresponds to the "load balancing cluster"
>> alluded to above, not necessarily any specific physical or virtual device.
>> This means we retain the ability to switch out the underlying hardware
>> powering a given service at any time.
>>
>> Users may also see usage data, of course, but that's more of a generic
>> stats / billing function (which doesn't have to do with HA at all, really).
>>
>> 6. We need to see the status of all our load balancing devices, including
>> availability, current role (active or standby), and all the metrics listed
>> under 2 above. Some of this data is used for creating trend graphs and
>> business metrics, so being able to query the current metrics at any time
>> via API is important. It would also be very handy to query specific device
>> info (like revision of software on it, etc.) Our current cloud OS does all
>> this for us, and having Neutron LBaaS provide visibility into all of this
>> as well would be ideal. We do almost no management of our load balancing
>> services outside the purview of our current cloud OS.
>>
>> 7. Shared load balancers must live outside customer firewalls, private
>> load balancers typically live within customer firewalls (sometimes in a
>> DMZ). In any case, we use layer-3 routing (distributed using routing
>> protocols on our core networking gear and static routes on customer
>> firewalls) to route requests for "service IPs" to the "highly available
>> routing IPs" which live on the load balancers themselves. (When a fail-over
>> happens, at a low level, what's really going on is the "highly available
>> routing IPs" shift from the active to standby load balancer.)
>>
>> We have contemplated using layer-2 topology (ie. directly connected on the
>> same vlan / broadcast domain) and are building a version of our appliance
>> which can operate in this way, potentially reducing the reliance on layer-3
>> routes (and making things more friendly for the OpenStack environment,
>> which we understand probably isn't ready for layer-3 routing just yet).
>>
>> 8. I wrote this survey, so none come to mind for me. :)
>>
>> Stephen
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/47b5e716/attachment-0001.html>
>
> ------------------------------
>
> Message: 13
> Date: Thu, 17 Apr 2014 21:11:34 -0400
> From: Davanum Srinivas <davanum at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID:
> <CANw6fcHOTzL3yjg1Hs8Yh+2+zH2AAV3kvSER++443jLVofxQjQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> I do agree with Chris that process-wise, there was a failure and we
> need a way to ensure that it does not happen again. (There was a
> config option lost in the shuffle w/o a deprecation period)
>
> On Thu, Apr 17, 2014 at 7:59 PM, Chris Behrens <cbehrens at codestud.com> wrote:
>>
>> On Apr 17, 2014, at 4:26 PM, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
>>
>> Just an honest question (no negativity intended I swear!).
>>
>> If a configuration option exists and only works with a patched eventlet why
>> is that option an option to begin with? (I understand the reason for the
>> patch, don't get me wrong).
>>
>>
>> Right, it?s a valid question. This feature has existed one way or another in
>> nova for quite a while. Initially the implementation in nova was wrong. I
>> did not know that eventlet was also broken at the time, although I
>> discovered it in the process of fixing nova?s code. I chose to leave the
>> feature because it?s something that we absolutely need long term, unless you
>> really want to live with DB calls blocking the whole process. I know I
>> don?t. Unfortunately the bug in eventlet is out of our control. (I made an
>> attempt at fixing it, but it?s not 100%. Eventlet folks currently have an
>> alternative up that may or may not work? but certainly is not in a release
>> yet.) We have an outstanding bug on our side to track this, also.
>>
>> The below is comparing apples/oranges for me.
>>
>> - Chris
>>
>>
>> Most users would not be able to use such a configuration since they do not
>> have this patched eventlet (I assume a newer version of eventlet someday in
>> the future will have this patch integrated in it?) so although I understand
>> the frustration around this I don't understand why it would be an option in
>> the first place. An aside, if the only way to use this option is via a
>> non-standard eventlet then how is this option tested in the community, aka
>> outside of said company?
>>
>> An example:
>>
>> If yahoo has some patched kernel A that requires an XYZ config turned on in
>> openstack and the only way to take advantage of kernel A is with XYZ config
>> 'on', then it seems like that?s a yahoo only patch that is not testable and
>> useable for others, even if patched kernel A is somewhere on github it's
>> still imho not something that should be a option in the community (anyone
>> can throw stuff up on github and then say I need XYZ config to use it).
>>
>> To me non-standard patches that require XYZ config in openstack shouldn't be
>> part of the standard openstack, no matter the company. If patch A is in the
>> mainline kernel (or other mainline library), then sure it's fair game.
>>
>> -Josh
>>
>> From: Chris Behrens <cbehrens at codestud.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Date: Thursday, April 17, 2014 at 3:20 PM
>> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>> Subject: [openstack-dev] oslo removal of use_tpool conf option
>>
>>
>> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>>
>> In December, oslo apparently removed the code for ?use_tpool? which allows
>> you to run DB calls in Threads because it was ?eventlet specific?. I noticed
>> this when a review was posted to nova to add the option within nova itself:
>>
>> https://review.openstack.org/#/c/59760/
>>
>> I objected to this and asked (more demanded) for this to be added back into
>> oslo. It was not. What I did not realize when I was reviewing this nova
>> patch, was that nova had already synced oslo?s change. And now we?ve
>> released Icehouse with a conf option missing that existed in Havana.
>> Whatever projects were using oslo?s DB API code has had this option
>> disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t
>> know.
>>
>> Some sort of process broke down here. nova uses oslo. And oslo removed
>> something nova uses without deprecating or merging an alternative into nova
>> first. How I believe this should have worked:
>>
>> 1) All projects using oslo?s DB API code should have merged an alternative
>> first.
>> 2) Remove code from oslo.
>> 3) Then sync oslo.
>>
>> What do we do now? I guess we?ll have to back port the removed code into
>> nova. I don?t know about other projects.
>>
>> NOTE: Very few people are probably using this, because it doesn?t work
>> without a patched eventlet. However, Rackspace happens to be one that does.
>> And anyone waiting on a new eventlet to be released such that they could use
>> this with Icehouse is currently out of luck.
>>
>> - Chris
>>
>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com
>
>
>
> ------------------------------
>
> Message: 14
> Date: Fri, 18 Apr 2014 01:11:54 +0000
> From: Carlos Garza <carlos.garza at rackspace.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] HA functionality
> discussion
> Message-ID: <0593A7D3-1772-4EC4-832E-80222017ABB4 at rackspace.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> On Apr 17, 2014, at 5:49 PM, Stephen Balukoff <sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>>
> wrote:
>
> Heyas, y'all!
>
> So, given both the prioritization and usage info on HA functionality for Neutron LBaaS here: https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
>
> It's clear that:
>
> A. HA seems to be a top priority for most operators
> B. Almost all load balancer functionality deployed is done so in an Active/Standby HA configuration
>
> I know there's been some round-about discussion about this on the list in the past (which usually got stymied in "implementation details" disagreements), but it seems to me that with so many players putting a high priority on HA functionality, this is something we need to discuss and address.
>
> This is also apropos, as we're talking about doing a major revision of the API, and it probably makes sense to seriously consider if or how HA-related stuff should make it into the API. I'm of the opinion that almost all the HA stuff should be hidden from the user/tenant, but that the admin/operator at the very least is going to need to have some visibility into HA-related functionality. The hope here is to discover what things make sense to have as a "least common denominator" and what will have to be hidden behind a driver-specific implementation.
>
> I certainly have a pretty good idea how HA stuff works at our organization, but I have almost no visibility into how this is done elsewhere, leastwise not enough detail to know what makes sense to write API controls for.
>
> So! Since gathering data about actual usage seems to have worked pretty well before, I'd like to try that again. Yes, I'm going to be asking about implementation details, but this is with the hope of discovering any "least common denominator" factors which make sense to build API around.
>
> For the purposes of this document, when I say "load balancer devices" I mean either physical or virtual appliances, or software executing on a host somewhere that actually does the load balancing. It need not directly correspond with anything physical... but probably does. :P
>
> And... all of these questions are meant to be interpreted from the perspective of the cloud operator.
>
> Here's what I'm looking to learn from those of you who are allowed to share this data:
>
> 1. Are your load balancer devices shared between customers / tenants, not shared, or some of both?
> If by shared you mean the ability to add and delete loadbalancer Our loadbalancers are not shared by different customers which we call accounts. If your referring to networking then yes they are on the same clan. Our clusters are basically a physical grouping of 4 or 5 stingray devices that share IPs on the vip side. The configs are created on all stingray nodes in a cluster. If a stingray loadbalancer goes down all its vips will be taken over by one of the other 4 or 5 machines. We achieve HA by moving noisy customers IPs to another stingray node. The machine taking over an ip will send a gratuitous ARP response for the router to train its arp table on. Usually we have 2 stingray nodes available for fail over. We could have spread the load across all boxes evenly but we felt that if we were near the end of the capacity for a given cluster if one of the nodes tanked this would have degraded performance as the other nodes were already nearing capacity.
>
> We also have the usual dual switch dual router set up incase one dies config.
>
> 1a. If shared, what is your strategy to avoid or deal with collisions of customer rfc1918 address space on back-end networks? (For example, I know of no load balancer device that can balance traffic for both customer A and customer B if both are using the 10.0.0.0/24<http://10.0.0.0/24> subnet for their back-end networks containing the nodes to be balanced, unless an extra layer of NATing is happening somewhere.)
>
> We order a set of CIDR blocks from our backbone and route them to our Cluster via a 10Gig/s link which In our bigger clusters can be upgraded via link bonding.
> downstream we have two routes to one route for our own internal ServiceNet 10.0.0.0/8 space and the public Internet for everything not on our service net. Our pool members are specified by CIDR block only with no association to a layer 2 network. When customers create their cloud servers they will be assigned an IP with in the address space of 10.0.0.0/24 and also get a publicly routable IP address. At that point the customer cane achieve isolation via IP tables or what ever tools their VM supports. In theory a user could mistaking punch in an IP address in a node that doesn't belong to them but that just means the lb will route to only one machine but the loadbalancer would be useless at that point. We don't charge our users for bandwidth going across service net since each DC has its own service net and our customers want to have the LoadBalancer close to their servers anyways. If they want to host back end servers on say Amazon hostgater or what ever then then the loadbalancer will unfortunately route over the public interet for those. I'm not sure why customers would want to do this but we were flexible enough to support it. In short HA is achieved through shared ips between our Stingray Nodes.
> We have 2 fail over nodes that basically do nothing on standby just in case an active node suddenly dies. So I guess you could call this HA n+1. We also divide the cluster into two cabinets with a failover in each one heaven forbid a whole cabinet should sudden fail. We've never seen this happen Knock on wood.
>
>
> 2. What kinds of metrics do you use in determining load balancing capacity?
>
> so far we've been measuring bandwidth for the most part as it usually caps out before CPU does. Out newest Stingray nodes have 24 cores. We of course gather metrics for IP space left (So we can order more ahead of time) We have noticed that we are limited to horizontal scaling of around 6 stingray nodes. CPU load goes up after 6 nodes which we have determined to be due to the rapidly changing configs must be synced across all the stingray nodes, stingray has a pretty nasty flaw in how it sends its configs to its other stingray nodes in cluster.
>
> In the case of SSL if an SSL user uses SSL in mixed mode (meaning http and https not sure why'ed they do) we actually set up two virtual servers transparent to the customers so we track the ssl bandwidth separately but using the same SNMP call.
>
> 3. Do you operate with a pool of unused load balancer device capacity (which a cloud OS would need to keep track of), or do you spin up new capacity (in the form of virtual servers, presumably) on the fly?
>
> Kind of answered in question 1. This doesn't apply to much to us as we use physical loadbalancers behind our API. For CLB2.0 we would like to see how we would achieve the same level of HA in the virtual space world.
>
> 3a. If you're operating with a availability pool, can you describe how new load balancer devices are added to your availability pool? Specifically, are there any steps in the process that must be manually performed (ie. so no API could help with this)?
>
> The API could help with some aspects of this. For example we have and are advocating a separate management API thats separate from the public one that can do things like tell the provisioner(What your calling a scheduler) when new capacity is available how to route to it and store this in the database for the public API to use in determing how to allocate resources. Our management API in particular is used to add IPv4 address space to our database once backbone routes them to us. So its like our current process involves the classic Hey Back bone I'd like to order a new /22 were running low on ips. Then the management interface could then be called to add the CIDR block so that it can track the ips in its database.
>
>
> 4. How are new devices 'registered' with the cloud OS? How are they removed or replaced?
>
> 5. What kind of visibility do you (or would you) allow your user base to see into the HA-related aspects of your load balancing services?
> We don't. We view HA in terms of redundant hardware and floating IPs and being that end users don't control those its not visible. We do state our 4 nines uptime which hasten't been broken as well as compensation for violations of our end of the SLA agreement.
>
> http://www.rackspace.com/information/legal/cloud/sla
> https://status.rackspace.com/
>
> 6. What kind of functionality and visibility do you need into the operations of your load balancer devices in order to maintain your services, troubleshoot, etc.? Specifically, are you managing the infrastructure outside the purview of the cloud OS? Are there certain aspects which would be easier to manage if done within the purview of the cloud OS?
>
> We wrote SNMP tools to monitor the Stingray nodes which our executed by our Api nodes. Stingray offers a rich oid MIB that allows us to track pretty much anything. But we only look at Bandwidth In, Bandwidth out, and the number of concurrent connections. I'm considering adding CPU statistics now actually.
>
>
> 7. What kind of network topology is used when deploying load balancing functionality? (ie. do your load balancer devices live inside or outside customer firewalls, directly on tenant networks? Are you using layer-3 routing? etc.)
>
> Just pure layer3. This limitation has left us wanting a private networking solution and during that investigation we arrived here in Neutron/Lbaas.
>
>
>
> 8. Is there any other data you can share which would be useful in considering features of the API that only cloud operators would be able to perform?
>
> Shared and failoverable IPs is desired. but much of the HA stuff will come from the driver/provider. I just think floating IPs just needs to be supported by the api or at least query able to see if it supports floating ips.
>
> And since we're one of these operators, here are my responses:
>
> 1. We have both shared load balancer devices and private load balancer devices.
>
> 1a. Our shared load balancers live outside customer firewalls, and we use IPv6 to reach individual servers behind the firewalls "directly." We have followed a careful deployment strategy across all our networks so that IPv6 addresses between tenants do not overlap.
>
> yea us too. We hash the tenant_id into 32bits and use it in bits 64-96 leaving the customer with 32 bits to play for their hosts. If they need more then 4 billion then we have bigger problems. Our cluster is a /48 so were wasting 16 bits on nothing in the middle.
> Cluster, Tenant id, host_id
> CCCC:CCCC:CCCC:0000:TTTT:TTTT:HHHH:HHHH
>
>
>
> 2. The most useful ones for us are "number of appliances deployed" and "number and type of load balancing services deployed" though we also pay attention to:
> * Load average per "active" appliance
> * Per appliance number and type of load balancing services deployed
> * Per appliance bandwidth consumption
> * Per appliance connections / sec
> * Per appliance SSL connections / sec
>
> Since our devices are software appliances running on linux we also track OS-level metrics as well, though these aren't used directly in the load balancing features in our cloud OS.
>
> 3. We operate with an availability pool that our current cloud OS pays attention to.
>
> 3a. Since the devices we use correspond to physical hardware this must of course be rack-and-stacked by a datacenter technician, who also does initial configuration of these devices.
>
> 4. All of our load balancers are deployed in an active / standby configuration. Two machines which make up an active / standby pair are registered with the cloud OS as a single unit that we call a "load balancer cluster." Our availability pool consists of a whole bunch of these load balancer clusters. (The devices themselves are registered individually at the time the cluster object is created in our database.) There are a couple manual steps in this process (currently handled by the datacenter techs who do the racking and stacking), but these could be automated via API. In fact, as we move to virtual appliances with these, we expect the entire process to become automated via API (first cluster primitive is created, and then "load balancer device objects" get attached to it, then the cluster gets added to our availability pool.)
>
> Removal of a "cluster" object is handled by first evacuating any customer services off the cluster, then destroying the load balancer device objects, then the cluster object. Replacement of a single load balancer device entails removing the dead device, adding the new one, synchronizing configuration data to it, and starting services.
>
> 5. At the present time, all our load balancing services are deployed in an active / standby HA configuration, so the user has no choice or visibility into any HA details. As we move to Neutron LBaaS, we would like to give users the option of deploying non-HA load balancing capacity. Therefore, the only visibility we want the user to get is:
>
> * Choose whether a given load balancing service should be deployed in an HA configuration ("flavor" functionality could handle this)
> * See whether a running load balancing service is deployed in an HA configuration (and see the "hint" for which physical or virtual device(s) it's deployed on)
> * Give a "hint" as to which device(s) a new load balancing service should be deployed on (ie. for customers looking to deploy a bunch of test / QA / etc. environments on the same device(s) to reduce costs).
>
> Note that the "hint" above corresponds to the "load balancing cluster" alluded to above, not necessarily any specific physical or virtual device. This means we retain the ability to switch out the underlying hardware powering a given service at any time.
>
> Users may also see usage data, of course, but that's more of a generic stats / billing function (which doesn't have to do with HA at all, really).
>
> 6. We need to see the status of all our load balancing devices, including availability, current role (active or standby), and all the metrics listed under 2 above. Some of this data is used for creating trend graphs and business metrics, so being able to query the current metrics at any time via API is important. It would also be very handy to query specific device info (like revision of software on it, etc.) Our current cloud OS does all this for us, and having Neutron LBaaS provide visibility into all of this as well would be ideal. We do almost no management of our load balancing services outside the purview of our current cloud OS.
>
> 7. Shared load balancers must live outside customer firewalls, private load balancers typically live within customer firewalls (sometimes in a DMZ). In any case, we use layer-3 routing (distributed using routing protocols on our core networking gear and static routes on customer firewalls) to route requests for "service IPs" to the "highly available routing IPs" which live on the load balancers themselves. (When a fail-over happens, at a low level, what's really going on is the "highly available routing IPs" shift from the active to standby load balancer.)
>
> We have contemplated using layer-2 topology (ie. directly connected on the same vlan / broadcast domain) and are building a version of our appliance which can operate in this way, potentially reducing the reliance on layer-3 routes (and making things more friendly for the OpenStack environment, which we understand probably isn't ready for layer-3 routing just yet).
>
> 8. I wrote this survey, so none come to mind for me. :)
>
> Stephen
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140418/f1d24ef3/attachment-0001.html>
>
> ------------------------------
>
> Message: 15
> Date: Thu, 17 Apr 2014 20:31:44 -0500
> From: Brandon Logan <brandon.logan at rackspace.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements and API
> revision progress
> Message-ID: <53508080.7010603 at rackspace.com>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Stephen,
> Thanks for elaborating on this. I agreed and still do that our
> proposal's load balancer falls more in line with that glossary's term
> for "listener" and now can see the discrepancy with "load balancer".
> Yes, in this case the term "load balancer" would have to be redefined,
> but that doesn't mean it is the wrong thing to do.
>
> I've always been on the side of the Load Balancing as a Service API
> using a root object called a "load balancer". This just really makes
> sense to me and others, but obviously it doesn't for everyone. However,
> in our experience end users just understand the service better when the
> service takes in load balancer objects and returns load balancer objects.
>
> Also, since it has been tasked to defined a new API we felt that it was
> implied that some definitions were going to change, especially those
> that are subjective. There are definitely many definitions of a load
> balancer. Is a load balancer an appliance (virtual or physical) that
> load balances many protocols and ports and is it also one that load
> balances a single protocol on a single port? I would say that is
> definitely subjective. Obviously I, and others, feel that both of those
> are true. I would like to hear arguments as to why one of them is not
> true, though.
>
> Either way, we could have named that object a "sqonkey" and given a
> definition in that glossary. Now we can all agree that while that word
> is just an amazing word, its a terrible name to use in any context for
> this service. It seems to me that an API can define and also redefine
> subjective terms.
>
> I'm glad you don't find this as a deal breaker and are okay with
> redefining the term. I hope we all can come to agreement on an API and
> I hope it satisfies everyone's needs and ideas of a good API.
>
> Thanks,
> Brandon
>
> On 04/17/2014 07:03 PM, Stephen Balukoff wrote:
>> Hi Brandon!
>>
>> Per the meeting this morning, I seem to recall you were looking to
>> have me elaborate on why the term 'load balancer' as used in your API
>> proposal is significantly different from the term 'load balancer' as
>> used in the glossary at:
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary
>>
>> As promised, here's my elaboration on that:
>>
>> The glossary above states: "An object that represent a logical load
>> balancer that may have multiple resources such as Vips, Pools,
>> Members, etc.Loadbalancer is a root object in the meaning described
>> above." and references the diagram here:
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#Loadbalancer_instance_solution
>>
>> On that diagram, it's clear that VIPs, & etc. are subordinate objects
>> to a load balancer. What's more, attributes like 'protocol' and 'port'
>> are not part of the load balancer object in that diagram (they're part
>> of a 'VIP' in one proposed version, and part of a 'Listener' in the
>> others).
>>
>> In your proposal, you state "only one port and one protocol per load
>> balancer," and then later (on page 9 under "GET /vips") you show that
>> a vip may have many load balancers associated with it. So clearly,
>> "load balancer" the way you're using it is subordinate to a VIP. So in
>> the glossary, it sounds like the object which has a single port and
>> protocol associated with it that is subordinate to a VIP: A listener.
>>
>> Now, I don't really care if y'all decide to re-define "load balancer"
>> from what is in the glossary so long as you do define it clearly in
>> the proposal. (If we go with your proposal, it would then make sense
>> to update the glossary accordingly.) Mostly, I'm just trying to avoid
>> confusion because it's exactly these kinds of misunderstandings which
>> have stymied discussion and progress in the past, eh.
>>
>> Also-- I can guess where the confusion comes from: I'm guessing most
>> customers refer to "a service which listens on a tcp or udp port,
>> understands a specific protocol, and forwards data from the connecting
>> client to some back-end server which actually services the request" as
>> a "load balancer." It's entirely possible that in the glossary and in
>> previous discussions we've been mis-using the term (like we have with
>> VIP). Personally, I suspect it's an overloaded term that in use in our
>> industry means different things depending on context (and is probably
>> often mis-used by people who don't understand what load balancing
>> actually is). Again, I care less about what specific terms we decide
>> on so long as we define them so that everyone can be on the same page
>> and know what we're talking about. :)
>>
>> Stephen
>>
>>
>>
>> On Wed, Apr 16, 2014 at 7:17 PM, Brandon Logan
>> <brandon.logan at rackspace.com <mailto:brandon.logan at rackspace.com>> wrote:
>>
>>> You say 'only one port and protocol per load balancer', yet I
>>> don't know how this works. Could you define what a 'load
>>> balancer' is in this case? (port and protocol are attributes
>>> that I would associate with a TCP or UDP listener of some kind.)
>>> Are you using 'load balancer' to mean 'listener' in this case
>>> (contrary to previous discussion of this on this list and the one
>>> defined here
>>> https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary#Loadbalancer
>>> <https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary#Loadbalancer>
>>> )?
>>
>> Yes, it could be considered as a Listener according to that
>> documentation. The way to have a "listener" using the same VIP
>> but listen on two different ports is something we call VIP
>> sharing. You would assign a VIP to one load balancer that uses
>> one port, and then assign that same VIP to another load balancer
>> but that load balancer is using a different port than the first
>> one. How the backend implements it is an implementation detail
>> (redudant, I know). In the case of HaProxy it would just add the
>> second port to the same config that the first load balancer was
>> using. In other drivers it might be different.
>>
>>
>>
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/05ce1153/attachment-0001.html>
>
> ------------------------------
>
> Message: 16
> Date: Thu, 17 Apr 2014 18:39:22 -0700
> From: Stephen Balukoff <sbalukoff at bluebox.net>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaas] "Single call" API
> discussion
> Message-ID:
> <CAAGw+ZrvUv5H8nWzwpidNOFz4TLQi2zVD24bOPWZMjw085SeOQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello German and Brandon!
>
> Responses in-line:
>
>
> On Thu, Apr 17, 2014 at 3:46 PM, Brandon Logan
> <brandon.logan at rackspace.com>wrote:
>
>> Stephen,
>> I have responded to your questions below.
>>
>>
>> On 04/17/2014 01:02 PM, Stephen Balukoff wrote:
>>
>> Howdy folks!
>>
>> Based on this morning's IRC meeting, it seems to me there's some
>> contention and confusion over the need for "single call" functionality for
>> load balanced services in the new API being discussed. This is what I
>> understand:
>>
>> * Those advocating "single call" are arguing that this simplifies the
>> API for users, and that it more closely reflects the users' experience with
>> other load balancing products. They don't want to see this functionality
>> necessarily delegated to an orchestration layer (Heat), because
>> coordinating how this works across two OpenStack projects is unlikely to
>> see success (ie. it's hard enough making progress with just one project). I
>> get the impression that people advocating for this feel that their current
>> users would not likely make the leap to Neutron LBaaS unless some kind of
>> functionality or workflow is preserved that is no more complicated than
>> what they currently have to do.
>>
>> Another reason, which I've mentioned many times before and keeps getting
>> ignored, is because the more primitives you add the longer it will take to
>> provision a load balancer. Even if we relied on the orchestration layer to
>> build out all the primitives, it still will take much more time to
>> provision a load balancer than a single create call provided by the API.
>> Each request and response has an inherent time to process. Many primitives
>> will also have an inherent build time. Combine this in an environment that
>> becomes more and more dense, build times will become very unfriendly to end
>> users whether they are using the API directly, going through a UI, or going
>> through an orchestration layer. This industry is always trying to improve
>> build/provisioning times and there are no reasons why we shouldn't try to
>> achieve the same goal.
>
> Noted.
>
>
>> * Those (mostly) against the idea are interested in seeing the API
>> provide primitives and delegating "higher level" single-call stuff to Heat
>> or some other orchestration layer. There was also the implication that if
>> "single-call" is supported, it ought to support both simple and advanced
>> set-ups in that single call. Further, I sense concern that if there are
>> multiple ways to accomplish the same thing supported in the API, this
>> redundancy breeds complication as more features are added, and in
>> developing test coverage. And existing Neutron APIs tend to expose only
>> primitives. I get the impression that people against the idea could be
>> convinced if more compelling reasons were illustrated for supporting
>> single-call, perhaps other than "we don't want to change the way it's done
>> in our environment right now."
>>
>> I completely disagree with "we dont want to change the way it's done in
>> our environment right now". Our proposal has changed the way our current
>> API works right now. We do not have the notion of primitives in our
>> current API and our proposal included the ability to construct a load
>> balancer with primitives individually. We kept that in so that those
>> operators and users who do like constructing a load balancer that way can
>> continue doing so. What we are asking for is to keep our users happy when
>> we do deploy this in a production environment and maintain a single create
>> load balancer API call.
> There's certainly something to be said for having a less-disruptive user
> experience. And after all, what we've been discussing is so radical a
> change that it's close to starting over from scratch in many ways.
>
>
>>
>> I've mostly stayed out of this debate because our solution as used by
>> our customers presently isn't "single-call" and I don't really understand
>> the requirements around this.
>>
>> So! I would love it if some of you could fill me in on this, especially
>> since I'm working on a revision of the proposed API. Specifically, what I'm
>> looking for is answers to the following questions:
>>
>> 1. Could you please explain what you understand single-call API
>> functionality to be?
>>
>> Single-call API functionality is a call that supports adding multiple
>> features to an entity (load balancer in this case) in one API request.
>> Whether this supports all features of a load balancer or a subset is up for
>> debate. I prefer all features to be supported. Yes it adds complexity,
>> but complexity is always introduced by improving the end user experience
>> and I hope a good user experience is a goal.
>
> Got it. I think we all want to improve the user experience.
>
>>
>> 2. Could you describe the simplest use case that uses single-call API in
>> your environment right now? Please be very specific-- ideally, a couple
>> examples of specific CLI commands a user might run, or API (along with
>> specific configuration data) would be great.
>>
>>
>> http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Create_Load_Balancer-d1e1635.html
>>
>> This page has many different ways to configure a load balancer with one
>> call. It ranges from a simple load balancer to a load balancer with a much
>> more complicated configuration. Generally, if any of those features are
>> allowed on a load balancer then it is supported through the single call.
>>
>>
>> I'm going to use example 4.10 as the "simplest" case I'm seeing there.
> (Also because I severely dislike XML ;) )
>
>
>
>> 3. Could you describe the most complicated use case that your
>> single-call API supports? Again, please be very specific here.
>>
>> Same data can be derived from the link above.
> Ok, I'm actually not seeing and complicated examples, but I'm guessing that
> any attributes at the top of the page could be expanded on according the
> the syntax described.
>
> Hmmm... one of the draw-backs I see with a "one-call" approach is you've
> got to have really good syntax checking for everything right from the
> start, or (if you plan to handle primitives one at a time) a really solid
> roll-back strategy if anything fails or has problems, cleaning up any
> primitives that might already have been created before the whole call
> completes.
>
> The alternative is to not do this with primitives... but then I don't see
> how that's possible either. (And certainly not easy to write tests for:
> The great thing about small primitives is their methods tend to be easier
> to unit test.)
>
>>
>> 4. What percentage of your customer base are used to using single-call
>> functionality, and what percentage are used to manipulating primitives?
>>
>> 100% but just like others it is the only way to create a load balancer in
>> our API. So this data doesn't mean much.
>>
>> Oh! One other question:
>>
>> 5. Should "single-call" stuff work for the lifecycle of a load
>> balancing service? That is to say, should "delete" functionality also clean
>> up all primitives associated with the service?
>>
>> How we were thinking was that it would just "detach" the primitives from
>> the load balancer but keep them available for association with another load
>> balancer. A user would only be able to actually delete a primitive if it
>> went through the root primitive resource (i.e. /pools, /vips). However,
>> this is definitely up for debate and there are pros and cons to doing it
>> both ways. If the system completely deletes the primitives on the deletion
>> of the load balancer, then the system has to handle when one of those
>> primitives is being shared with another load balancer.
> That makes sense-- but I think it could end in disaster for the poor fool
> who isn't aware of that and makes "deploying load balancing services" part
> of their continuous integration run. In very little time, they'd have
> bazillions of abandoned primitives. At the same time, it doesn't make sense
> to delete shared primitives, lest you horribly break things for service B
> by nuking service A.
>
> So, going with the principle of least surprise here, it seems to me that
> most people attempting a delete in a single call are going to want all the
> non-shared primitives deleted (in a cascading effect) unless they specify
> that they want the primitives preserved. It would be great if there were a
> good way to set this as an option somehow (though I know an HTTP DELETE
> doesn't allow for this kind of flexibility-- maybe something appended to
> the URI if you want to preserve non-shared primitives?)
>
> Deleting a primitive (ie. not using single-call) should clearly just delete
> the primitive. Though, of course, it would be nice to specify (using some
> flag) that the delete should be ignored if the primitive happens to be
> shared.
>
>
>
> On Thu, Apr 17, 2014 at 2:23 PM, Eichberger, German <
> german.eichberger at hp.com> wrote:
>
>> Hi Stephen,
>>
>>
>>
>> 1. Could you please explain what you understand single-call API
>> functionality to be?
>>
>> From my perspective most of our users will likely create load balancers
>> via a web interface. Thought not necessary, having a single API call makes
>> it easier to develop the web interface.
>>
>>
>>
>> For the ?expert? users I envision them to create a load balancer, tweak
>> with the settings, and when they arrive at the load balancer they need to
>> automate the creation of it. So if they have to create several objects with
>> multiple calls in a particular order that is far too complicated and makes
>> the learning curve very steep from the GUI to the API. Hence, I like being
>> able to do one call and get a functioning load balancer. I like that aspect
>> from Jorge?s proposal. On the other hand making a single API call contain
>> all possible settings might make it too complex for the casual user who
>> just wants some feature activated the GUI doesn?t provide?.
>
> That makes sense. Are you envisioning having a function in the GUI to "show
> me the CLI or API command to do this" once a user has ticked all the
> check-boxes they want and filled in the fields they want?
>
> For our power users-- I could see some of them occasionally updating
> primitives. Probably the most common API command we see has to do with
> users who have written their own scaling algorithms which add and remove
> members from a pool as they see load on their app servers change throughout
> the day (and spin up / shut down app server clones in response).
>
>
>>
>>
>> 2. Could you describe the simplest use case that uses single-call API in
>> your environment right now?
>>
>> Please be very specific-- ideally, a couple examples of specific CLI
>> commands a user might run, or API (along with specific configuration data)
>> would be great.
>>
>>
>>
>>
>> http://libra.readthedocs.org/en/latest/api/rest/load-balancer.html#create-a-new-load-balancer
> Got it. Looks straight-forward.
>
>
>>
>>
>> 5. Should "single-call" stuff work for the lifecycle of a load balancing
>> service? That is to say, should "delete" functionality also clean up all
>> primitives associated with the service?
>>
>>
>>
>> Yes. If a customer doesn?t like a load balancer any longer one call will
>> remove it. This makes a lot of things easier:
>>
>> - GUI development ? one call does it all
>>
>> - Cleanup scripts: If a customer leaves us we just need to run
>> delete on a list of load balancers ? ideally if the API had a call to
>> delete all load balancers of a specific user/project that would be even
>> better J
>>
>> - The customer can tear down test/dev/etc. load balancer very
>> quickly
>>
>>
>> What do you think of my "conditional cascading delete" idea (ie. nuke
> everything but shared primitives) above for the usual / least surprise case?
>
>
> Thanks,
> Stephen
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, Inc.
> (800)613-4305 x807
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/61c5cd33/attachment-0001.html>
>
> ------------------------------
>
> Message: 17
> Date: Fri, 18 Apr 2014 01:42:27 +0000
> From: Joshua Harlow <harlowja at yahoo-inc.com>
> To: Chris Behrens <cbehrens at codestud.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
> Message-ID: <CF75D071.5C83F%harlowja at yahoo-inc.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Thanks for the good explanation, was just a curiosity of mine.
>
> Any idea why it has taken so long for the eventlet folks to fix this (I know u proposed a patch/patches a while ago)? Is eventlet really that unmaintained? :(
>
> From: Chris Behrens <cbehrens at codestud.com<mailto:cbehrens at codestud.com>>
> Date: Thursday, April 17, 2014 at 4:59 PM
> To: Joshua Harlow <harlowja at yahoo-inc.com<mailto:harlowja at yahoo-inc.com>>
> Cc: Chris Behrens <cbehrens at codestud.com<mailto:cbehrens at codestud.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [openstack-dev] oslo removal of use_tpool conf option
>
>
> On Apr 17, 2014, at 4:26 PM, Joshua Harlow <harlowja at yahoo-inc.com<mailto:harlowja at yahoo-inc.com>> wrote:
>
> Just an honest question (no negativity intended I swear!).
>
> If a configuration option exists and only works with a patched eventlet why is that option an option to begin with? (I understand the reason for the patch, don't get me wrong).
>
>
> Right, it?s a valid question. This feature has existed one way or another in nova for quite a while. Initially the implementation in nova was wrong. I did not know that eventlet was also broken at the time, although I discovered it in the process of fixing nova?s code. I chose to leave the feature because it?s something that we absolutely need long term, unless you really want to live with DB calls blocking the whole process. I know I don?t. Unfortunately the bug in eventlet is out of our control. (I made an attempt at fixing it, but it?s not 100%. Eventlet folks currently have an alternative up that may or may not work? but certainly is not in a release yet.) We have an outstanding bug on our side to track this, also.
>
> The below is comparing apples/oranges for me.
>
> - Chris
>
>
> Most users would not be able to use such a configuration since they do not have this patched eventlet (I assume a newer version of eventlet someday in the future will have this patch integrated in it?) so although I understand the frustration around this I don't understand why it would be an option in the first place. An aside, if the only way to use this option is via a non-standard eventlet then how is this option tested in the community, aka outside of said company?
>
> An example:
>
> If yahoo has some patched kernel A that requires an XYZ config turned on in openstack and the only way to take advantage of kernel A is with XYZ config 'on', then it seems like that?s a yahoo only patch that is not testable and useable for others, even if patched kernel A is somewhere on github it's still imho not something that should be a option in the community (anyone can throw stuff up on github and then say I need XYZ config to use it).
>
> To me non-standard patches that require XYZ config in openstack shouldn't be part of the standard openstack, no matter the company. If patch A is in the mainline kernel (or other mainline library), then sure it's fair game.
>
> -Josh
>
> From: Chris Behrens <cbehrens at codestud.com<mailto:cbehrens at codestud.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Date: Thursday, April 17, 2014 at 3:20 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: [openstack-dev] oslo removal of use_tpool conf option
>
>
> I?m going to try to not lose my cool here, but I?m extremely upset by this.
>
> In December, oslo apparently removed the code for ?use_tpool? which allows you to run DB calls in Threads because it was ?eventlet specific?. I noticed this when a review was posted to nova to add the option within nova itself:
>
> https://review.openstack.org/#/c/59760/
>
> I objected to this and asked (more demanded) for this to be added back into oslo. It was not. What I did not realize when I was reviewing this nova patch, was that nova had already synced oslo?s change. And now we?ve released Icehouse with a conf option missing that existed in Havana. Whatever projects were using oslo?s DB API code has had this option disappear (unless an alternative was merged). Maybe it?s only nova.. I don?t know.
>
> Some sort of process broke down here. nova uses oslo. And oslo removed something nova uses without deprecating or merging an alternative into nova first. How I believe this should have worked:
>
> 1) All projects using oslo?s DB API code should have merged an alternative first.
> 2) Remove code from oslo.
> 3) Then sync oslo.
>
> What do we do now? I guess we?ll have to back port the removed code into nova. I don?t know about other projects.
>
> NOTE: Very few people are probably using this, because it doesn?t work without a patched eventlet. However, Rackspace happens to be one that does. And anyone waiting on a new eventlet to be released such that they could use this with Icehouse is currently out of luck.
>
> - Chris
>
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140418/fdd17c4e/attachment-0001.html>
>
> ------------------------------
>
> Message: 18
> Date: Fri, 18 Apr 2014 01:54:23 +0000
> From: Kenichi Oomichi <oomichi at mxs.nes.nec.co.jp>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [QA] [Tempest] Spreadsheet for Nova API tests
> Message-ID:
> <663E0F2C6F9EEA4AAD1DEEBD89158AAEFD2627 at BPXM06GP.gisp.nec.co.jp>
> Content-Type: text/plain; charset="iso-2022-jp"
>
> Hi,
>
> We are using a google spreadsheet for managing new Nova API tests of Tempest.
> https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc#gid=2
> When creating a new Nova API test, please see the spreadsheet to avoid the
> conflicted work. We can create a test which "assignee" is empty on the sheet.
> Before posting a new patch, please write your name as "assignee".
>
> I found 3 same patches yesterday through patch reviews. All patches seemed
> nice, but 2 patches should be a waste. So I hope we will avoid it.
>
> Thanks
> Ken'ichi Ohmichi
>
>
>
>
> ------------------------------
>
> Message: 19
> Date: Thu, 17 Apr 2014 19:01:20 -0700
> From: Stephen Balukoff <sbalukoff at bluebox.net>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements and API
> revision progress
> Message-ID:
> <CAAGw+Zqg5HGt8nefSix+e7Db=-1hQzQLXYR8RaaL2WGfVSe+bQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Brandon,
>
> Yep! I agree that both those definitions are correct: It all depends on
> context.
>
> I'm usually OK with going with whatever definition is in popular use by the
> user-base. However, "load balancer" as a term is so ambiguous among people
> actually developing a cloud load balancing system that a definition or more
> specific term is probably called for. :)
>
> In any case, all I'm really looking for is a glossary of defined terms
> attached to the API proposal, especially for terms like this that can have
> several meanings depending on context. (That is to say, I don't think it's
> necessary to define "IP address" for example-- unless, say, the
> distinction between IPv4 or IPv6 becomes important to the conversation
> somehow.)
>
> In any case note that I actually like your API thus far and think it's a
> pretty good start: Y'all put forth the laudable effort to actually create
> it, there's obviously a lot of forethought put into your proposal, and that
> certainly deserves respect! In fact, my team and I will probably be
> building off of what you've started in creating our proposal (which, again,
> I hope to have in a "showable" state before next week's meeting, and which
> I'm anticipating won't be the final form this API revision takes anyway.)
>
> Thanks,
> Stephen
>
> "There are only two truly difficult problems in computer science: Naming
> things, cache invalidation, and off-by-one errors."
>
>
>
> On Thu, Apr 17, 2014 at 6:31 PM, Brandon Logan
> <brandon.logan at rackspace.com>wrote:
>
>> Stephen,
>> Thanks for elaborating on this. I agreed and still do that our proposal's
>> load balancer falls more in line with that glossary's term for "listener"
>> and now can see the discrepancy with "load balancer". Yes, in this case
>> the term "load balancer" would have to be redefined, but that doesn't mean
>> it is the wrong thing to do.
>>
>> I've always been on the side of the Load Balancing as a Service API using
>> a root object called a "load balancer". This just really makes sense to me
>> and others, but obviously it doesn't for everyone. However, in our
>> experience end users just understand the service better when the service
>> takes in load balancer objects and returns load balancer objects.
>>
>> Also, since it has been tasked to defined a new API we felt that it was
>> implied that some definitions were going to change, especially those that
>> are subjective. There are definitely many definitions of a load balancer.
>> Is a load balancer an appliance (virtual or physical) that load balances
>> many protocols and ports and is it also one that load balances a single
>> protocol on a single port? I would say that is definitely subjective.
>> Obviously I, and others, feel that both of those are true. I would like to
>> hear arguments as to why one of them is not true, though.
>>
>> Either way, we could have named that object a "sqonkey" and given a
>> definition in that glossary. Now we can all agree that while that word is
>> just an amazing word, its a terrible name to use in any context for this
>> service. It seems to me that an API can define and also redefine
>> subjective terms.
>>
>> I'm glad you don't find this as a deal breaker and are okay with
>> redefining the term. I hope we all can come to agreement on an API and I
>> hope it satisfies everyone's needs and ideas of a good API.
>>
>> Thanks,
>> Brandon
>>
>>
>> On 04/17/2014 07:03 PM, Stephen Balukoff wrote:
>>
>> Hi Brandon!
>>
>> Per the meeting this morning, I seem to recall you were looking to have
>> me elaborate on why the term 'load balancer' as used in your API proposal
>> is significantly different from the term 'load balancer' as used in the
>> glossary at: https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary
>>
>> As promised, here's my elaboration on that:
>>
>> The glossary above states: "An object that represent a logical load
>> balancer that may have multiple resources such as Vips, Pools, Members, etc.Loadbalancer
>> is a root object in the meaning described above." and references the
>> diagram here:
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#Loadbalancer_instance_solution
>>
>> On that diagram, it's clear that VIPs, & etc. are subordinate objects to
>> a load balancer. What's more, attributes like 'protocol' and 'port' are not
>> part of the load balancer object in that diagram (they're part of a 'VIP'
>> in one proposed version, and part of a 'Listener' in the others).
>>
>> In your proposal, you state "only one port and one protocol per load
>> balancer," and then later (on page 9 under "GET /vips") you show that a vip
>> may have many load balancers associated with it. So clearly, "load
>> balancer" the way you're using it is subordinate to a VIP. So in the
>> glossary, it sounds like the object which has a single port and protocol
>> associated with it that is subordinate to a VIP: A listener.
>>
>> Now, I don't really care if y'all decide to re-define "load balancer"
>> from what is in the glossary so long as you do define it clearly in the
>> proposal. (If we go with your proposal, it would then make sense to update
>> the glossary accordingly.) Mostly, I'm just trying to avoid confusion
>> because it's exactly these kinds of misunderstandings which have stymied
>> discussion and progress in the past, eh.
>>
>> Also-- I can guess where the confusion comes from: I'm guessing most
>> customers refer to "a service which listens on a tcp or udp port,
>> understands a specific protocol, and forwards data from the connecting
>> client to some back-end server which actually services the request" as a
>> "load balancer." It's entirely possible that in the glossary and in
>> previous discussions we've been mis-using the term (like we have with VIP).
>> Personally, I suspect it's an overloaded term that in use in our industry
>> means different things depending on context (and is probably often mis-used
>> by people who don't understand what load balancing actually is). Again, I
>> care less about what specific terms we decide on so long as we define them
>> so that everyone can be on the same page and know what we're talking about.
>> :)
>>
>> Stephen
>>
>>
>>
>> On Wed, Apr 16, 2014 at 7:17 PM, Brandon Logan <
>> brandon.logan at rackspace.com> wrote:
>>
>>> You say 'only one port and protocol per load balancer', yet I don't
>>> know how this works. Could you define what a 'load balancer' is in this
>>> case? (port and protocol are attributes that I would associate with a TCP
>>> or UDP listener of some kind.) Are you using 'load balancer' to mean
>>> 'listener' in this case (contrary to previous discussion of this on this
>>> list and the one defined here https://wiki.openstack.org/wiki/Neutron/
>>> LBaaS/Glossary#Loadbalancer )?
>>>
>>>
>>> Yes, it could be considered as a Listener according to that
>>> documentation. The way to have a "listener" using the same VIP but listen
>>> on two different ports is something we call VIP sharing. You would assign
>>> a VIP to one load balancer that uses one port, and then assign that same
>>> VIP to another load balancer but that load balancer is using a different
>>> port than the first one. How the backend implements it is an
>>> implementation detail (redudant, I know). In the case of HaProxy it would
>>> just add the second port to the same config that the first load balancer
>>> was using. In other drivers it might be different.
>>
>>
>>
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/dc3f860c/attachment-0001.html>
>
> ------------------------------
>
> Message: 20
> Date: Thu, 17 Apr 2014 21:33:13 -0500
> From: Brandon Logan <brandon.logan at rackspace.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaas] "Single call" API
> discussion
> Message-ID: <53508EE9.9020904 at rackspace.com>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Hello again Stephen,
>
> As usual, responses in-line!
>
>
> On 04/17/2014 08:39 PM, Stephen Balukoff wrote:
>> Hello German and Brandon!
>>
>> Responses in-line:
>>
>>
>> On Thu, Apr 17, 2014 at 3:46 PM, Brandon Logan
>> <brandon.logan at rackspace.com <mailto:brandon.logan at rackspace.com>> wrote:
>>
>> Stephen,
>> I have responded to your questions below.
>>
>>
>> On 04/17/2014 01:02 PM, Stephen Balukoff wrote:
>>> Howdy folks!
>>>
>>> Based on this morning's IRC meeting, it seems to me there's some
>>> contention and confusion over the need for "single call"
>>> functionality for load balanced services in the new API being
>>> discussed. This is what I understand:
>>>
>>> * Those advocating "single call" are arguing that this simplifies
>>> the API for users, and that it more closely reflects the users'
>>> experience with other load balancing products. They don't want to
>>> see this functionality necessarily delegated to an orchestration
>>> layer (Heat), because coordinating how this works across two
>>> OpenStack projects is unlikely to see success (ie. it's hard
>>> enough making progress with just one project). I get the
>>> impression that people advocating for this feel that their
>>> current users would not likely make the leap to Neutron LBaaS
>>> unless some kind of functionality or workflow is preserved that
>>> is no more complicated than what they currently have to do.
>> Another reason, which I've mentioned many times before and keeps
>> getting ignored, is because the more primitives you add the longer
>> it will take to provision a load balancer. Even if we relied on
>> the orchestration layer to build out all the primitives, it still
>> will take much more time to provision a load balancer than a
>> single create call provided by the API. Each request and response
>> has an inherent time to process. Many primitives will also have
>> an inherent build time. Combine this in an environment that
>> becomes more and more dense, build times will become very
>> unfriendly to end users whether they are using the API directly,
>> going through a UI, or going through an orchestration layer. This
>> industry is always trying to improve build/provisioning times and
>> there are no reasons why we shouldn't try to achieve the same goal.
>>
>>
>> Noted.
>>
>>> * Those (mostly) against the idea are interested in seeing the
>>> API provide primitives and delegating "higher level" single-call
>>> stuff to Heat or some other orchestration layer. There was also
>>> the implication that if "single-call" is supported, it ought to
>>> support both simple and advanced set-ups in that single call.
>>> Further, I sense concern that if there are multiple ways to
>>> accomplish the same thing supported in the API, this redundancy
>>> breeds complication as more features are added, and in developing
>>> test coverage. And existing Neutron APIs tend to expose only
>>> primitives. I get the impression that people against the idea
>>> could be convinced if more compelling reasons were illustrated
>>> for supporting single-call, perhaps other than "we don't want to
>>> change the way it's done in our environment right now."
>> I completely disagree with "we dont want to change the way it's
>> done in our environment right now". Our proposal has changed the
>> way our current API works right now. We do not have the notion of
>> primitives in our current API and our proposal included the
>> ability to construct a load balancer with primitives individually.
>> We kept that in so that those operators and users who do like
>> constructing a load balancer that way can continue doing so. What
>> we are asking for is to keep our users happy when we do deploy
>> this in a production environment and maintain a single create load
>> balancer API call.
>>
>>
>> There's certainly something to be said for having a less-disruptive
>> user experience. And after all, what we've been discussing is so
>> radical a change that it's close to starting over from scratch in many
>> ways.
> Yes, we assumed that starting from scratch would be the case at least as
> far as the API is concerned.
>>
>>>
>>> I've mostly stayed out of this debate because our solution as
>>> used by our customers presently isn't "single-call" and I don't
>>> really understand the requirements around this.
>>>
>>> So! I would love it if some of you could fill me in on this,
>>> especially since I'm working on a revision of the proposed API.
>>> Specifically, what I'm looking for is answers to the following
>>> questions:
>>>
>>> 1. Could you please explain what you understand single-call API
>>> functionality to be?
>> Single-call API functionality is a call that supports adding
>> multiple features to an entity (load balancer in this case) in one
>> API request. Whether this supports all features of a load
>> balancer or a subset is up for debate. I prefer all features to
>> be supported. Yes it adds complexity, but complexity is always
>> introduced by improving the end user experience and I hope a good
>> user experience is a goal.
>>
>>
>> Got it. I think we all want to improve the user experience.
>>
>>>
>>> 2. Could you describe the simplest use case that uses single-call
>>> API in your environment right now? Please be very specific--
>>> ideally, a couple examples of specific CLI commands a user might
>>> run, or API (along with specific configuration data) would be great.
>> http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Create_Load_Balancer-d1e1635.html
>>
>> This page has many different ways to configure a load balancer
>> with one call. It ranges from a simple load balancer to a load
>> balancer with a much more complicated configuration. Generally,
>> if any of those features are allowed on a load balancer then it is
>> supported through the single call.
>> I'm going to use example 4.10 as the "simplest" case I'm seeing there.
>> (Also because I severely dislike XML ;) )
>>
>>> 3. Could you describe the most complicated use case that your
>>> single-call API supports? Again, please be very specific here.
>> Same data can be derived from the link above.
>>
>>
>> Ok, I'm actually not seeing and complicated examples, but I'm guessing
>> that any attributes at the top of the page could be expanded on
>> according the the syntax described.
>>
>> Hmmm... one of the draw-backs I see with a "one-call" approach is
>> you've got to have really good syntax checking for everything right
>> from the start, or (if you plan to handle primitives one at a time) a
>> really solid roll-back strategy if anything fails or has problems,
>> cleaning up any primitives that might already have been created before
>> the whole call completes.
>>
>> The alternative is to not do this with primitives... but then I don't
>> see how that's possible either. (And certainly not easy to write tests
>> for: The great thing about small primitives is their methods tend to
>> be easier to unit test.)
>
> Yes, most of those features that are in that document for a load
> balancer can be done in the single create call.
> There is definitely more validation added on for this call. The
> roll-back strategy is solid but its really pretty trivial especially
> when the code is designed well enough. Thought would have to go into
> making the code the best, but I don't see this as a bad thing at all.
> I'd prefer the code base be thoroughly thought out and designed to
> handle complexities elegantly.
>
> If the API only allowed primitives, and creating a load balancer in one
> call is done through an orchestration layer, that orchestration layer
> would then be responsible for rolling back. This could cause problems
> because rolling back may involve performing actions that are not exposed
> through the API and so the orchestration layer has no way to do a proper
> rollback.
>>
>>>
>>> 4. What percentage of your customer base are used to using
>>> single-call functionality, and what percentage are used to
>>> manipulating primitives?
>> 100% but just like others it is the only way to create a load
>> balancer in our API. So this data doesn't mean much.
>>
>> Oh! One other question:
>>
>> 5. Should "single-call" stuff work for the lifecycle of a load
>> balancing service? That is to say, should "delete" functionality
>> also clean up all primitives associated with the service?
>>
>> How we were thinking was that it would just "detach" the
>> primitives from the load balancer but keep them available for
>> association with another load balancer. A user would only be able
>> to actually delete a primitive if it went through the root
>> primitive resource (i.e. /pools, /vips). However, this is
>> definitely up for debate and there are pros and cons to doing it
>> both ways. If the system completely deletes the primitives on the
>> deletion of the load balancer, then the system has to handle when
>> one of those primitives is being shared with another load balancer.
>>
>>
>> That makes sense-- but I think it could end in disaster for the poor
>> fool who isn't aware of that and makes "deploying load balancing
>> services" part of their continuous integration run. In very little
>> time, they'd have bazillions of abandoned primitives. At the same
>> time, it doesn't make sense to delete shared primitives, lest you
>> horribly break things for service B by nuking service A.
>>
>> So, going with the principle of least surprise here, it seems to me
>> that most people attempting a delete in a single call are going to
>> want all the non-shared primitives deleted (in a cascading effect)
>> unless they specify that they want the primitives preserved. It would
>> be great if there were a good way to set this as an option somehow
>> (though I know an HTTP DELETE doesn't allow for this kind of
>> flexibility-- maybe something appended to the URI if you want to
>> preserve non-shared primitives?)
>>
>> Deleting a primitive (ie. not using single-call) should clearly just
>> delete the primitive. Though, of course, it would be nice to specify
>> (using some flag) that the delete should be ignored if the primitive
>> happens to be shared.
>
> This is definitely an option as well and gives you the best of both
> worlds. As long as there is a way to detach a primitive from the load
> balancer, this would be a good option (especially if not doing the
> cascading delete can be specified). I do like the idea of it cleaning
> up everything for the user, just like German said when a user leaves it
> could make life easier for someone. Though, I think the best option is
> the one that makes the most sense from an end-user's perspective, in
> which case I would need to spend more time weighing the pros and cons of
> all three.
>>
>>
>>
>> On Thu, Apr 17, 2014 at 2:23 PM, Eichberger, German
>> <german.eichberger at hp.com <mailto:german.eichberger at hp.com>> wrote:
>>
>> Hi Stephen,
>>
>> 1. Could you please explain what you understand single-call API
>> functionality to be?
>>
>> From my perspective most of our users will likely create load
>> balancers via a web interface. Thought not necessary, having a
>> single API call makes it easier to develop the web interface.
>>
>> For the "expert" users I envision them to create a load balancer,
>> tweak with the settings, and when they arrive at the load balancer
>> they need to automate the creation of it. So if they have to
>> create several objects with multiple calls in a particular order
>> that is far too complicated and makes the learning curve very
>> steep from the GUI to the API. Hence, I like being able to do one
>> call and get a functioning load balancer. I like that aspect from
>> Jorge's proposal. On the other hand making a single API call
>> contain all possible settings might make it too complex for the
>> casual user who just wants some feature activated the GUI doesn't
>> provide....
>>
>>
>> That makes sense. Are you envisioning having a function in the GUI to
>> "show me the CLI or API command to do this" once a user has ticked all
>> the check-boxes they want and filled in the fields they want?
>>
>> For our power users-- I could see some of them occasionally updating
>> primitives. Probably the most common API command we see has to do with
>> users who have written their own scaling algorithms which add and
>> remove members from a pool as they see load on their app servers
>> change throughout the day (and spin up / shut down app server clones
>> in response).
>>
>> 2. Could you describe the simplest use case that uses single-call
>> API in your environment right now?
>>
>> Please be very specific-- ideally, a couple examples of specific
>> CLI commands a user might run, or API (along with specific
>> configuration data) would be great.
>>
>> http://libra.readthedocs.org/en/latest/api/rest/load-balancer.html#create-a-new-load-balancer
>>
>>
>>
>> Got it. Looks straight-forward.
>>
>> 5. Should "single-call" stuff work for the lifecycle of a load
>> balancing service? That is to say, should "delete" functionality
>> also clean up all primitives associated with the service?
>>
>> Yes. If a customer doesn't like a load balancer any longer one
>> call will remove it. This makes a lot of things easier:
>>
>> -GUI development -- one call does it all
>>
>> -Cleanup scripts: If a customer leaves us we just need to run
>> delete on a list of load balancers -- ideally if the API had a
>> call to delete all load balancers of a specific user/project that
>> would be even better J
>>
>> -The customer can tear down test/dev/etc. load balancer very quickly
>>
>>
>> What do you think of my "conditional cascading delete" idea (ie. nuke
>> everything but shared primitives) above for the usual / least surprise
>> case?
>>
>> Thanks,
>> Stephen
>>
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, Inc.
>> (800)613-4305 x807
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140417/a84f2dc6/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> End of OpenStack-dev Digest, Vol 24, Issue 56
> *********************************************
More information about the OpenStack-dev
mailing list