[openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Eugene Nikanorov enikanorov at mirantis.com
Tue Jun 3 03:53:59 UTC 2014


> Are you saying there is absolutely no API documentation outside of the
actual code like this (anywhere within Neutron? Or OpenStack?)
There's a separate project for tracking API documentation:
https://launchpad.net/openstack-api-site
But I don't think it suits the need discussed here.

> but doesn't provide any explanation as to "why" it's being done in
that particular way
And why is that needed? I thought you needed an API definition that would
be the basis for coding at the code sprint, which implies that participants
already know why the API was designed this way.
I would also argue that answering 'why?' belongs in discussion; even
documentation doesn't need to do that.

> Further, it's almost impossible to have a high-level discussion of
intended functionality when looking at raw, undocumented code.
Well, that's what we usually do: discuss design in reviews while looking
at raw code. Of course a high-level description helps, but we're talking
about an API definition, right? Which means the high-level questions were
already discussed, and all we need is a set of attributes and relationships.

> Isn't an API revision (especially a major one like this) just another
specification?
Right, I'm just saying that no format of API revision that can be put into
an .rst file will be better than an actual piece of code defining the API.
It's fine, of course, to use specs to post the API definition in such a
format, but why not post code to neutron's gerrit directly?
I won't be reading yet another repo for a detailed API definition; I'll be
reading the code, and I believe most other developers do the same. It's
OpenStack: you have to read code to understand what's happening. Thanks to
Python, the code is good documentation in itself.
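For illustration, here is a minimal, hypothetical sketch of the kind of definition meant here: a neutron extension declares its REST resources as a plain attribute map, in the style of RESOURCE_ATTRIBUTE_MAP in neutron/extensions/loadbalancer.py. The resource and attribute names below are illustrative only, not a proposed API.

```python
# Hypothetical attribute map in the neutron extension style: each resource
# maps attribute names to a dict of policy flags (allow_post/allow_put),
# validators, defaults, and visibility. Names here are examples, not the
# final LBaaS object model.
RESOURCE_ATTRIBUTE_MAP = {
    'loadbalancers': {
        'id': {'allow_post': False, 'allow_put': False,
               'validate': {'type:uuid': None},
               'is_visible': True, 'primary_key': True},
        'name': {'allow_post': True, 'allow_put': True,
                 'validate': {'type:string': None},
                 'default': '', 'is_visible': True},
        'vip_address': {'allow_post': True, 'allow_put': False,
                        'default': None, 'is_visible': True},
        'admin_state_up': {'allow_post': True, 'allow_put': True,
                           'default': True, 'convert_to': bool,
                           'is_visible': True},
    },
    'listeners': {
        'id': {'allow_post': False, 'allow_put': False,
               'validate': {'type:uuid': None},
               'is_visible': True, 'primary_key': True},
        'loadbalancer_id': {'allow_post': True, 'allow_put': False,
                            'validate': {'type:uuid': None},
                            'is_visible': True},
        'protocol_port': {'allow_post': True, 'allow_put': False,
                          'convert_to': int, 'is_visible': True},
    },
}
```

A reviewer can read the whole API surface from such a dict: which attributes exist, which are settable on POST vs PUT, and how they are validated.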

I think you need to look closer at how APIs are defined in neutron. When I
suggest putting it in neutron's gerrit, I'm not talking about working code,
just the definition, in the format most neutron developers are already used
to.
Defining it in a different format just won't buy us much, IMO, so why
bother?
And yes, the API definition may change depending on the outcome of the
high-level discussion, but since that results in changing text only (no
testing overhead), it's no big deal; it's exactly the same as fixing the
definition anywhere else.

Thanks,
Eugene.


On Tue, Jun 3, 2014 at 2:36 AM, Stephen Balukoff <sbalukoff at bluebox.net>
wrote:

> Eugene:
>
> Are you saying there is absolutely no API documentation outside of the
> actual code like this (anywhere within Neutron? Or OpenStack?)  Again, one
> of the major problems I have with suggestions like the one you just made is
> that code is *not* documentation. Code is a great way of expressing
> specific details and the "how" of the process it is attempting to
> implement, but it doesn't provide any explanation as to "why" it's being
> done in that particular way. Further, it's almost impossible to have a
> high-level discussion of intended functionality when looking at raw,
> undocumented code. And even if any such discussions happen in the review
> process, it's impossible for someone new to get up to speed unless the gist
> of what was discussed is captured in some unified place somewhere.
>
> I had thought that neutron-specs was a good exception to this: Finally a
> place where some actual mostly up-to-date documentation lives that's being
> tracked in revision control, and where people can effectively discuss
> concerns at a high level. Isn't an API revision (especially a major one
> like this) just another specification?
>
> The dysfunction here isn't going to get any better unless we change
> something, eh. I thought I was pretty clear about this at the summit! Argh.
>
> I also can't help but notice that the example you linked is part of a
> larger gerrit review topic that includes all of the loadbalancer code (as
> of about a month ago?) If you're asking me to update the code containing
> the API class definitions, and then ensure that this actually works with
> other parts of the code in the same review, you might as well be asking me
> to write this entire (major) revision to LBaaS that we've been discussing
> myself! I don't think that's what you're asking me to do here, but then
> that leaves me really confused as to what you expect to see me produce?
>
> If you're really just looking for someone to update that (single)
> extension file with what we've discussed on the mailing list, at the
> summit, and in the blueprint that Brandon has been working on (minus the
> stuff we're excluding for now, like TLS and L7?) then I can do this, but
> it'll probably take me about a week, which doesn't leave much time for
> discussion before the hack-a-thon in two weeks. Also: I've made no secret
> of the fact that I'm a pretty terrible python coder. :P  It seems to me
> that if you're really after code here, someone else might be able to
> produce this faster than me.
>
> I thought I was going to be producing a specification (which would be
> discussed / reviewed), and then we could get multiple people involved
> working in parallel on the code (which would be reviewed but probably see
> fewer drastic changes if the code is in accordance with the discussed
> specification). I hate being a bottleneck to this process.
>
> One other note: Isn't there a parallel discussion happening right now
> about exactly how to fit in the proposed changes (ie. as a "new version" of
> the LBaaS extension entirely)? And wouldn't the results of this discussion
> potentially render any code changes developed until now moot?
> (Yet another reason not to do "high level" discussions at the code level.)
>
> Stephen
>
>
>
> On Mon, Jun 2, 2014 at 2:35 PM, Eugene Nikanorov <enikanorov at mirantis.com>
> wrote:
>
>> I'm actually talking about patch for neutron project itself.
>> That should be either something similar to
>> neutron/extensions/loadbalancer.py or a patch for this file.
>> Anyway, the API docs that I've seen in neutron-specs just copy the REST
>> resource definitions from the proposed code;
>> that's why I'm suggesting we use the code as the API reference.
>>
>> If you look closely at how the API is defined, you'll see it's quite
>> self-explanatory.
>>
>> Thanks,
>> Eugene.
>>
>> PS. Just an example:
>> https://review.openstack.org/#/c/60207/17/neutron/extensions/loadbalancer.py
>>
>>
>> On Tue, Jun 3, 2014 at 12:03 AM, Stephen Balukoff <sbalukoff at bluebox.net>
>> wrote:
>>
>>> Hi Eugene,
>>>
>>> Sounds good. Should I put it in neutron-specs/specs/juno or somewhere
>>> else?
>>>
>>> Thanks,
>>> Stephen
>>>
>>>
>>>
>>> On Mon, Jun 2, 2014 at 12:45 PM, Eugene Nikanorov <
>>> enikanorov at mirantis.com> wrote:
>>>
>>>> > Where do we actually keep the authoritative source for API
>>>> documentation?
>>>> I think it makes sense to actually put it in the code on gerrit and
>>>> discuss the API details there; it might save another step.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Jun 2, 2014 at 10:29 PM, Stephen Balukoff <
>>>> sbalukoff at bluebox.net> wrote:
>>>>
>>>>> Hi Brandon,
>>>>>
>>>>> Apologies-- this slipped my mind last week. In any case yes, unless
>>>>> you've already got something in the works, I'd be happy to take this on.
>>>>> But I will need a little direction here:  Where do we actually keep the
>>>>> authoritative source for API documentation?  Should I just make this a text
>>>>> document that lives in the neutron-specs repository?
>>>>>
>>>>> Also, I'm assuming we want to start from where we left off in our
>>>>> mailing list / Google doc discussion, with the changes from the summit
>>>>> (i.e., loadbalancer as root) made part of this specification?
>>>>>
>>>>> Thanks,
>>>>> Stephen
>>>>>
>>>>>
>>>>> On Fri, May 30, 2014 at 4:10 PM, Brandon Logan <
>>>>> brandon.logan at rackspace.com> wrote:
>>>>>
>>>>>> Stephen,
>>>>>>
>>>>>> Were you still planning on doing the second blueprint that will
>>>>>> implement the new API calls?
>>>>>>
>>>>>> Thanks,
>>>>>> Brandon
>>>>>>
>>>>>> On Thu, 2014-05-29 at 22:36 -0700, Bo Lin wrote:
>>>>>> > Hi Brandon and Stephen,
>>>>>> > Thanks a lot for your responses; now I understand.
>>>>>> >
>>>>>> >
>>>>>> > Thanks!
>>>>>> > ---Bo
>>>>>> >
>>>>>> >
>>>>>> ______________________________________________________________________
>>>>>> > From: "Brandon Logan" <brandon.logan at RACKSPACE.COM>
>>>>>> > To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>> > <openstack-dev at lists.openstack.org>
>>>>>> > Sent: Friday, May 30, 2014 1:17:57 PM
>>>>>> > Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions
>>>>>> in
>>>>>> > object model refactor blueprint
>>>>>> >
>>>>>> >
>>>>>> > Hi Bo,
>>>>>> > Sorry, I forgot to respond but yes what Stephen said lol :)
>>>>>> >
>>>>>> >
>>>>>> ______________________________________________________________________
>>>>>> > From: Stephen Balukoff [sbalukoff at bluebox.net]
>>>>>> > Sent: Thursday, May 29, 2014 10:42 PM
>>>>>> > To: OpenStack Development Mailing List (not for usage questions)
>>>>>> > Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions
>>>>>> in
>>>>>> > object model refactor blueprint
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > Hi Bo--
>>>>>> >
>>>>>> >
>>>>>> > Haproxy is able to have IPv4 front-ends with IPv6 back-ends (and
>>>>>> > vice versa) because it terminates the client's TCP connection and
>>>>>> > initiates a separate one to the back-end server. The front-end
>>>>>> > client thinks haproxy is the server, and the back-end server thinks
>>>>>> > haproxy is the client. In practice, therefore, it's totally possible
>>>>>> > to have an IPv6 front-end and an IPv4 back-end with haproxy (for
>>>>>> > both HTTP and generic TCP service types).
>>>>>> >
>>>>>> >
>>>>>> > I think this is similarly true for vendor appliances that are
>>>>>> > capable of doing IPv6 and that also initiate new TCP connections
>>>>>> > from the appliance to the back-end.
>>>>>> >
>>>>>> >
>>>>>> > Obviously, the above won't work if your load balancer
>>>>>> > implementation is doing something "transparent" at the network
>>>>>> > layer, like LVS load balancing.
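To make the two-connections point concrete, here is a minimal Python sketch of the proxy model described above: an IPv6 frontend socket relaying to an IPv4-only backend over a second, independent TCP connection. This is an illustration only, not how haproxy is implemented, and it assumes a host with an IPv6 loopback (::1); the single-exchange relay is deliberately simplified.

```python
import socket
import threading

def start_ipv4_backend():
    """A trivial IPv4-only echo server standing in for a pool member."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)   # reply, then hang up
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def start_proxy(backend_port):
    """IPv6 frontend that relays one request/response to an IPv4 backend
    over a *second* TCP connection: the frontend sees the proxy as the
    server, the backend sees the proxy as the client."""
    front = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    front.bind(("::1", 0))
    front.listen(1)

    def relay():
        client, _ = front.accept()                       # connection 1
        back = socket.create_connection(("127.0.0.1",    # connection 2
                                         backend_port))
        back.sendall(client.recv(1024))
        client.sendall(back.recv(1024))
        client.close()
        back.close()

    threading.Thread(target=relay, daemon=True).start()
    return front.getsockname()[1]

backend_port = start_ipv4_backend()
front_port = start_proxy(backend_port)

# An IPv6-only client talks to the proxy; the backend is IPv4-only.
c = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
c.connect(("::1", front_port))
c.sendall(b"hello")
reply = b""
while len(reply) < len(b"echo:hello"):
    chunk = c.recv(1024)
    if not chunk:
        break
    reply += chunk
c.close()
```

Because the address families of the two sockets are independent, nothing in this model requires IPv6-to-IPv4 NAT or port forwarding at the network layer.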
>>>>>> >
>>>>>> >
>>>>>> > Stephen
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On Wed, May 28, 2014 at 9:14 PM, Bo Lin <linb at vmware.com> wrote:
>>>>>> >         Hi Brandon,
>>>>>> >
>>>>>> >         I have one question. If we support an M:N LoadBalancer to
>>>>>> >         Listener relationship, then one listener with an IPv4
>>>>>> >         member backend may be shared by a loadbalancer instance
>>>>>> >         with an IPv6 frontend. Does that mean we also need to
>>>>>> >         provide IPv6-to-IPv4 port forwarding in LBaaS products?
>>>>>> >         Do iptables or most LBaaS products, such as haproxy,
>>>>>> >         provide such a function? Or am I just wrong on some
>>>>>> >         technical details of these LBaaS products.
>>>>>> >
>>>>>> >
>>>>>> >         Thanks!
>>>>>> >
>>>>>> >
>>>>>> ______________________________________________________________
>>>>>> >         From: "Vijay B" <os.vbvs at gmail.com>
>>>>>> >
>>>>>> >         To: "OpenStack Development Mailing List (not for usage
>>>>>> >         questions)" <openstack-dev at lists.openstack.org>
>>>>>> >
>>>>>> >         Sent: Thursday, May 29, 2014 6:18:42 AM
>>>>>> >
>>>>>> >         Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered
>>>>>> >         questions in object model refactor blueprint
>>>>>> >
>>>>>> >
>>>>>> >         Hi Brandon!
>>>>>> >
>>>>>> >
>>>>>> >         Please see inline..
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >         On Wed, May 28, 2014 at 12:01 PM, Brandon Logan
>>>>>> >         <brandon.logan at rackspace.com> wrote:
>>>>>> >                 Hi Vijay,
>>>>>> >
>>>>>> >                 On Tue, 2014-05-27 at 16:27 -0700, Vijay B wrote:
>>>>>> >                 > Hi Brandon,
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 > The current reviews of the schema itself are
>>>>>> >                 absolutely valid and
>>>>>> >                 > necessary, and must go on. However, the place of
>>>>>> >                 implementation of
>>>>>> >                 > this schema needs to be clarified. Rather than
>>>>>> make
>>>>>> >                 any changes
>>>>>> >                 > whatsoever to the existing neutron db schema for
>>>>>> >                 LBaaS, this new db
>>>>>> >                 > schema outlined needs to be implemented for a
>>>>>> >                 separate LBaaS core
>>>>>> >                 > service.
>>>>>> >                 >
>>>>>> >
>>>>>> >                 Are you suggesting a separate lbaas database from
>>>>>> >                 the neutron database? If not, then I could use some
>>>>>> >                 clarification. If so, I'd advocate against that
>>>>>> >                 right now because there are just too many things
>>>>>> >                 that would need to change. Later, when LBaaS
>>>>>> >                 becomes its own service, then yeah, that will need
>>>>>> >                 to happen.
>>>>>> >
>>>>>> >
>>>>>> >         v> OK, so as I understand it, in this scheme there is no
>>>>>> >         new schema or db; there will be a new set of tables
>>>>>> >         resident in the neutron_db schema itself, alongside the
>>>>>> >         legacy lbaas tables. Let's consider a rough view of the
>>>>>> >         implementation.
>>>>>> >
>>>>>> >
>>>>>> >         Layer 1 - We'll have a new lbaas v3.0 API in neutron, with
>>>>>> >         the current lbaas service plugin having to support it in
>>>>>> >         addition to the legacy lbaas extensions it already
>>>>>> >         supports. We'll need new code to process the v3.0 lbaas
>>>>>> >         API no matter what our approach is.
>>>>>> >         Layer 2 - Management code that takes care of updating the
>>>>>> >         db with entities in pending_create, invoking the right
>>>>>> >         provider driver, choosing/scheduling the plugin drivers or
>>>>>> >         the agent drivers, invoking them, getting the results, and
>>>>>> >         updating the db.
>>>>>> >         Layer 3 - The drivers themselves (either plugin drivers,
>>>>>> >         like the HAProxy namespace driver or Netscaler, or plugin
>>>>>> >         drivers plus agent drivers).
>>>>>> >
>>>>>> >
>>>>>> >         While having the new tables sit alongside the legacy
>>>>>> >         tables is one way to implement the changes, I don't see
>>>>>> >         how this approach leads to fewer changes overall. Layer 2
>>>>>> >         above is where the changes can get complicated. Also, it
>>>>>> >         will be confusing to have two sets of lbaas tables in the
>>>>>> >         same schema.
>>>>>> >
>>>>>> >
>>>>>> >         I don't want a separate lbaas database under neutron, and
>>>>>> >         neither do I want it within neutron. I'm not suggesting
>>>>>> >         that we create a db schema alone; I'm saying we must build
>>>>>> >         it with the new LBaaS service (just like neutron itself
>>>>>> >         when it got created). If we don't do this now, we'll end
>>>>>> >         up reimplementing the logic implemented in neutron for the
>>>>>> >         new lbaas v3.0 API all over again for the new core LBaaS
>>>>>> >         service. We'd rather do it in the new one in one effort.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >         I could be missing some constraints that drive the former
>>>>>> >         approach; please help me understand them. I don't want to
>>>>>> >         discount any approach without thorough consideration.
>>>>>> >         Right now, it looks to me like this approach is being
>>>>>> >         taken only to accommodate the HAProxy namespace driver.
>>>>>> >         Really, that is the only driver which seems to be deeply
>>>>>> >         intertwined with neutron in the way it uses namespaces.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >                 >
>>>>>> >                 > What we should be providing in neutron is a
>>>>>> >                 > switch (a global conf) that can be set to
>>>>>> >                 > instruct neutron to do one of two things:
>>>>>> >                 >
>>>>>> >                 > 1. Use the existing neutron LBaaS API, with the
>>>>>> >                 > backend being the existing neutron LBaaS db
>>>>>> >                 > schema. This is the status quo.
>>>>>> >                 > 2. Use the existing neutron LBaaS API, with the
>>>>>> >                 > backend being the new LBaaS service. This will
>>>>>> >                 > not invoke neutron's current LBaaS code at all;
>>>>>> >                 > rather, it will call into a new set of proxy
>>>>>> >                 > "backend" code in neutron that translates the
>>>>>> >                 > older LBaaS API calls into the newer REST calls
>>>>>> >                 > serviced by the new LBaaS service, which will
>>>>>> >                 > record these details accordingly in its new db
>>>>>> >                 > schema. As long as the request and response
>>>>>> >                 > objects of legacy neutron LBaaS calls are
>>>>>> >                 > preserved as-is, there should be no issues.
>>>>>> >                 > Writing unit tests should also be comparatively
>>>>>> >                 > straightforward, old functional tests can be
>>>>>> >                 > retained, and newer ones will not clash with
>>>>>> >                 > legacy code. Legacy code itself will work, having
>>>>>> >                 > not been touched at all. The blueprint for the db
>>>>>> >                 > schema that you have referenced
>>>>>> >                 > (https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst)
>>>>>> >                 > should be implemented for this new LBaaS service,
>>>>>> >                 > post reviews.
>>>>>> >                 >
>>>>>> >
>>>>>> >                 I think the point of this blueprint is to make the
>>>>>> >                 API and object model less confusing for the
>>>>>> >                 Neutron LBaaS service plugin. I think it's too
>>>>>> >                 early to create an LBaaS service because we have
>>>>>> >                 not yet cleaned up the tight integration points
>>>>>> >                 between Neutron LBaaS and Neutron. Creating a new
>>>>>> >                 service would require only API interactions
>>>>>> >                 between Neutron and this LBaaS service, which is
>>>>>> >                 currently not possible due to these tight
>>>>>> >                 integration points.
>>>>>> >
>>>>>> >
>>>>>> >         v> The tight integration points between LBaaS and neutron
>>>>>> >         that I see are:
>>>>>> >
>>>>>> >         1. The usage of namespaces.
>>>>>> >         2. L2 and L3 plumbing within the namespaces, and tracking
>>>>>> >         them in the neutron and lbaas tables.
>>>>>> >         3. The plugin driver and agent driver scheduling
>>>>>> >         framework/mechanism for LB drivers.
>>>>>> >         4. The way drivers directly update the neutron db, which I
>>>>>> >         think makes for a lack of clear functional demarcation.
>>>>>> >
>>>>>> >
>>>>>> >         Regardless of how we use the new API and db model, will
>>>>>> >         namespaces be used? If they still need to be supported, the
>>>>>> >         tight integration isn't going to go anywhere.
>>>>>> >
>>>>>> >
>>>>>> >         This is why I think it will be best to keep the legacy
>>>>>> >         drivers within neutron, and not give newer deployments the
>>>>>> >         option to use them concurrently with the new lbaas core
>>>>>> >         service. The changes will be smaller this way because we
>>>>>> >         won't touch legacy code.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >         While I fully understand that we're trying to change the
>>>>>> >         way we look at lbaas deployments, and the db object model
>>>>>> >         is an effort towards that, we need to ensure that the
>>>>>> >         execution is kept elegant as well. For drivers for LB
>>>>>> >         solutions like F5 or Netscaler, these pain points can be
>>>>>> >         done away with, because they do their own network
>>>>>> >         provisioning and we keep track of them only to clean up
>>>>>> >         (especially for virtual appliance solutions).
>>>>>> >
>>>>>> >
>>>>>> >         It will, however, mean that we'll have the additional
>>>>>> >         task of implementing the new core service before we can
>>>>>> >         use the new db object model. I say we should just go for
>>>>>> >         that effort and make it happen.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >                 >
>>>>>> >                 > The third option would be to turn off the
>>>>>> >                 > neutron LBaaS API and use the new LBaaS core
>>>>>> >                 > service directly, but for this we can simply
>>>>>> >                 > disable neutron lbaas and don't need a config
>>>>>> >                 > parameter in neutron.
>>>>>> >                 >
>>>>>> >                 > Implementing this db schema within neutron
>>>>>> >                 > instead will be not just complicated, but a huge
>>>>>> >                 > effort that will go to waste in the future once
>>>>>> >                 > the new LBaaS service is implemented. Also,
>>>>>> >                 > migration will unnecessarily repeat the same
>>>>>> >                 > steps needed to go from legacy neutron LBaaS to
>>>>>> >                 > the new core LBaaS service (twice, in
>>>>>> >                 > succession) in case for any reason the version
>>>>>> >                 > goes from legacy neutron LBaaS -> new neutron
>>>>>> >                 > LBaaS -> new LBaaS core service.
>>>>>> >
>>>>>> >                 I totally agree that this is technical debt, but
>>>>>> >                 I believe it is the best option we have right now,
>>>>>> >                 since LBaaS needs to live in the Neutron code and
>>>>>> >                 process because of the tight integration points.
>>>>>> >                 Since this object model refactor has been slated
>>>>>> >                 for Juno, and these tight integration points may
>>>>>> >                 or may not be cleaned up by Juno, staying within
>>>>>> >                 Neutron seems to be the best option right now.
>>>>>> >
>>>>>> >
>>>>>> >         v> As I described above, I think the tight integration
>>>>>> >         points are best kept in legacy code and not carried over
>>>>>> >         to the new implementation. The cleanest way to do it
>>>>>> >         would be to clearly demarcate neutron-related operations
>>>>>> >         (L2/L3) from LBaaS. But I am keen to get your views on
>>>>>> >         what the difficult integration points are, so that I get
>>>>>> >         a better understanding of the motivations behind keeping
>>>>>> >         the new tables in neutron.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >         Regards,
>>>>>> >         Vijay
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 > Going forward, the legacy neutron LBaaS API can
>>>>>> >                 > be deprecated, and the new API that directly
>>>>>> >                 > contacts the new LBaaS core service can be used.
>>>>>> >                 >
>>>>>> >                 > We have discussed the above architecture
>>>>>> >                 > previously, but outside of the ML, and a draft
>>>>>> >                 > of the blueprint for this new LBaaS core service
>>>>>> >                 > is underway; it is a collation of all the
>>>>>> >                 > discussions among a large number of LBaaS
>>>>>> >                 > engineers, including yourself, during the
>>>>>> >                 > summit. I will be posting it for review within a
>>>>>> >                 > couple of days, as planned.
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 > Regards,
>>>>>> >                 > Vijay
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 > On Tue, May 27, 2014 at 12:32 PM, Brandon Logan
>>>>>> >                 > <brandon.logan at rackspace.com> wrote:
>>>>>> >                 >         Referencing this blueprint:
>>>>>> >                 >
>>>>>> >
>>>>>> https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst
>>>>>> >                 >
>>>>>> >                 >         Anyone who has suggestions on possible
>>>>>> >                 >         issues, or can answer some of these
>>>>>> >                 >         questions, please respond.
>>>>>> >                 >
>>>>>> >                 >         1. LoadBalancer to Listener
>>>>>> >                 >         relationship: M:N vs 1:N
>>>>>> >                 >         The main reason we went with M:N was so
>>>>>> >                 >         IPv6 could use the same listener as
>>>>>> >                 >         IPv4. However, this can be accomplished
>>>>>> >                 >         by the user just creating a second
>>>>>> >                 >         listener and pool with the same
>>>>>> >                 >         configuration. That will end up being a
>>>>>> >                 >         bad user experience once the listener
>>>>>> >                 >         and pool configuration starts getting
>>>>>> >                 >         complex (adding in TLS, health
>>>>>> >                 >         monitors, SNI, etc.). A good reason not
>>>>>> >                 >         to do M:N is that the logic might get
>>>>>> >                 >         complex when dealing with status. I'd
>>>>>> >                 >         like to get people's opinions on
>>>>>> >                 >         whether we should do M:N or just 1:N.
>>>>>> >                 >         Another option is to implement 1:N
>>>>>> >                 >         right now and later implement M:N in
>>>>>> >                 >         another blueprint if it is decided that
>>>>>> >                 >         the user experience suffers greatly.
>>>>>> >                 >
>>>>>> >                 >         My opinion: I like the idea of leaving
>>>>>> >                 >         it to another blueprint to implement.
>>>>>> >                 >         However, we would need to watch out for
>>>>>> >                 >         any major architecture changes in the
>>>>>> >                 >         time it is not implemented that could
>>>>>> >                 >         make this more difficult than it needs
>>>>>> >                 >         to be.
>>>>>> >                 >
>>>>>> >                 >         2. Pool to Health Monitor
>>>>>> >                 >         relationship: 1:N vs 1:1
>>>>>> >                 >         Currently, I believe this is 1:N;
>>>>>> >                 >         however, it was suggested by Susanne to
>>>>>> >                 >         deprecate this in favor of 1:1, and
>>>>>> >                 >         Kyle agreed. Are there any objections
>>>>>> >                 >         to changing to 1:1?
>>>>>> >                 >
>>>>>> >                 >         My opinion: I'm for 1:1 as long as
>>>>>> >                 >         there aren't any major reasons why it
>>>>>> >                 >         needs to be 1:N.
>>>>>> >                 >
>>>>>> >                 >         3. Does the Pool object need a status
>>>>>> >                 >         field now that it is a purely logical
>>>>>> >                 >         object?
>>>>>> >                 >
>>>>>> >                 >         My opinion: I don't think it needs the
>>>>>> >                 >         status field. I think the LoadBalancer
>>>>>> >                 >         object may be the only thing that needs
>>>>>> >                 >         a status, other than the pool members
>>>>>> >                 >         for health monitoring. I might be
>>>>>> >                 >         corrected on this, though.
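The three questions above can be sketched as a plain-Python object model, assuming the 1:N (LoadBalancer to Listener) and 1:1 (Pool to HealthMonitor) options are chosen and that status lives only on the LoadBalancer. All class and field names here are illustrative, not the blueprint's final schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Listener:
    id: str
    protocol_port: int
    loadbalancer_id: Optional[str] = None  # 1:N -- exactly one parent LB

@dataclass
class LoadBalancer:
    id: str
    vip_address: str
    status: str = 'PENDING_CREATE'  # status lives on the root object
    listeners: list = field(default_factory=list)

    def add_listener(self, listener: Listener) -> None:
        # Under 1:N, a listener may belong to only one load balancer.
        if listener.loadbalancer_id is not None:
            raise ValueError('listener already attached to a loadbalancer')
        listener.loadbalancer_id = self.id
        self.listeners.append(listener)

@dataclass
class HealthMonitor:
    id: str
    type: str = 'HTTP'

@dataclass
class Pool:
    id: str
    # 1:1 -- at most one monitor per pool; note there is no status field,
    # since the pool is a purely logical object.
    healthmonitor: Optional[HealthMonitor] = None
```

Under M:N, `Listener.loadbalancer_id` would instead become a list (or an association table in the db), which is exactly where the status-reporting logic gets complicated.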
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >                 _______________________________________________
>>>>>> >                 >         OpenStack-dev mailing list
>>>>>> >                 >         OpenStack-dev at lists.openstack.org
>>>>>> >                 >
>>>>>> >
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>> >                 >
>>>>>> >                 >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > Stephen Balukoff
>>>>>> > Blue Box Group, LLC
>>>>>> > (800)613-4305 x807
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Stephen Balukoff
>>>>> Blue Box Group, LLC
>>>>> (800)613-4305 x807
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Stephen Balukoff
>>> Blue Box Group, LLC
>>> (800)613-4305 x807
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
>
>