[openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)
Trevor Vardeman
trevor.vardeman at RACKSPACE.COM
Thu May 1 16:44:01 UTC 2014
Vijay, I'm following suit: Replies inline :D
On Thu, 2014-05-01 at 16:11 +0000, Vijay Venkatachalam wrote:
> Thanks Trevor. Replies inline!
>
> > -----Original Message-----
> > From: Trevor Vardeman [mailto:trevor.vardeman at RACKSPACE.COM]
> > Sent: Thursday, May 1, 2014 7:30 PM
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)
> >
> > Vijay,
> >
> > Comments in-line, hope I can clear some of this up for you :)
> >
> > -Trevor
> >
> > On Thu, 2014-05-01 at 13:16 +0000, Vijay Venkatachalam wrote:
> > > I am expecting to be more active in the community on the LBaaS front.
> > >
> > > Maybe reviewing and picking up a few items to work on as well.
> > >
> > > I had a look at the proposal. Seeing the Single & Multi-Call approach for
> > > each workflow makes it easy to understand.
> > >
> > > Thanks for the clear documentation; it is a pleasure to review :-). I was not
> > > allowed to comment on the Workflow doc; can you enable comments?
> > >
> > > The single-call approach essentially creates the global pool/VIP. Once a
> > > VIP/pool is created using the single call, is it reusable in a multi-call?
> > > For example: can a pool created for an HTTP endpoint/load balancer be used
> > > in an HTTPS endpoint LB where termination occurs as well?
> >
> > From what I remember discussing with my team (being a developer under
> > Jorge's umbrella), there is a 1-M relationship between load balancer and
> > pool. Also, the protocol is specified on the Load Balancer, not the pool,
> > meaning you could expose TCP traffic via one Load Balancer to a pool, and
> > HTTP traffic via another Load Balancer to that same pool.
> > This is easily modified such
> >
>
> Ok, thanks! Should there be a separate use case covering this (if it is not already present)?
This is already reflected in at least one use case. I've been
documenting the "solutions," so to speak, to many of the use cases with
regard to the Rackspace API proposal. If you'd like to see some of
those examples (keep in mind they are a WIP), here is a link to them:
https://drive.google.com/#folders/0B2r4apUP7uPwRVc2MzQ2MHNpcE0
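
To make the shared-pool idea concrete, here is a rough python-requests
sketch. The API root, auth token, and payload field names (pool_id,
etc.) are placeholders of my own, not settled parts of the proposal;
the point is just that the protocol lives on the Load Balancer while
the pool is shared:

    import requests

    BASE = "http://lbaas.example.com/v1"   # hypothetical API root
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # One pool, created once; note there is no protocol on the pool itself.
    resp = requests.post(BASE + "/pools",
                         json={"pool": {"name": "web-nodes"}},
                         headers=HEADERS)
    pool_id = resp.json()["pool"]["id"]

    # Two Load Balancers, each carrying its own protocol, both pointing
    # at the same pool.
    for name, protocol in (("lb-tcp", "TCP"), ("lb-http", "HTTP")):
        requests.post(BASE + "/loadbalancers",
                      json={"loadbalancer": {"name": name,
                                             "protocol": protocol,
                                             "pool_id": pool_id}},
                      headers=HEADERS)
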
>
> > >
> > > Also, would it be useful to include PUT as a single call? I see PUT only for
> > > POOL, not for LB.
> > > A user who started with a single-call POST might like to continue to use the
> > > same approach for PUT/update as well.
> >
> > On the fifth page of the document found here:
> > https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZDULjG9bTmWyXe-zo/edit
> > there is a PUT detailed for a Load Balancer. There should be support for PUT
> > on any parent object, assuming the fields one would update are not read-only.
> >
>
> My mistake, I didn't explain properly.
> I see the PUT of a load balancer containing only load balancer properties.
> I was wondering if it makes sense for the PUT of a LOADBALANCER to also
> contain pool + members, similar to the POST payload.
For this API proposal, we wanted updates of properties to be single
requests against the resource itself, while the POST context covers
creating resources and attaching them to one another. To update a pool
or its members you would use the "/pools" or "/pools/{pool_id}/members"
endpoints accordingly. Also, a POST to
"/loadbalancers/{loadbalancer_id}/pools" will create/attach a pool to
the Load Balancer; however, PUT would not be supported at this endpoint.
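
To show that split in request shape, here is the same thing as a rough
python-requests sketch. The pool and member paths come from the
proposal text above, but the API root, IDs, and payload fields are
placeholders I'm assuming for illustration:

    import requests

    BASE = "http://lbaas.example.com/v1"   # hypothetical API root
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}
    LB_ID, POOL_ID, MEMBER_ID = "lb-uuid", "pool-uuid", "member-uuid"

    # PUT on a Load Balancer touches only the Load Balancer's own fields.
    requests.put(BASE + "/loadbalancers/" + LB_ID,
                 json={"loadbalancer": {"name": "renamed-lb"}},
                 headers=HEADERS)

    # Attaching an existing pool is a POST to the sub-resource, not a PUT.
    requests.post(BASE + "/loadbalancers/" + LB_ID + "/pools",
                  json={"pool": {"id": POOL_ID}},
                  headers=HEADERS)

    # Pools and members are updated through their own endpoints.
    requests.put(BASE + "/pools/" + POOL_ID + "/members/" + MEMBER_ID,
                 json={"member": {"weight": 2}},
                 headers=HEADERS)
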
>
> Also, will the delete of a load balancer DELETE the pool/VIP if they are no
> longer referenced by another load balancer?
>
> Or do they have to be cleaned up separately?
Following the concept of the "Neutron port," we in essence "detach"
rather than remove the references, leaving the extra pieces intact but
disconnected from the Load Balancer. One would delete the Load
Balancer and still be able to retrieve the VIP or pool from their
root-resource references. This would allow someone to delete a
specific Load Balancer and then create an entirely new one while
referencing the original pool and VIP.
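
As a rough sketch of that lifecycle (again with python-requests; the
/vips path, API root, IDs, and payload fields are my own assumptions
for illustration, not the final proposal):

    import requests

    BASE = "http://lbaas.example.com/v1"   # hypothetical API root
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # Deleting the Load Balancer detaches, rather than deletes, its pieces.
    requests.delete(BASE + "/loadbalancers/old-lb-uuid", headers=HEADERS)

    # The pool and VIP remain retrievable as root resources...
    pool = requests.get(BASE + "/pools/pool-uuid", headers=HEADERS).json()
    vip = requests.get(BASE + "/vips/vip-uuid", headers=HEADERS).json()

    # ...and can be referenced when standing up a replacement Load Balancer.
    requests.post(BASE + "/loadbalancers",
                  json={"loadbalancer": {"name": "replacement-lb",
                                         "protocol": "HTTP",
                                         "pool_id": pool["pool"]["id"],
                                         "vip_id": vip["vip"]["id"]}},
                  headers=HEADERS)
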
>
> > >
> > > Thanks,
> > > Vijay V.
> > >
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev