[openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Brandon Logan brandon.logan at RACKSPACE.COM
Fri May 30 05:17:57 UTC 2014


Hi Bo,
Sorry, I forgot to respond, but yes, what Stephen said lol :)
________________________________
From: Stephen Balukoff [sbalukoff at bluebox.net]
Sent: Thursday, May 29, 2014 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Hi Bo--

Haproxy is able to have IPv4 front-ends with IPv6 back-ends (and vice versa) because it terminates the front-end client's TCP connection and initiates a separate TCP connection to the back-end server. The front-end client thinks haproxy is the server, and the back-end server thinks haproxy is the client. In practice, therefore, it's entirely possible to have an IPv6 front-end and an IPv4 back-end with haproxy (for both HTTP and generic TCP service types).
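
For illustration, here's a minimal haproxy configuration sketch of that arrangement (the addresses are placeholders, not from a real deployment):

    frontend fe_ipv6
        mode tcp
        bind [::]:80                       # IPv6 front-end listener
        default_backend be_ipv4

    backend be_ipv4
        mode tcp
        server web1 192.0.2.10:80 check    # IPv4 back-end member

haproxy accepts the client's connection on the IPv6 socket and opens its own IPv4 connection to web1, so no v6-to-v4 translation happens at the network layer at all.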

I think the same is true for vendor appliances that support IPv6 and likewise initiate new TCP connections from the appliance to the back-end.

Obviously, the above won't work if your load balancer implementation is doing something "transparent" at the network layer, like LVS (Linux Virtual Server) load balancing.

Stephen



On Wed, May 28, 2014 at 9:14 PM, Bo Lin <linb at vmware.com> wrote:
Hi Brandon,
I have one question. If we make the LoadBalancer to Listener relationship M:N, then one listener with an IPv4 back-end of members may be shared by a loadbalancer instance with an IPv6 front-end. Does that mean we also need to provide IPv6-to-IPv4 port forwarding in LBaaS products? Do iptables or most LBaaS products such as haproxy provide such a function? Or am I just wrong about some technical details of these LBaaS products?

Thanks!
________________________________
From: "Vijay B" <os.vbvs at gmail.com<mailto:os.vbvs at gmail.com>>

To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Sent: Thursday, May 29, 2014 6:18:42 AM

Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Hi Brandon!

Please see inline.




On Wed, May 28, 2014 at 12:01 PM, Brandon Logan <brandon.logan at rackspace.com> wrote:
Hi Vijay,

On Tue, 2014-05-27 at 16:27 -0700, Vijay B wrote:
> Hi Brandon,
>
>
> The current reviews of the schema itself are absolutely valid and
> necessary, and must go on. However, the place of implementation of
> this schema needs to be clarified. Rather than make any changes
> whatsoever to the existing neutron db schema for LBaaS, this new db
> schema outlined needs to be implemented for a separate LBaaS core
> service.
>
Are you suggesting a separate lbaas database from the neutron database?
If not, then I could use some clarification. If so, I'd advocate against
that right now because there are just too many things that would need to
change.  Later, when LBaaS becomes its own service, then yeah, that will
need to happen.

v> OK, so as I understand it, in this scheme there is no new schema or db; there will be a new set of tables resident in the neutron db schema itself, alongside the legacy lbaas tables. Let's consider a rough view of the implementation.

Layer 1 - We'll have a new lbaas v3.0 API in neutron, with the current lbaas service plugin supporting it in addition to the legacy lbaas extensions it already supports. We'll need new code to process the v3.0 lbaas API no matter what our approach is.
Layer 2 - Management code that updates the db with entities in pending_create, chooses/schedules the right provider driver (plugin drivers or agent drivers), invokes it, gets the results, and updates the db (a rough sketch of this flow follows the list below).
Layer 3 - The drivers themselves (either plugin drivers, like the HAProxy namespace driver or Netscaler, or plugin drivers + agent drivers).
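
To make Layer 2 concrete, here is a minimal sketch of that flow; all the names (LoadBalancerManager, db_api, the status strings) are hypothetical, not existing neutron code:

    PENDING_CREATE, ACTIVE, ERROR = 'PENDING_CREATE', 'ACTIVE', 'ERROR'

    class LoadBalancerManager(object):
        def __init__(self, db_api, drivers):
            self.db = db_api        # Layer 2's view of the lbaas tables
            self.drivers = drivers  # maps provider name -> driver

        def create_load_balancer(self, context, lb_spec):
            # 1. Persist the entity in pending_create before touching any backend.
            lb = self.db.create_load_balancer(context, lb_spec,
                                              status=PENDING_CREATE)
            # 2. Choose the provider driver configured for this load balancer.
            driver = self.drivers[lb['provider']]
            try:
                # 3. Invoke it (a plugin driver, or plugin + agent driver).
                driver.create(context, lb)
            except Exception:
                self.db.update_status(context, lb['id'], ERROR)
                raise
            # 4. Record the result.
            self.db.update_status(context, lb['id'], ACTIVE)
            return lb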

While having the new tables sit alongside the legacy tables is one way to implement the changes, I don't see how this approach leads to less change overall. Layer 2 above is where the changes will be most complicated. Also, it will be confusing to have two sets of lbaas tables in the same schema.

I don't want a separate lbaas database under neutron, nor new lbaas tables within the neutron database. I'm not suggesting that we create a db schema alone; I'm saying we should build it together with the new LBaaS service (just as neutron's schema was built when neutron itself was created). If we don't do this now, we'll end up reimplementing the v3.0 API logic we put into neutron all over again for the new core LBaaS service. We'd rather do it in the new service in one effort.

I could be missing some constraints that drive the former approach - please help me understand them - I don't want to discount any approach without thorough consideration. Right now, it looks to me like this approach is being taken only to accommodate the HAProxy namespace driver; really, that is the only driver that seems deeply intertwined with neutron, in the way it uses namespaces.


>
> What we should be providing in neutron is a switch (a global conf)
> that can be set to instruct neutron to do one of two things:
>
>
> 1. Use the existing neutron LBaaS API, with the backend being the
> existing neutron LBaaS db schema. This is the status quo.
> 2. Use the existing neutron LBaaS API, with the backend being the new
> LBaaS service. This will invoke calls not to neutron's current LBaaS
> code at all, rather, it will call into a new set of proxy "backend"
> code in neutron that will translate the older LBaaS API calls into the
> newer REST calls serviced by the new LBaaS service, which will write
> down these details accordingly in its new db schema. As long as the
> request and response objects to legacy neutron LBaaS calls are
> preserved as is, there should be no issues. Writing unit tests should
> also be comparatively more straightforward, and old functional tests
> can be retained, and newer ones will not clash with legacy code.
> Legacy code itself will work, having not been touched at all. The
> blueprint for the db schema that you have referenced
> (https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst) should be implemented for this new LBaaS service, post reviews.
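>
> As a rough sketch of that proxy translation (everything here is
> hypothetical - the LbaasProxyPlugin name, the endpoint URL, and the v3
> resource shapes - since the new service's API is still being drafted):
>
>     import json
>     import requests
>
>     LBAAS_ENDPOINT = 'http://lbaas-service:9876/v3'  # placeholder
>
>     class LbaasProxyPlugin(object):
>         """Translates legacy neutron LBaaS calls into v3 REST calls."""
>
>         def create_vip(self, context, vip):
>             # A legacy 'vip' maps roughly to a loadbalancer + listener in v3.
>             body = {'loadbalancer': {'vip_address': vip['address'],
>                                      'tenant_id': vip['tenant_id']}}
>             resp = requests.post(LBAAS_ENDPOINT + '/loadbalancers',
>                                  data=json.dumps(body),
>                                  headers={'Content-Type': 'application/json'})
>             resp.raise_for_status()
>             lb = resp.json()['loadbalancer']
>             # Return a legacy-shaped response so existing callers keep working.
>             return {'id': lb['id'], 'address': vip['address'],
>                     'status': lb.get('status', 'PENDING_CREATE')}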
>
I think the point of this blueprint is to make the API and object model
less confusing for the Neutron LBaaS service plugin.  I think it's too
early to create a separate LBaaS service because we have not yet cleaned
up the tight integration points between Neutron and LBaaS.  Creating a
new service would require only API interactions between Neutron and this
LBaaS service, which is currently not possible due to these tight
integration points.

v> The tight integration points between LBaaS and neutron that I see are:

1. The usage of namespaces.
2. L2 and L3 plumbing within the namespaces, and tracking it in the neutron and lbaas tables.
3. The plugin driver and agent driver scheduling framework/mechanism for LB drivers.
4. The way drivers directly update the neutron db, which I think makes for a lack of clear functional demarcation (one possible demarcation is sketched below).
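
To illustrate point 4, here is one possible demarcation, sketched with entirely hypothetical names (DriverCallbacks, ExampleDriver): the driver reports status through a narrow callback interface instead of importing neutron's db models and writing rows itself.

    class DriverCallbacks(object):
        """The only db surface a driver would see."""

        def __init__(self, db_api):
            self.db = db_api

        def update_status(self, context, obj_type, obj_id, status):
            self.db.update_status(context, obj_type, obj_id, status)

    class ExampleDriver(object):
        def __init__(self, callbacks):
            self.callbacks = callbacks  # no direct neutron db access

        def create(self, context, lb):
            # ... provision the appliance or namespace here ...
            self.callbacks.update_status(context, 'loadbalancer',
                                         lb['id'], 'ACTIVE')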

Regardless of how we use the new API and db model, will namespaces be used? If they still need to be supported, the tight integration isn't going to go anywhere.

This is why I think it is best to keep the legacy drivers within neutron, and not give newer deployments the option to use them concurrently with the new lbaas core service. The changes will be smaller this way because we won't touch legacy code.

While I fully understand that we're trying to change the way we look at lbaas deployments, and the db object model is an effort towards that, we need to ensure that the execution is kept elegant as well. For drivers for LB solutions like F5 or Netscaler, these pain points go away because those products do their own network provisioning, and we keep track of them only for cleanup (especially for virtual appliance solutions).

It will, however, mean that we'll have the additional task of implementing the new core service before we can use the new db object model. I say we should just go for that effort and make it happen.



>
> The third option would be to turn off neutron LBaaS API, and use the
> new LBaaS core service directly, but for this we can simply disable
> neutron lbaas, and don't need a config parameter in neutron.
>
>
> Implementing this db schema within neutron instead will be not just
> complicated, but a huge effort that will go to waste in the future once
> the new LBaaS service is implemented. Also, in this approach, migration
> will unnecessarily repeat the same steps needed to go from legacy
> neutron LBaaS to the new core LBaaS service (twice, in succession) in
> case for any reason the version goes from legacy neutron LBaaS -> new
> neutron LBaaS -> new LBaaS core service.
I totally agree that this is technical debt, but I believe it is the
best option we have right now since LBaaS needs to live in the Neutron
code and process because of the tight integration points.  Since this
object model refactor has been slated for Juno, and these tight
integration points may or may not be cleaned up by Juno, staying within
Neutron seems to be the best option right now.

v> As I described above, I think the tight integration points are best kept in legacy code and not carried over to the new implementation. The cleanest way to do it would be to clearly demarcate neutron-related operations (L2/L3) from LBaaS. But I am keen to get your views on what the difficult integration points are, so that I can better understand the motivations behind keeping the new tables in neutron.


Regards,
Vijay


>
>
> Going forward, the legacy neutron LBaaS API can be deprecated, and the
> new API that directly contacts the new LBaaS core service can be used.
>
>
> We have discussed the above architecture previously, but outside of
> the ML. A draft of the blueprint for this new LBaaS core service is
> underway; it is a collation of the discussions among a large number of
> LBaaS engineers, including yourself, during the summit. I will be
> posting it for review within a couple of days, as planned.
>
>
>
>
> Regards,
> Vijay
>
>
> On Tue, May 27, 2014 at 12:32 PM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
>         Referencing this blueprint:
>         https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst
>
>         Anyone who has suggestions about possible issues, or can answer
>         some of these questions, please respond.
>
>
>         1. LoadBalancer to Listener relationship M:N vs 1:N
>         The main reason we went with M:N was so IPv6 could use the same
>         listener as IPv4.  However, this can be accomplished by the
>         user just creating a second listener and pool with the same
>         configuration.  That will end up being a bad user experience
>         when the listener and pool configuration starts getting complex
>         (adding in TLS, health monitors, SNI, etc).  A good reason not
>         to do M:N is that the logic might get complex when dealing with
>         status.  I'd like to get people's opinions on whether we should
>         do M:N or just 1:N.  Another option is to just implement 1:N
>         right now and later implement M:N in another blueprint if it is
>         decided that the user experience suffers greatly.
>
>         My opinion: I like the idea of leaving it to another blueprint
>         to implement.  However, we would need to watch out for any
>         major architecture changes in the time it is not implemented
>         that could make this more difficult than it needs to be.
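>
>         As a rough sketch of the two options (hypothetical SQLAlchemy
>         models, not the actual blueprint schema):
>
>             from sqlalchemy import Column, ForeignKey, String, Table
>             from sqlalchemy.ext.declarative import declarative_base
>
>             Base = declarative_base()
>
>             class LoadBalancer(Base):
>                 __tablename__ = 'lbaas_loadbalancers'
>                 id = Column(String(36), primary_key=True)
>                 vip_address = Column(String(64))
>
>             # Option A (1:N): each Listener belongs to one
>             # LoadBalancer, so an IPv4 and an IPv6 front-end need
>             # duplicate listeners.
>             class Listener(Base):
>                 __tablename__ = 'lbaas_listeners'
>                 id = Column(String(36), primary_key=True)
>                 loadbalancer_id = Column(
>                     String(36), ForeignKey('lbaas_loadbalancers.id'))
>
>             # Option B (M:N): an association table lets the same
>             # Listener (with its TLS/SNI/pool config) attach to both
>             # an IPv4 and an IPv6 LoadBalancer.
>             loadbalancer_listener = Table(
>                 'lbaas_loadbalancer_listeners', Base.metadata,
>                 Column('loadbalancer_id', String(36),
>                        ForeignKey('lbaas_loadbalancers.id'),
>                        primary_key=True),
>                 Column('listener_id', String(36),
>                        ForeignKey('lbaas_listeners.id'),
>                        primary_key=True))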
>
>         2. Pool to Health Monitor relationship 1:N vs 1:1
>         Currently, I believe this is 1:N; however, it was suggested by
>         Susanne to deprecate this in favor of 1:1, and Kyle agreed.
>         Are there any objections to changing to 1:1?
>
>         My opinion: I'm for 1:1 as long as there aren't any major
>         reasons why it needs to be 1:N.
>
>         3. Does the Pool object need a status field now that it is a
>         pure logical object?
>
>         My opinion: I don't think it needs the status field.  I think
>         the LoadBalancer object may be the only thing that needs a
>         status, other than the pool members for health monitoring.  I
>         might be corrected on this though.
>

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807