[openstack-dev] [kloudbuster] test LBAAS at scale
Akshay Kumar Sanghai
akshaykumarsanghai at gmail.com
Thu Aug 25 05:08:38 UTC 2016
Hi Alec,
Thanks for your inputs. I would really like to develop this feature. I
don't know if I can handle it, but I will try my best. Can you suggest
some pointers on how to start, and how we can discuss the tasks in
detail?
Thanks
Akshay
On Wed, Aug 24, 2016 at 7:37 AM, Alec Hothan (ahothan) <ahothan at cisco.com>
wrote:
> Hi Akshay,
>
> I suppose you're talking about LBAAS v2?
>
> Adding support for lbaas in kloudbuster will require some amount of work,
> which can be kept to a minimum if done properly, and this addition would be a
> pretty good way to test lbaas at scale.
> The tricky part is to modify the staging code without breaking the other
> features (multicast and storage), since this staging is specific to the HTTP
> scale test.
> The current staging for HTTP scale is based on the following template (I
> show the server side only):
>
> [Router---------[HTTP server VM]*]*
>
> The natural extension for supporting LBAAS is to replace each HTTP server
> with an LB group + N HTTP servers:
>
> [Router----------[LB-------[HTTP server VM]*]*]*
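>
> In pseudo-code, the nesting change boils down to something like this
> (illustrative only - these are not the actual kloudbuster function or
> option names):
>
>     for _ in range(server_count):               # becomes the LB group count
>         lb = stage_lb_group(network)            # new: create LB + listener + pool
>         for _ in range(vms_per_lb):             # new config option, name TBD
>             vm = stage_http_server_vm(network)  # existing staging step
>             add_lb_member(lb, vm)               # new: register the VM behind the LB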
>
> Implementing this would require the following modifications (just a rough
> description of the tasks):
>
> - add an additional config option to specify the number of server VMs
> per LB group (defaults to none/no LB) <easy>
> - if LB is configured, the current config server count would become an
> LB group count
> - the staging code for the HTTP servers needs to be modified to handle
> the LB case (see the rough sketch after this list): <medium difficulty -
> need to know the LBAAS python APIs>
> - instead of creating as many HTTP servers as the server count
> argument, create as many LB groups
> - for each LB group, create the requested HTTP server VMs per group
> and add them to the group
> - floating IPs, if requested, need to be applied to the LB VIP port instead
> of the HTTP servers <easy>
> - naturally the teardown code will also have to support cleaning up LB
> resources <easy>
>
>
> - HTTP clients will need to connect to the LB VIP address (instead of
> the HTTP server IP address) <easy>
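>
> For illustration, the per-group LB creation with the neutron LBaaS v2 client
> could look roughly like this (untested sketch: variable names are made up,
> error handling and waiting for the LB provisioning_status to become ACTIVE
> between steps are omitted):
>
>     from neutronclient.v2_0 import client as neutron_client
>
>     neutron = neutron_client.Client(session=sess)  # authenticated session assumed
>
>     # One load balancer + listener + pool per LB group
>     lb = neutron.create_loadbalancer(
>         {'loadbalancer': {'vip_subnet_id': subnet_id}})['loadbalancer']
>     listener = neutron.create_listener(
>         {'listener': {'loadbalancer_id': lb['id'],
>                       'protocol': 'HTTP', 'protocol_port': 80}})['listener']
>     pool = neutron.create_lbaas_pool(
>         {'pool': {'listener_id': listener['id'],
>                   'protocol': 'HTTP', 'lb_algorithm': 'ROUND_ROBIN'}})['pool']
>
>     # Register each HTTP server VM of the group as a pool member
>     for vm_ip in http_server_ips:
>         neutron.create_lbaas_member(
>             pool['id'],
>             {'member': {'address': vm_ip, 'protocol_port': 80,
>                         'subnet_id': subnet_id}})
>
>     # Floating IP (if requested) now attaches to the LB VIP port
>     neutron.create_floatingip(
>         {'floatingip': {'floating_network_id': ext_net_id,
>                         'port_id': lb['vip_port_id']}})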
>
> I can help you go through these individual tasks in detail in the code if
> you feel you can handle that; it's just Python coding.
>
>
> The VMs running the HTTP traffic generators are currently always
> associated 1:1 with a server VM. With the above template extension, you will
> end up with as many HTTP client VMs as LB groups:
>
> (router removed for better clarity):
>
> [HTTP client VM-------[LB-------[HTTP server VM]*]*]*
>
> This is not ideal because each HTTP traffic generator can only support a
> relatively low number of connections (a few thousand), while an HTTP
> server instance can easily support many times this load, especially for
> light HTTP traffic (i.e. very short replies).
>
> So another improvement (which we had on our roadmap) would be to support
> N:1 mapping:
>
> [[HTTP client VM]*--------LB-------[HTTP server VM]*]*
>
> This could be a separate extension.
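>
> If we do that, the client staging would just fan multiple client VMs out over
> the same VIP, e.g. (again just a sketch, clients_per_lb is a made-up option
> name):
>
>     # Each LB group VIP is targeted by clients_per_lb HTTP client VMs
>     lb_vips = ['10.1.0.10', '10.2.0.10']   # example: one VIP per LB group
>     clients_per_lb = 4
>     client_targets = [vip for vip in lb_vips for _ in range(clients_per_lb)]
>     # -> 8 client VMs in total, 4 aimed at each VIP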
> Let me know if you'd like to do this and we can help navigate the code.
>
> Thanks
>
> Alec
>
>
>
> From: Akshay Kumar Sanghai <akshaykumarsanghai at gmail.com>
> Date: Tuesday, August 23, 2016 at 2:07 PM
> To: Alec Hothan <ahothan at cisco.com>
> Cc: "Yichen Wang (yicwang)" <yicwang at cisco.com>, "OpenStack Development
> Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org
> >
> Subject: Re: [openstack-dev] [kloudbuster] authorization failed problem
>
> Hi Yichen, Alec,
>
> The kloudbuster project worked perfectly fine for me. Now I want to
> integrate lbaas for scale testing. Can you guys help me with how to achieve
> that? Please include me in any contribution.
>
> Thanks
> Akshay Sanghai
>
>
>