[Openstack] [Swift] Proxy server bottleneck

Clay Gerrard clay.gerrard at gmail.com
Mon Jan 13 15:48:17 UTC 2014


It's not synchronous; each request/eventlet coroutine will yield/trampoline
back to the reactor/hub on every socket operation that raises EWOULDBLOCK.
In cases where there's a tight, long-running read/write loop you'll normally
find a call to eventlet.sleep (or, in at least one case, a queue) to avoid
starvation.
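
As a rough sketch of that pattern (illustrative only, not actual Swift
code), a greenthread doing pure CPU work in a tight loop has to yield
explicitly or it will starve its neighbours, while socket I/O yields for
free via the hub:

    import eventlet

    def busy_loop():
        # Pure CPU work: no socket ever raises EWOULDBLOCK here, so nothing
        # yields implicitly.  The explicit eventlet.sleep(0) trampolines
        # back to the hub so other greenthreads get a turn.
        for i in range(5):
            sum(x * x for x in range(200000))  # stand-in for chunk work
            eventlet.sleep(0)                  # cooperative yield

    def heartbeat():
        for i in range(5):
            print("heartbeat %d" % i)
            eventlet.sleep(0)

    pool = eventlet.GreenPool()
    pool.spawn(busy_loop)
    pool.spawn(heartbeat)
    pool.waitall()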

Tuning workers and concurrency has a lot to do with the hardware and some
with the workload.  The processing rate of an individual proxy server is
mostly CPU bound and depends on whether you're doing SSL termination in
front of your proxy.  Request rate throughput is easily scaled by adding
more proxy servers (assuming your client isn't the bottleneck; look to
https://github.com/swiftstack/ssbench for a decently scalable Swift
benchmark suite).  Throughput is harder to scale wide because of load
balancing: round-robin DNS seems to be a good choice, or ssbench has an
option to benchmark against a set of storage URLs (a list of proxy servers).
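
For the round-robin DNS option, a minimal sketch of what that looks like in
a zone file (hostnames and addresses are made up): multiple A records for
the same name, so resolvers hand out the proxy addresses in rotation:

    ; round-robin DNS: one name, one A record per proxy server
    proxy.swift.example.com.  300  IN  A  10.0.0.11
    proxy.swift.example.com.  300  IN  A  10.0.0.12
    proxy.swift.example.com.  300  IN  A  10.0.0.13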

Have you read:
http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-tuning
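
For reference, the sort of knobs that section covers live in the [DEFAULT]
section of proxy-server.conf; something like the following (values are
illustrative, not recommendations, and need to be sized to your cores and
expected client concurrency):

    [DEFAULT]
    bind_port = 8080
    # A common starting point is one worker process per CPU core; each
    # worker runs its own eventlet hub and multiplexes up to max_clients
    # concurrent connections.
    workers = 8
    max_clients = 1024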

-Clay


On Mon, Jan 13, 2014 at 12:21 AM, Kuo Hugo <tonytkdk at gmail.com> wrote:

> Hi Shrinand,
>
> The concurrency bottleneck of a Swift cluster can come from several places.
> Here's a list:
>
>    - Settings of each worker: worker count, max_clients,
>    threads_per_disk.
>    - Proxy servers being CPU bound
>    - Storage nodes being CPU bound
>    - Total disk I/O capacity (including available memory for XFS caching)
>    - The power of your client machines
>    - Network issues
>
>
> You need to analyze the monitoring data to find the real bottleneck.
> The achievable number of concurrent connections depends on the
> deployment; it can range from 150 (VMs) to 6K+ (a physical server pool).
> Of course, you can set up multiple proxy servers to handle higher
> concurrency as long as your storage nodes can keep up.
>
> The path of a request, as far as I know:
>
> Client --> Proxy-server --> object-server --> container-server (optional
> async) --> object-server --> Proxy-server --> Client --> close connection.
>
>
> Hope it helps,
> Hugo
>
> 2014/1/11 Shrinand Javadekar <shrinand at maginatics.com>
>
>> Hi,
>>
>> This question is specific to OpenStack Swift. I am trying to understand
>> just how much of a bottleneck the proxy server is when multiple clients
>> are concurrently trying to write to a Swift cluster. Has anyone done
>> experiments to measure this? It would be great to see some results.
>>
>> I see that the proxy-server already has a "workers" config option.
>> However, it looks like that is the number of threads in one proxy-server
>> process. Does having multiple proxy servers running on different nodes
>> (with some load balancer in front of them) help in satisfying more
>> concurrent writes? Or will these multiple proxy servers also get
>> bottlenecked on the account/container/object servers?
>>
>> Also, looking at the code in swift/proxy-server/controllers/obj.py, it
>> seems that each request the proxy-server sends to the backend servers
>> (account/container/obj) is synchronous. It does not send the request and
>> go back to accept more requests. Is this one of the reasons why write
>> requests can be slow?
>>
>> Thanks in advance.
>> -Shri
>>