<div dir="ltr">Thanks Rick for a quick reply..... <div><br></div><div><span style="font-size:12.8000001907349px">Are you asking about the rate at which data might come from the object server(s) to the proxy and need to be held on the proxy while it is sent-on to the clients? Yes... the object sever will push faster and therefore accumulation of data in proxy server will be a lot if client is not able to catch up. Shouldn't there be a back pressure? from client to proxy server and then from proxy server to object server?</span><br></div><div><span style="font-size:12.8000001907349px"><br></span></div><div><span style="font-size:12.8000001907349px">something like don't cache more than 10M at a time per client.?</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 10, 2015 at 11:59 AM, Rick Jones <span dir="ltr"><<a href="mailto:rick.jones2@hp.com" target="_blank">rick.jones2@hp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 03/10/2015 11:45 AM, Omkar Joshi wrote:<br>
On Tue, Mar 10, 2015 at 11:59 AM, Rick Jones <rick.jones2@hp.com> wrote:

> On 03/10/2015 11:45 AM, Omkar Joshi wrote:
>> Hi,
>>
>> I am using an OpenStack Swift server. Now say multiple clients are
>> requesting a 5 GB object from the server. The rate at which the server
>> can push data into the server-side socket is much higher than the rate
>> at which a client can read it from the proxy server. Is there a
>> configuration setting we can use to control / cap the pending data on
>> the server-side socket? Otherwise this will cause the server to run
>> out of memory.
>
> The Linux networking stack will have a limit on the size of the SO_SNDBUF,
> which bounds how much the proxy server code can shove into a given socket
> at one time. The Linux networking stack may "autotune" that setting if the
> proxy server code itself isn't making an explicit setsockopt(SO_SNDBUF)
> call. Such autotuning is controlled via the sysctl net.ipv4.tcp_wmem.
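>
> Those three values can be inspected directly; a minimal sketch in Python
> (assuming a Linux host, since the values live under /proc):
>
>     # min / default / max send-buffer sizes (bytes) that TCP
>     # autotuning works within.
>     with open('/proc/sys/net/ipv4/tcp_wmem') as f:
>         tcp_min, tcp_default, tcp_max = (int(v) for v in f.read().split())
>     print(tcp_min, tcp_default, tcp_max)
>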
> If the proxy server code does make an explicit setsockopt(SO_SNDBUF) call,
> that will be limited to no more than what is set in net.core.wmem_max.
>
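> You can see that clamp from Python as well (a sketch; the 16 MB request
> is an arbitrary example value):
>
>     import socket
>
>     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>     # Ask for a 16 MB send buffer.
>     s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024 * 1024)
>     # Linux caps the request at net.core.wmem_max and doubles it
>     # internally for bookkeeping, so the value read back here will be
>     # 2 * min(requested, wmem_max).
>     print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
>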
> But I am guessing you are asking about something different, because
> virtually every TCP/IP stack going back to the beginning has had bounded
> socket buffers. Are you asking about something else? Are you asking about
> the rate at which data might come from the object server(s) to the proxy
> and need to be held on the proxy while it is sent on to the clients?
>
> rick
-- 
Thanks,
Omkar