[Openstack] [Swift] PUT requests sensitive to latency?
Hua ZZ Zhang
zhuadl at cn.ibm.com
Wed Jun 25 03:17:20 UTC 2014
I don't think so. Since you are doing a PUT object request, there is
network I/O between the client, proxy server, and object server, plus disk
I/O on the object server to save the data. The time to send data from the
client to the proxy server is affected by their network latency, so the
object server may wait a little longer for that data to be relayed from the
proxy server. The object server also needs disk I/O to save the object
data. Ultimately, if your network latency improves a lot, it can be
ignored, because disk operations become the new bottleneck.
A single-node Swift cluster doesn't mean you can have only one replica; it
can be more, depending on how your install script configured it. You should
check that in your Swift rings. Even though you changed the network and
disk chunk sizes, those are only used by the Swift object server as buffer
sizes for reading and writing data to the network or disk. They do not
change the default TCP/IP packet size, which means you can't send the
object data in one packet, so network latency will still have an impact on
your object PUT request.
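To check how many replicas your rings actually have, inspecting the ring
builder files should work; something like the following (the builder file
path assumes a stock install under /etc/swift):

    swift-ring-builder /etc/swift/object.builder

The summary at the top of its output reports the partition count and the
replica count configured for the object ring.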
-Edward
From: Shrinand Javadekar <shrinand at maginatics.com>
To: Hua ZZ Zhang/China/IBM at IBMCN
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Date: 2014-06-25 01:14 AM
Subject: Re: [Openstack] [Swift] PUT requests sensitive to latency?
Communication between proxy and object servers shouldn't be affected by the
latency between the proxy server and the client, right? Also, I'm using a
single node Swift cluster. So there should be only 1 copy of the object
(along with any other I/Os required for the container and account DBs).
Everything that happens on the Swift side should be the same (if there is
no back-n-forth between the Swift server and client) irrespective of how
much time it takes for communication between the Swift cluster and the
client.
I had made one mistake when experimenting with the network_chunk_size
and disk_chunk_size config options. These are supposed to go to the
object-server.conf and not the proxy-server.conf. I made that change and
restarted all the swift servers. However, I don't see any improvement. My
current object-server.conf looks like this:
http://pastie.org/private/exjiho1cbl80mbruythama
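For reference, a sketch of where the chunk-size options are expected to
live (section names follow the stock sample configs; 262144 matches the
256 KB object size used in the experiment, adjust to your deployment):

    # /etc/swift/object-server.conf -- object server read/write buffers
    [app:object-server]
    disk_chunk_size = 262144
    network_chunk_size = 262144

    # /etc/swift/proxy-server.conf -- proxy-side buffers
    [app:proxy-server]
    object_chunk_size = 262144
    client_chunk_size = 262144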
What do you think?
-Shri
On Tue, Jun 24, 2014 at 12:29 AM, Hua ZZ Zhang <zhuadl at cn.ibm.com> wrote:
My guess is that the object data needs to be transmitted to the Swift
cluster before the status code is returned. It can't be returned before 2
of the 3 replica I/Os have completed; otherwise it would be inconsistent
to tell the client it succeeded.
-Edward Zhang
From: Shrinand Javadekar <shrinand at maginatics.com>
To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Date: 2014-06-24 03:05 PM
Subject: [Openstack] [Swift] PUT requests sensitive to latency?
Hi,
I have a single node swift cluster. I measured the time taken to
complete a PUT request that originated from three different client
machines. Each client was writing a single 256K byte object.
Note that the time measured was only the time taken on the Swift
cluster itself. I started the timer after the request was received by
the swift proxy-server process and stopped it when that method
returned an HTTP status to the client. This is not the time on the
client side and therefore *ideally* should not be affected by the
latency between the client and the Swift cluster.
However, it appears that the above is not true. The time required to
complete the request is related to the latency between the client and
swift cluster.
Here are the results:
* Client 1:
Ping time 28ms
PUT request time: ~180 ms
* Client 2:
Ping time 4 ms
PUT request time: ~35 ms
* Client 3:
Ping time 0.04 ms
PUT request time: ~10 ms
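As a minimal illustration of the measurement pattern, the sketch below
times a 256 KB PUT from the client side against a throwaway local HTTP
server standing in for the proxy (the /v1/AUTH_test/c/obj path is just a
placeholder; the real experiment timed the request inside the proxy-server
process, so client-side numbers like this one additionally include the
client-to-cluster round trips):

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in "object server": accepts PUTs and answers 201 Created,
# so the client-side timing below has an endpoint to talk to.
class PutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)          # drain the request body
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):        # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = b"x" * 262144                     # 256 KB object, as in the experiment
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

start = time.monotonic()
conn.request("PUT", "/v1/AUTH_test/c/obj", body=body)
status = conn.getresponse().status
elapsed_ms = (time.monotonic() - start) * 1000

print(status, round(elapsed_ms, 1))
server.shutdown()
```

Averaging many such runs (as the experiment did, over 50 PUTs) smooths out
per-request jitter.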
Details about the experiment:
* This is a single node Swift installation (not devstack) and uses
SSDs to store metadata as well as data. This is just a test setup. In
production, we won't have SSDs for storing data.
* The above numbers are average of 50 PUT requests.
* The Swift cluster was not being used for anything else during the
experiment.
* The client used was the jclouds library, written in Java. I had
disabled a config option that used the Expect: 100-Continue header; i.e.,
the requests were not using the Expect: 100-Continue header.
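For comparison outside jclouds, the same kind of PUT can be issued with
curl; passing an empty Expect header suppresses 100-continue (the proxy
URL, token, and object path below are placeholders):

    # PUT a 256 KB object without Expect: 100-continue
    curl -i -X PUT -H "Expect:" -H "X-Auth-Token: $TOKEN" \
         --data-binary @obj256k http://PROXY:8080/v1/AUTH_test/c/obj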
* I tried increasing the size of the following options in the
proxy-server.conf and restarting Swift.
disk_chunk_size = 262144
network_chunk_size = 262144
...
[app:proxy-server]
object_chunk_size = 262144
client_chunk_size = 262144
However, this didn't show any improvement in the time required for PUT
requests.
Am I missing anything? Does Swift require an extra round trip from the
client for completing PUT requests? Any ways of avoiding that?
Thanks in advance.
-Shri
_______________________________________________
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack