[openstack-dev] (no subject)

Clay Gerrard clay.gerrard at gmail.com
Fri May 9 17:15:01 UTC 2014


I thought those tracebacks only showed up with old versions of eventlet or
with eventlet_debug = true?
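If you want to double-check, that flag lives in the [DEFAULT] section of the
server configs - a minimal sketch (values here are just illustrative, compare
against your own object-server.conf):

    [DEFAULT]
    # when true, eventlet emits much noisier tracebacks on client disconnects
    eventlet_debug = false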

In my experience that normally indicates a client disconnect on a chunked
transfer encoding request (a request w/o a content-length).  Do you know if
your clients are using chunked transfer encoding?
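An easy way to generate a chunked PUT for comparison is to stream the object
from stdin, e.g. with curl (token and URL are placeholders; when reading from
stdin curl can't know the length up front, so it should fall back to
Transfer-Encoding: chunked):

    cat some_1GB_file | curl -i -H "X-Auth-Token: <token>" \
        -T - http://10.3.0.102:8080/v1/AUTH_test/8kpc/test_object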

Are you seeing the 408 make its way out to the client?  It wasn't clear to
me whether you only see these tracebacks on the object servers or in the
proxy logs as well.  Perhaps only one of the three disks involved in the PUT
is timing out and the client still gets a successful response?
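Grepping the transaction id on both sides should narrow that down - something
like this (log locations depend on your syslog setup, these are just common
defaults):

    # on the proxy node
    grep tx14e2df7680fd472fb92f0 /var/log/swift/proxy.log
    # on each storage node
    grep tx14e2df7680fd472fb92f0 /var/log/swift/object.log

If the proxy shows the PUT completing with a 2xx (or logging a client
disconnect) while only the object servers log 408s, that would line up with a
single slow disk.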

As the disks fill up, replication and auditing are going to consume more disk
resources - you may have to tune the concurrency and rate settings on those
daemons.  If the errors happen consistently you could try running with the
background consistency processes temporarily disabled, to rule out whether
they're causing disk contention on your setup with your config.
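For example (the option names below are from memory, double-check them against
the sample configs for your version) you could dial the daemons down in
object-server.conf:

    [object-replicator]
    concurrency = 1

    [object-auditor]
    files_per_second = 5
    bytes_per_second = 10000000

or take them out of the picture entirely for a short test:

    swift-init object-replicator stop
    swift-init object-auditor stop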

-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec <openstack at nemebean.com> wrote:

> This is a development list, and your question sounds more usage-related.
>  Please ask your question on the users list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> Thanks.
>
> -Ben
>
>
> On 05/09/2014 06:57 AM, Shyam Prasad N wrote:
>
>> Hi,
>>
>> I have a two-node swift cluster receiving continuous traffic of 1GB objects
>> (mostly overwrites of existing objects).
>>
>> Soon after the traffic started, I began seeing the following traceback in
>> some transactions...
>> Traceback (most recent call last):
>>    File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692,
>> in PUT
>>      chunk = next(data_source)
>>    File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559,
>> in <lambda>
>>      data_source = iter(lambda: reader(self.app.client_chunk_size), '')
>>    File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
>>      chunk = self.wsgi_input.read(*args, **kwargs)
>>    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147,
>> in read
>>      return self._chunked_read(self.rfile, length)
>>    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137,
>> in _chunked_read
>>      self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
>> ValueError: invalid literal for int() with base 16: '' (txn:
>> tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)
>>
>> I'm seeing the following errors in the storage logs...
>> object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +0000]
>> "PUT /xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B53000000007B000000.2.data" 408 -
>> "PUT http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B53000000007B000000.2.data"
>> "txf3b4e5f677004474bbd2f-00536c30d1" "proxy-server 12241" 95.6405 "-"
>>
>> It succeeds sometimes, but mostly I get 408 errors.  I don't see any other
>> logs for the transaction ID, or around these 408 errors, in the log
>> files.  Is this a disk timeout issue?  These are only 1GB files, and normal
>> writes to files on these disks are quite fast.
>>
>> The timeout settings in the swift proxy config are...
>> root at bulkstore-112:~# grep -R timeout /etc/swift/*
>> /etc/swift/proxy-server.conf:client_timeout = 600
>> /etc/swift/proxy-server.conf:node_timeout = 600
>> /etc/swift/proxy-server.conf:recoverable_node_timeout = 600
>>
>> Can someone help me troubleshoot this issue?
>>
>> --
>> -Shyam
>>
>>