[Openstack] [Swift] Container DB update after object PUT
Kuo Hugo
tonytkdk at gmail.com
Fri Nov 29 08:07:24 UTC 2013
Perhaps someone else can provide a detailed explanation; there seem to be
several cases. I just checked the code of the proxy's object controller.
If the connection to a target object device fails, it appears the request
stops at
https://github.com/openstack/swift/blob/master/swift/proxy/controllers/obj.py#L1091.
I'm waiting for more authoritative answers.
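For illustration, here is a much-simplified sketch (not Swift's actual code; the function names and parameters are invented) of the proxy-side flow around that line: a failed connection simply drops that node before any data is streamed, and quorum is only evaluated over the connections that survived.

```python
# Hypothetical sketch of the proxy's object PUT flow. connect(node)
# raises ConnectionError for an unreachable device; send(conn) returns
# the object server's HTTP status for the streamed data.

def put_object(nodes, connect, send, quorum):
    conns = []
    for node in nodes:
        try:
            conns.append(connect(node))
        except ConnectionError:
            continue                  # skip the bad device, keep going
    if len(conns) < quorum:
        return 503                    # not enough good devices to even try
    statuses = [send(conn) for conn in conns]
    ok = sum(1 for s in statuses if 200 <= s < 300)
    return 201 if ok >= quorum else 503
```

The point of the sketch is only that the "interrupt" happens implicitly: nodes that fail to connect never receive data, and the final status depends on how many of the remaining responses are successful.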
+Hugo Kuo+
(+886) 935004793
SwiftStack Inc.
2013/11/29 Shao, Minglong <Minglong.Shao at netapp.com>
> Thanks Hugo for your reply!
>
> It’s still very puzzling. Each object server handles the request
> independently.
> How can the proxy interrupt the PUT request to the good device?
> The line of code you referred to performs its check before the object
> server actually writes anything.
> The exception is raised only if the device is not available.
> By the time the proxy knows that the object PUT is a failure, the good
> object server has most likely passed that checking.
> Besides, I don’t see any proxy code that sends “interrupt” to object
> servers after it checks quorum.
>
> Regards,
> -Minglong
>
> From: Kuo Hugo <tonytkdk at gmail.com>
> Date: Thursday, November 28, 2013 at 5:46 PM
>
> To: Minglong Shao <minglong.shao at netapp.com>
> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: Re: [Openstack] [Swift] Container DB update after object PUT
>
> The proxy will interrupt the PUT request to the one remaining good device.
>
> The object server returns the error to the proxy here:
> https://github.com/openstack/swift/blob/master/swift/obj/server.py#L387
>
>
> account-server 192.168.56.10 - - "HEAD /d5/390/AUTH_ss" 204 -
> "txdc2f2cc49b7b447e91f93-0052970cca" "HEAD http://192.168.56.10/v1/AUTH_ss"
> "proxy-server 23471" 0.0012 ""
> container-server 192.168.56.10 - - "HEAD /d3/408/AUTH_ss/con1" 204 -
> "txdc2f2cc49b7b447e91f93-0052970cca" "HEAD
> http://192.168.56.10/v1/AUTH_ss/con1" "proxy-server 23471" 0.0011
> object-server 192.168.56.10 - - "PUT /d2/293/AUTH_ss/con1/8" 507 - "PUT
> http://192.168.56.10/v1/AUTH_ss/con1/8"
> "txdc2f2cc49b7b447e91f93-0052970cca" "proxy-server 23471" 0.0004
> proxy-server ERROR Insufficient Storage 192.168.56.10:6000/d2 (txn:
> txdc2f2cc49b7b447e91f93-0052970cca)
> object-server 192.168.56.10 - - "PUT /d1/293/AUTH_ss/con1/8" 507 - "PUT
> http://192.168.56.10/v1/AUTH_ss/con1/8"
> "txdc2f2cc49b7b447e91f93-0052970cca" "proxy-server 23471" 0.0002
> proxy-server ERROR Insufficient Storage 192.168.56.10:6000/d1 (txn:
> txdc2f2cc49b7b447e91f93-0052970cca)
> proxy-server Object PUT returning 503, 1/2 required connections (txn:
> txdc2f2cc49b7b447e91f93-0052970cca)
> object-server 192.168.56.10 - - "PUT /d0/293/AUTH_ss/con1/8" 499 - "PUT
> http://192.168.56.10/v1/AUTH_ss/con1/8"
> "txdc2f2cc49b7b447e91f93-0052970cca" "proxy-server 23471" 0.0050
> proxy-server 192.168.56.10 192.168.56.10 28/Nov/2013/09/28/42 PUT
> /v1/AUTH_ss/con1/8 HTTP/1.0 503 -
> curl/7.22.0%20%28x86_64-pc-linux-gnu%29%20libcurl/7.22.0%20OpenSSL/1.0.1%20zlib/
> 1.2.3.4%20libidn/1.23%20librtmp/2.3
> ss%2CAUTH_tk8caa909045a940aca3ff980ffa507961 - 118 -
> txdc2f2cc49b7b447e91f93-0052970cca - 0.0145 - -
>
>
>
>
> +Hugo Kuo+
> (+886) 935004793
> SwiftStack Inc.
>
>
> 2013/11/28 Shao, Minglong <Minglong.Shao at netapp.com>
>
>> Thanks for your reply!
>> I understand the three PUTs by the proxy server and how the replicator
>> works.
>>
>> What I don’t understand is the update of the container DB. The update
>> is sent by individual object servers which don’t know whether a PUT (from
>> the client’s perspective) succeeds.
>> Consider the following scenario:
>>
>> 1. Proxy server sends three requests to three object servers.
>> 2. One object server writes the object successfully, sends an update
>> to the container DB and an “OK” reply to the proxy server. But the other
>> two fail, so they send “failed” to the proxy server.
>> 3. The proxy server sends back “failed” to the client because it
>> doesn’t meet the quorum. But the container DB still gets the update to
>> insert an entry of this object.
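That scenario can be modeled in a few lines (a toy model with invented names, not Swift code): each object server fires its container update based only on its own local write, so the container row appears even though the client sees a failure.

```python
# Toy model of steps 1-3 above: the container update is a local,
# per-object-server decision, independent of the overall PUT outcome.

container_rows = set()

def object_server_put(name, disk_ok):
    if disk_ok:
        container_rows.add(name)   # async container update, fired locally
        return 201
    return 507

def proxy_put(name, disk_results, quorum=2):
    statuses = [object_server_put(name, ok) for ok in disk_results]
    good = sum(1 for s in statuses if s == 201)
    return 201 if good >= quorum else 503

status = proxy_put("con1/8", [True, False, False])
# status == 503 (client sees failure), yet "con1/8" is in container_rows
```

This is exactly the window the question is about: the client-visible failure and the container-DB insert are decided by different processes.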
>>
>> I must have missed something. Thanks for your help!
>>
>> From: Kuo Hugo <tonytkdk at gmail.com>
>> Date: Thursday, November 28, 2013 at 12:31 PM
>> To: Minglong Shao <minglong.shao at netapp.com>
>> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>> Subject: Re: [Openstack] [Swift] Container DB update after object PUT
>>
>> The proxy sends PUT requests to all *three* replicas' object servers
>> simultaneously.
>> The PUT succeeds only if a quorum (more than half) of the replicas
>> succeed; otherwise the proxy returns a PUT failure to the user.
>>
>> If one of the three replicas is missing, the replicator will handle it
>> later, so this causes no problem for the container DB.
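As a sanity check on the "more than half" rule, the majority quorum for N replicas can be written as below (a sketch; Swift's own helper may differ in edge cases such as erasure-coded policies):

```python
def quorum(replica_count):
    # smallest integer strictly greater than half of replica_count
    return replica_count // 2 + 1

# With 3 replicas, 2 successful object-server responses are required,
# which matches the "1/2 required connections" proxy log line above
# (1 success out of the 2 required).
```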
>>
>>
>>
>>
>> +Hugo Kuo+
>> (+886) 935004793
>> SwiftStack Inc.
>>
>>
>> 2013/11/28 Shao, Minglong <Minglong.Shao at netapp.com>
>>
>>> Hi there,
>>>
>>> After an object server writes an object to its local file system, it
>>> updates the container DB asynchronously (sending a message to insert an
>>> entry into the object table). But the object server doesn't really know
>>> whether the object PUT is considered successful overall, because the
>>> other two replicas could fail. In that case, the container DB could have
>>> an entry for an object that was not successfully PUT. Can someone shed
>>> some light on this? Am I missing something?
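The flow the question describes looks roughly like this (a simplified sketch with illustrative names, not Swift's actual code): the local write comes first, the container notification is best-effort, and a failed notification is queued on disk for a later retry by an updater daemon.

```python
import os
import pickle

def put_object_locally(datadir, name, body, notify_container):
    """Write the object locally, then try to tell the container server.

    A failed notification is queued as a ".pending" file for an updater
    daemon to retry later; the object PUT itself returns success based
    only on the local write."""
    path = os.path.join(datadir, name)
    with open(path, "wb") as f:
        f.write(body)                      # local write happens first
    try:
        notify_container(name)             # best-effort container update
    except ConnectionError:
        pending = os.path.join(datadir, name + ".pending")
        with open(pending, "wb") as f:
            pickle.dump({"op": "PUT", "obj": name}, f)
    return 201                             # success judged locally only
```

Because the notification depends only on the local write, a container row can exist for an object whose overall PUT the proxy reported as failed.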
>>>
>>> Many thanks!
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>
>