[Openstack] [Swift] Container DB update after object PUT

Shao, Minglong Minglong.Shao at netapp.com
Sun Dec 1 00:49:35 UTC 2013


Thanks for the clarification!

Best,
-Minglong

On 11/29/13, 4:26 PM, "Samuel Merritt" <sam at swiftstack.com> wrote:

On 11/27/13 9:24 PM, Shao, Minglong wrote:
Thanks for your reply!
I understand the three PUTs by the proxy server and how the replicator
works.

What I don’t understand is the update of the container DB. The update is
sent by individual object servers which don’t know whether a PUT (from
the client’s perspective) succeeds.
Consider the following scenario:

  1. Proxy server sends three requests to three object servers.
  2. One object server writes the object successfully, sends an update to
     the container DB and an “OK” reply to the proxy server. But the
     other two fail, so they send “failed” to the proxy server.
  3. The proxy server sends back “failed” to the client because it
     doesn’t meet the quorum. But the container DB still gets the update
     to insert an entry of this object.

The word "failed" is a bit nebulous here, and I think that's the source
of the confusion.

The proxy server doesn't send "failed" to the client; it sends back an
HTTP response with status code 503. This sounds like nitpicking, but
it's not. A 503 certainly doesn't indicate success, but neither does it
indicate that the requested operation failed. It really doesn't tell the
client anything except that something went wrong somewhere; whether that
constitutes a failure or not depends on the details.
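Roughly speaking, the proxy's decision looks like this (a simplified sketch in Python, not Swift's actual code; the function names here are made up for illustration):

```python
def quorum_size(n_replicas):
    # A majority of replicas must succeed (2 of 3 in the default case).
    return n_replicas // 2 + 1

def proxy_put_response(object_server_statuses):
    """Decide the client-facing status from the per-replica PUT statuses.

    Note: each object server that succeeded has already queued its own
    container-DB update, independently, before this decision is made --
    which is exactly why the container listing can gain an entry even
    when the client sees a 503.
    """
    successes = sum(1 for s in object_server_statuses if 200 <= s < 300)
    if successes >= quorum_size(len(object_server_statuses)):
        return 201  # Created
    return 503  # no quorum: the outcome is indeterminate, not "failed"

# The scenario from the original question: one success, two failures.
print(proxy_put_response([201, 500, 500]))  # -> 503
print(proxy_put_response([201, 201, 500]))  # -> 201
```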

It's possible that the object servers finished the upload successfully,
but the final stage of the upload took too long, and the proxy server
timed out the requests and sent a 503 to the client. In this case, one
could have all 3 (or N, really) replicas of the object successfully
written to disk, but the client still got a 503. Is that a failure?

It's possible that one object server finished the upload successfully,
but the others did not. In this case, the client will get a 503 due to
lack of quorum. However, let's say that the one, lonely copy of the
object is replicated before its disk malfunctions, and so the cluster
ends up with the full 3 (or N) replicas. Is that a failure?

It's possible that (again) one object server finished the upload
successfully, but the others did not. In this case, the client will get
a 503 due to lack of quorum. However, let's say that the one, lonely
copy of the object dies a horrible death due to disk malfunction, and so
the container listing contains an object that will always 404. Is that a
failure?

It's possible that the client is uploading an object and sends the last
bytes of the request body to the proxy, and before those critical last
bytes are sent to the object servers, the proxy's internal NIC fails,
the object-server requests all time out, and the proxy replies to the
client with a 503. Is that a failure? (Yes, this one definitely is a failure.)
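The scenarios above can be condensed into a toy model of the eventual replica count (illustrative only; the parameter names and the simplifications are mine, assuming a 3-replica policy):

```python
def final_replica_count(written_at_put, replicated, lone_disk_survived):
    """Eventual number of replicas on disk, under the simplified
    assumptions of the scenarios above."""
    if written_at_put == 0:
        return 0                 # nothing ever reached disk
    if written_at_put == 3:
        return 3                 # already fully durable
    if replicated:
        return 3                 # replicator restored full durability
    return written_at_put if lone_disk_survived else 0

# Every one of these returned 503 to the client, yet:
print(final_replica_count(3, False, True))   # timed-out upload    -> 3
print(final_replica_count(1, True,  True))   # replicated in time  -> 3
print(final_replica_count(1, False, False))  # lone copy died      -> 0
print(final_replica_count(0, False, False))  # NIC failed mid-body -> 0
```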

I hope this helps clarify that a 503 indicates neither upload success
nor upload failure, but simply that something went wrong somewhere, and
that the object may exist and have full durability, may exist with poor
durability, or may not exist at all.
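Given that ambiguity, one reasonable client-side strategy on a 503 is simply to retry the PUT, since re-uploading the same object just overwrites it. A hedged sketch (`put_with_retries` and `put_fn` are invented names, not part of any Swift client library):

```python
import time

def put_with_retries(put_fn, retries=3, backoff=1.0):
    """Call put_fn (which performs the upload and returns an HTTP
    status code) until it yields something other than 503, or the
    retries are exhausted. A 503 is indeterminate, so we back off
    and try again; any other status is returned to the caller."""
    for attempt in range(retries):
        status = put_fn()
        if status != 503:                       # success or definite error
            return status
        time.sleep(backoff * 2 ** attempt)      # exponential backoff
    return status
```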

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



More information about the Openstack mailing list