[openstack-dev] [swift] On Object placement

Halterman, Jonathan jonathan.halterman at hp.com
Tue Mar 3 16:33:21 UTC 2015


Hi Christian,

Sorry for the slow response. I was looking into the feasibility of your
suggestion for Sahara in particular and it took a bit.

On 2/19/15, 2:46 AM, "Christian Schwede" <christian.schwede at enovance.com>
wrote:

>Hello Jonathan,
>
>On 18.02.15 18:13, Halterman, Jonathan wrote:
>>>> 1. Swift should allow authorized services to place a given number
>>>> of object replicas onto a particular rack, and onto separate
>>>> racks.
>>> 
>>> This is already possible if you use zones and regions in your ring
>>> files. For example, if you have 2 racks, you could assign one zone
>>> to each of them and Swift places at least one replica on each
>>> rack.
>>> 
>>> Because Swift takes device weights into account, you could also
>>> ensure that a specific rack gets two copies and another rack only one.
>> 
>> Presumably a deployment would/should match the DC layout, where
>> racks could correspond to AZs.
>
>yes, that makes a lot of sense (to assign zones to racks), because in
>this case you can ensure that there aren't multiple replicas stored
>within the same rack. You can still access your data if a rack goes down
>(power, network, maintenance).
>
>>> However, this is only true as long as all primary nodes are
>>> accessible. If Swift stores data on a handoff node this data might
>>> be written to a different node first, and moved to the primary node
>>> later on.
>>> 
>>> Note that placing objects on nodes other than the primary nodes (for
>>> example via an authorized service, as you described) will only keep
>>> the data on those nodes until the replicator moves it to the
>>> primary nodes described by the ring. As far as I can see there is
>>> no way to ensure that an authorized service can decide where to
>>> place data and that this data stays on the selected nodes. That
>>> would require a fundamental change within Swift.
>> 
>> So - how can we influence where data is stored? In terms of
>> placement based on a hash ring, I'm thinking of perhaps restricting
>> the placement of an object to a subset of the ring based on a zone.
>> We can still hash an object somewhere on the ring; for the purposes
>> of controlling locality, we just want it to be within (or outside of)
>> a particular zone. Any ideas?
>
>You can't (at least not from the client side). The ring determines the
>placement, and if you have more zones (or regions) than replicas you
>can't ensure an object replica is stored within a given rack. Even
>if you store it on a handoff node it will be moved to the primary node
>sooner or later.
>Determining that an object is stored in a specific zone is not possible
>with the current architecture; you can only discover in which zone it
>will eventually be placed (based on the ring).
>
>What you could do (especially if you have more racks than replicas) is
>to use storage policies, assign only three racks to each policy, and
>split them into three zones (if you store three replicas).
>For example, let's assume you have 5 racks, then you create 5 storage
>policies (SP) with the following assignment:
>
>			Rack
>SP	1	2	3	4	5
>0	x	x	x
>1		x	x	x
>2			x	x	x
>3	x			x	x
>4	x	x			x
>
>Doing this you can ensure the following:
>- Data is distributed roughly evenly across the cluster (provided the
>storage policies themselves are used evenly)
>- For a given SP you can ensure that a replica is stored in a specific
>rack; and because an SP is assigned to a container you can determine the
>SP based on the container metadata (name SP0 "rack_1_2_3" and so on to
>make it even simpler for the application to determine the racks).
>
>Would that help in your case?

While this wouldn’t give us all the control we need (2 replicas on 1 rack,
1 replica on another rack), ensuring that at least 1 copy winds up on a
particular rack gets us part of the way there. Given the way Swift’s
placement works, are the other replicas likely to end up on different racks?

Where this might not work is for services that need to control rack
locality and allow users to select the containers that data is placed in.
This is currently the case with Sahara.
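
To make this concrete, here is a rough sketch of how an application could
pin data to a set of racks by choosing a container with the matching storage
policy, along the lines of your "rack_1_2_3" naming suggestion. The auth
details and names below are placeholders, and it assumes python-swiftclient:

    from swiftclient import client

    conn = client.Connection(authurl='http://keystone:5000/v2.0',
                             user='tenant:user', key='secret',
                             auth_version='2.0')

    # Create a container whose policy keeps one replica on each of racks 1-3.
    conn.put_container('sahara-data',
                       headers={'X-Storage-Policy': 'rack_1_2_3'})

    # Later, any service can recover the racks from the container metadata.
    meta = conn.head_container('sahara-data')
    policy = meta.get('x-storage-policy')  # e.g. 'rack_1_2_3'
    racks = policy.split('_')[1:]          # ['1', '2', '3']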

>
>
>>>> 2. Swift should allow authorized services and administrators to
>>>> learn which racks an object resides on, along with endpoints.
>>> 
>>> You already mentioned the /endpoints middleware, though it is
>>> currently not protected, so unauthenticated access is allowed if
>>> it is enabled.
>> 
>> This is good to know. We still need to learn which rack an object
>> resides on though. This information is important in determining
>> whether a Swift object resides on the same rack as a VM.
>
>Well, that information is available using the /endpoints middleware. If
>you know the server IPs in a rack, you can compare them to the output
>from the endpoints middleware.

We don’t actually know the server IPs in a rack though, and collecting and
maintaining this host->rack information is something we’d like to avoid
having every individual service do. Sahara currently does collect this
information, but it’s a bit awkward for a downstream/platform service to
be doing so, and the information is not reusable by other services.

From the perspective of any given service that might use Swift, we simply
want to learn which racks an object resides on so we can do reads from the
closest rack. Effectively, this means that Swift would need to maintain
some internal host->rack association and make it available somehow
(perhaps via /endpoints).
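
For reference, the kind of lookup we have in mind would look roughly like
the sketch below, assuming the list_endpoints middleware is enabled on the
proxy and that a host->rack mapping exists somewhere (building that mapping
per service is exactly what we want to avoid; the addresses and names here
are purely illustrative):

    from urlparse import urlparse  # urllib.parse on Python 3

    import requests

    # Assumed host->rack map that each service would have to maintain today.
    HOST_TO_RACK = {'10.0.1.11': 'rack1', '10.0.2.12': 'rack2'}

    def racks_for_object(proxy_url, account, container, obj):
        # list_endpoints returns a JSON list of URLs such as
        # "http://10.0.1.11:6000/sda1/1234/AUTH_x/c/o"
        url = '%s/endpoints/%s/%s/%s' % (proxy_url, account, container, obj)
        endpoints = requests.get(url).json()
        return set(HOST_TO_RACK.get(urlparse(e).hostname, 'unknown')
                   for e in endpoints)

    print(racks_for_object('http://proxy.example.com:8080',
                           'AUTH_tenant', 'sahara-data', 'part-00000'))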

>
>>> You could easily add another small middleware to the pipeline to
>>> check authentication and grant or deny access to /endpoints based
>>> on the authentication. You can also get the node (and disk) if you
>>> have access to the ring files. There is a tool included in the
>>> Swift source code called "swift-get-nodes"; you could simply reuse
>>> the existing code in your own projects.
>> 
>> I'm guessing this would not work for in-cloud services?
>
>Do you mean public cloud services? You always need access to the storage
>servers themselves to access objects directly, and these should be
>accessible only via an internal, protected network (and only the proxy
>servers should have access to that network).

Yep - we have services running on VMs that we’d like to be able to learn
which racks an object resides on, whether that’s via /endpoints or some
other mechanism. Since they are not running alongside Swift, though, they
wouldn’t be able to use the middleware (AFAIK). Any suggestions?
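
For completeness, reusing the ring code directly (which is what
swift-get-nodes does) would look roughly like the sketch below, but it
needs the ring files (e.g. /etc/swift/object.ring.gz) on the local
filesystem, which is exactly why it doesn't seem to be an option for
services running in guest VMs. The account/container/object names are
placeholders:

    from swift.common.ring import Ring

    ring = Ring('/etc/swift', ring_name='object')
    part, nodes = ring.get_nodes('AUTH_tenant', 'sahara-data', 'part-00000')
    for node in nodes:
        # region and zone are the only placement hints the ring carries;
        # mapping them to racks is a deployment convention.
        print('%s:%s/%s region=%s zone=%s' % (node['ip'], node['port'],
                                              node['device'],
                                              node['region'], node['zone']))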

>
>Christian
