[Openstack] (Juno) Multiple rings Swift

John Dickinson me at not.mn
Tue Dec 16 20:27:36 UTC 2014


> On Dec 16, 2014, at 12:06 PM, Amit Anand <aanand at viimed.com> wrote:
> 
> Thanks again John. Out of curiosity, is it possible to see what is where? Let's say I have uploaded a video and want to see where the three copies lie out of the 4 nodes I have?

You can find this with the swift-get-nodes tool. It performs the ring lookup and shows where an object with that name would be placed in Swift. For example, on my SAIO dev environment:

$ swift-get-nodes /etc/swift/object.ring.gz AUTH_foo/bar/baz

Account  	AUTH_foo
Container	bar
Object   	baz


Partition	29
Hash     	075ad946c11d8c21e2b97a08b5da8c48

Server:Port Device	127.0.0.1:6010 d1
Server:Port Device	127.0.0.1:6030 d3
Server:Port Device	127.0.0.1:6020 d2
Server:Port Device	127.0.0.1:6040 d4


curl -I -XHEAD "http://127.0.0.1:6010/d1/29/AUTH_foo/bar/baz"
curl -I -XHEAD "http://127.0.0.1:6030/d3/29/AUTH_foo/bar/baz"
curl -I -XHEAD "http://127.0.0.1:6020/d2/29/AUTH_foo/bar/baz"
curl -I -XHEAD "http://127.0.0.1:6040/d4/29/AUTH_foo/bar/baz"


Use your own device location of servers:
such as "export DEVICE=/srv/node"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/d1/objects/29/c48/075ad946c11d8c21e2b97a08b5da8c48"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/d3/objects/29/c48/075ad946c11d8c21e2b97a08b5da8c48"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/d2/objects/29/c48/075ad946c11d8c21e2b97a08b5da8c48"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/d4/objects/29/c48/075ad946c11d8c21e2b97a08b5da8c48"

note: `/srv/node*` is used as default value of `devices`, the real value is set in the config file on each storage node.
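If you want to quickly confirm how many of those locations actually hold the object, one option is to loop over the same curl commands and just print the status codes. A minimal sketch, using the URLs printed above for my SAIO (substitute the output for your own ring); with three replicas you would expect three 200s, and any handoff location listed would normally return 404:

# print the HTTP status for each listed location of AUTH_foo/bar/baz
for url in \
    "http://127.0.0.1:6010/d1/29/AUTH_foo/bar/baz" \
    "http://127.0.0.1:6030/d3/29/AUTH_foo/bar/baz" \
    "http://127.0.0.1:6020/d2/29/AUTH_foo/bar/baz" \
    "http://127.0.0.1:6040/d4/29/AUTH_foo/bar/baz"
do
    # -s silences progress, -I sends a HEAD request, -o drops the headers,
    # -w prints just the response code
    printf '%s  %s\n' "$(curl -s -I -o /dev/null -w '%{http_code}' "$url")" "$url"
done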




> 
> And you guys don't have a free version of SwiftStack available by any chance, do you? :-)


Yes, we do. https://swiftstack.com/customer/signup/


> 
> 
> 
> 
> On Tue, Dec 16, 2014 at 12:48 PM, John Dickinson <me at not.mn> wrote:
> Assuming your regions are pretty close to the same size, that's exactly what you'll get with 3 replicas across 2 regions. Some data will have 2 replicas in region 1 and one in region 2. Other data will have 1 in region 1 and 2 in region 2.
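To put rough numbers on that, using the figures from the ring output elsewhere in this thread: 1024 partitions x 3 replicas = 3072 partition-replicas in total, so with two equally weighted regions roughly 1536 should eventually land in each region. That is why individual partitions end up split 2+1 or 1+2 across the regions rather than every object getting the same layout.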
> 
> --John
> 
> 
> 
> 
> > On Dec 16, 2014, at 9:39 AM, Amit Anand <aanand at viimed.com> wrote:
> >
> > Ok cool, I'll wait it out and see what happens. So now I have another stupid question: after all is said and done, how many copies of my data will I have?! What I am aiming for is something like 2 regions and 3 replicas, i.e., 2 copies of the data in region 1 and one copy in region 2.
> >
> > On Tue, Dec 16, 2014 at 12:35 PM, John Dickinson <me at not.mn> wrote:
> > That's normal. See the "...or none can be due to min_part_hours" message. Swift is refusing to move more data until the data that is likely still in flight has settled. See https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
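For reference, that cooldown is the builder's min_part_hours setting. A rough sketch of inspecting or adjusting it with swift-ring-builder subcommands; pretend_min_part_hours_passed should really only be used on a test or dev cluster, since it lets a rebalance move partitions that may still be settling:

# the current value is printed in the builder summary
swift-ring-builder object.builder

# raise the cooldown to 2 hours for future rebalances (example value)
swift-ring-builder object.builder set_min_part_hours 2

# test/dev only: clear the cooldown so an immediate rebalance is allowed
swift-ring-builder object.builder pretend_min_part_hours_passed
swift-ring-builder object.builder rebalance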
> >
> > --John
> >
> >
> >
> >
> >
> >
> > > On Dec 16, 2014, at 9:09 AM, Amit Anand <aanand at viimed.com> wrote:
> > >
> > > Hi John, thank you!
> > >
> > > So I went ahead and added two more storage nodes to the existing rings (object, account, container) and tried to rebalance; on the controller I got this:
> > >
> > > [root at controller swift]# swift-ring-builder object.builder rebalance
> > > Reassigned 1024 (100.00%) partitions. Balance is now 38.80.
> > > -------------------------------------------------------------------------------
> > > NOTE: Balance of 38.80 indicates you should push this
> > >       ring, wait at least 1 hours, and rebalance/repush.
> > > -------------------------------------------------------------------------------
> > >
> > >
> > > For all three. So while waiting, I went ahead and copied the *.gz files and swift.conf to the new nodes and started the object storage services on both of the new storage nodes... Now, after waiting about an hour, I am seeing this when I try to rebalance:
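A minimal sketch of that push step, assuming (as in the ring output below) the builder and ring files live in /etc/swift on the controller and the storage nodes are 10.7.5.51-54; only the *.ring.gz files need to go to the nodes, and they should be re-pushed after every rebalance:

# on the controller, copy the freshly rebalanced rings to every storage node
cd /etc/swift
for node in 10.7.5.51 10.7.5.52 10.7.5.53 10.7.5.54; do
    scp account.ring.gz container.ring.gz object.ring.gz ${node}:/etc/swift/
done
# then wait at least min_part_hours before rebalancing again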
> > >
> > > [root at controller swift]# swift-ring-builder object.builder rebalance
> > > No partitions could be reassigned.
> > > Either none need to be or none can be due to min_part_hours [1].
> > >
> > > Devices 4,5,6,7 are the new ones I added in region 2.
> > >
> > >
> > > [root at controller swift]#  swift-ring-builder object.builder
> > > object.builder, build version 9
> > > 1024 partitions, 3.000000 replicas, 2 regions, 2 zones, 8 devices, 38.80 balance
> > > The minimum number of hours before a partition can be reassigned is 1
> > > Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
> > >              0       1     1       10.7.5.51  6000       10.7.5.51              6000      sda3 100.00        501   30.47
> > >              1       1     1       10.7.5.51  6000       10.7.5.51              6000      sda4 100.00        533   38.80
> > >              2       1     1       10.7.5.52  6000       10.7.5.52              6000      sda3 100.00        512   33.33
> > >              3       1     1       10.7.5.52  6000       10.7.5.52              6000      sda4 100.00        502   30.73
> > >              4       2     1       10.7.5.53  6000       10.7.5.53              6000      sda3 100.00        256  -33.33
> > >              5       2     1       10.7.5.53  6000       10.7.5.53              6000      sda4 100.00        256  -33.33
> > >              6       2     1       10.7.5.54  6000       10.7.5.54              6000      sda3 100.00        256  -33.33
> > >              7       2     1       10.7.5.54  6000       10.7.5.54              6000      sda4 100.00        256  -33.33
> > >
> > >
> > >
> > > All three rings (container, object, account) have -33.33 for the balance of the new devices. Is this normal, or did I do something incorrect? It doesn't seem to be replicating the data to the new nodes (or at least it looks like it stopped?), but I am not sure. I would appreciate any insight. Thanks!
> > >
> > > Amit
> > >
> > >
> > >
> > >
> > > On Mon, Dec 15, 2014 at 1:49 PM, John Dickinson <me at not.mn> wrote:
> > > Sounds like you're looking for a global cluster. You don't need multiple rings for that; Swift supports it natively. When you add a new device to the ring, you add it in a different region, and Swift takes care of the rest.
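As a rough sketch of what that looks like with swift-ring-builder (using the IPs, port, device names, and weights that appear in the ring output earlier in this thread; the rNzN prefix on the add command is where the region and zone are specified):

# add the new devices into region 2, zone 1 of the existing object ring
swift-ring-builder object.builder add r2z1-10.7.5.53:6000/sda3 100
swift-ring-builder object.builder add r2z1-10.7.5.53:6000/sda4 100
swift-ring-builder object.builder add r2z1-10.7.5.54:6000/sda3 100
swift-ring-builder object.builder add r2z1-10.7.5.54:6000/sda4 100
swift-ring-builder object.builder rebalance
# repeat for account.builder and container.builder, then push the *.ring.gz files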
> > >
> > > Here's some more information:
> > >
> > > http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
> > > https://www.youtube.com/watch?v=mcaTwhP_rPE
> > > https://www.youtube.com/watch?v=LpmBRqevuVU
> > >
> > > https://swiftstack.com/blog/2013/07/02/swift-1-9-0-release/
> > > https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/
> > > https://www.swiftstack.com/docs/admin/cluster_management/regions.html
> > >
> > >
> > >
> > > --John
> > >
> > >
> > >
> > >
> > >
> > > > On Dec 15, 2014, at 10:15 AM, Amit Anand <aanand at viimed.com> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I was wondering if anyone knew of any good blog posts or videos that could show/explain what I am trying to do. I have Juno set up and it is working great (thanks to everyone's help on here)! Now, I would like to add two more object store nodes, but as a separate "datacenter", as it were, and replicate between my existing ring and the new one. I'm not sure exactly what to do for the account/container/object rings and how to get them to replicate (so if one goes down the other will still continue to serve data). I am also going to try to add another storage node just as a backup for existing data. If anyone has any good links to send me, I would appreciate it!
> > > >
> > > > Thanks!
> > > > Amit Anand
> > > > _______________________________________________
> > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > > Post to     : openstack at lists.openstack.org
> > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > >
> > >
> >
> >
> 
> 
