[Openstack] (Juno) Multiple rings Swift
Amit Anand
aanand at viimed.com
Tue Dec 16 17:39:05 UTC 2014
Ok, cool, I'll wait it out and see what happens. Now I have another
question: after all is said and done, how many copies of my data will I
have? What I am aiming for is 2 regions and 3 replicas, i.e., two copies
of the data in region 1 and one copy in region 2.
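For reference, the total number of copies stays at whatever the ring was
built with (3.000000 here) no matter how many regions the devices span;
regions only affect where those three copies are placed. A quick way to
check where the copies of a particular object would land is swift-get-nodes
(the account/container/object names below are placeholders):

    # Placeholder names; substitute a real account/container/object.
    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject

    # The output lists one server/device line per replica (three here).
    # Mapping those IPs back to the ring's device table shows how the
    # copies are split between region 1 and region 2.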
On Tue, Dec 16, 2014 at 12:35 PM, John Dickinson <me at not.mn> wrote:
> That's normal. See the "...or none can be due to min_part_hours" message.
> Swift is refusing to move more data until the data that's likely still in
> flight has settled. See
> https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
>
> --John
>
>
>
>
>
>
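As an aside on the min_part_hours point: the lockout window can be inspected
and adjusted with the ring builder itself. A minimal sketch, run on the node
that holds the .builder files:

    # The current value is printed in the builder summary.
    swift-ring-builder object.builder

    # Set the window explicitly (1 hour is the value in use in this thread).
    swift-ring-builder object.builder set_min_part_hours 1

    # Test environments only: pretend the window has elapsed so a rebalance
    # is allowed immediately. Avoid this on a cluster holding real data.
    swift-ring-builder object.builder pretend_min_part_hours_passed
    swift-ring-builder object.builder rebalance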
> > On Dec 16, 2014, at 9:09 AM, Amit Anand <aanand at viimed.com> wrote:
> >
> > Hi John thank you!
> >
> > So I went ahead and added two more storage nodes to the existing rings
> > (object, account, container) and tried to rebalance on the controller.
> > I got this:
> >
> > [root@controller swift]# swift-ring-builder object.builder rebalance
> > Reassigned 1024 (100.00%) partitions. Balance is now 38.80.
> > -------------------------------------------------------------------------------
> > NOTE: Balance of 38.80 indicates you should push this
> > ring, wait at least 1 hours, and rebalance/repush.
> > -------------------------------------------------------------------------------
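For anyone following along, the step that produced the output above is adding
the new devices in region 2 and then rebalancing. A sketch using the IPs,
port, and weights that appear in the ring output further down (the r2z1
prefix means region 2, zone 1):

    # Add the four new devices in region 2, zone 1 (weights match the
    # existing region 1 devices).
    swift-ring-builder object.builder add r2z1-10.7.5.53:6000/sda3 100
    swift-ring-builder object.builder add r2z1-10.7.5.53:6000/sda4 100
    swift-ring-builder object.builder add r2z1-10.7.5.54:6000/sda3 100
    swift-ring-builder object.builder add r2z1-10.7.5.54:6000/sda4 100

    # Repeat for account.builder and container.builder, then rebalance each.
    swift-ring-builder object.builder rebalance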
> >
> >
> > For all three. So while waiting, I went ahead and added the *.gz files
> > and swift.conf to the new nodes, and started the Object Storage services
> > on both the new storage nodes. Now, after waiting about an hour, I am
> > seeing this when I try to rebalance:
> >
> > [root@controller swift]# swift-ring-builder object.builder rebalance
> > No partitions could be reassigned.
> > Either none need to be or none can be due to min_part_hours [1].
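The "push this ring" step means copying the rebuilt *.ring.gz files (not the
.builder files) plus swift.conf to every node, which is what the paragraph
above describes. A sketch, assuming the rings are built under /etc/swift on
the controller and that swift-init (shipped with Swift) manages the services:

    # Copy the ring files and swift.conf to each new storage node
    # (addresses and paths are illustrative).
    for node in 10.7.5.53 10.7.5.54; do
        scp /etc/swift/*.ring.gz /etc/swift/swift.conf ${node}:/etc/swift/
    done

    # On each storage node, restart the Swift services so they pick up the
    # new rings (on RDO/CentOS the systemd units can be used instead).
    swift-init account container object restart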
> >
> > Devices 4,5,6,7 are the new ones I added in region 2.
> >
> >
> > [root@controller swift]# swift-ring-builder object.builder
> > object.builder, build version 9
> > 1024 partitions, 3.000000 replicas, 2 regions, 2 zones, 8 devices, 38.80 balance
> > The minimum number of hours before a partition can be reassigned is 1
> > Devices:    id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
> >              0       1     1   10.7.5.51  6000       10.7.5.51              6000  sda3  100.00         501    30.47
> >              1       1     1   10.7.5.51  6000       10.7.5.51              6000  sda4  100.00         533    38.80
> >              2       1     1   10.7.5.52  6000       10.7.5.52              6000  sda3  100.00         512    33.33
> >              3       1     1   10.7.5.52  6000       10.7.5.52              6000  sda4  100.00         502    30.73
> >              4       2     1   10.7.5.53  6000       10.7.5.53              6000  sda3  100.00         256   -33.33
> >              5       2     1   10.7.5.53  6000       10.7.5.53              6000  sda4  100.00         256   -33.33
> >              6       2     1   10.7.5.54  6000       10.7.5.54              6000  sda3  100.00         256   -33.33
> >              7       2     1   10.7.5.54  6000       10.7.5.54              6000  sda4  100.00         256   -33.33
> >
> >
> >
> > All three rings (container, object, account) show -33.33 balance for the
> > new devices. Is this normal, or did I do something incorrect? It doesn't
> > seem to be replicating the data to the new nodes (or at least it looks
> > like it stopped?), but I am not sure. I would appreciate any insight.
> > Thanks!
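The -33.33 figures are expected at this stage and are just arithmetic, not a
sign that replication has stopped. With 1024 partitions and 3 replicas there
are 3072 partition-replicas, so eight equal-weight devices would ideally hold
384 each; the new devices only received 256 on the first pass because a
single rebalance will move at most one replica of any given partition within
the min_part_hours window. A quick sketch of the arithmetic (shell integer
math, so the percentages come out truncated):

    # Ideal partition-replicas per device with equal weights.
    echo $(( 1024 * 3 / 8 ))              # 384

    # Balance of a new device holding 256 partitions.
    echo $(( (256 - 384) * 100 / 384 ))   # -33  (reported as -33.33)

    # Balance of device 1 holding 533 partitions.
    echo $(( (533 - 384) * 100 / 384 ))   # 38   (reported as 38.80)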
> >
> > Amit
> >
> >
> >
> >
> > On Mon, Dec 15, 2014 at 1:49 PM, John Dickinson <me at not.mn> wrote:
> > Sounds like you're looking for a global cluster. You don't need multiple
> rings for this. Swift can support this. When you add a new device to a
> ring, you add it in a different region, and Swift takes care of it for you.
> >
> > Here's some more information:
> >
> > http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
> > https://www.youtube.com/watch?v=mcaTwhP_rPE
> > https://www.youtube.com/watch?v=LpmBRqevuVU
> > https://swiftstack.com/blog/2013/07/02/swift-1-9-0-release/
> > https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/
> > https://www.swiftstack.com/docs/admin/cluster_management/regions.html
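One piece that usually accompanies a two-region setup (covered in the
global-clusters links above) is read/write affinity on the proxy servers, so
clients in each region prefer nearby copies. A sketch of the relevant
proxy-server.conf settings for the proxies in region 1 (values are
illustrative):

    # /etc/swift/proxy-server.conf on the region 1 proxies
    [app:proxy-server]
    use = egg:swift#proxy
    sorting_method = affinity
    # Prefer reading from region 1 devices first.
    read_affinity = r1=100
    # Write initially to region 1; replication moves copies to region 2 later.
    write_affinity = r1
    write_affinity_node_count = 2 * replicas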
> >
> >
> >
> > --John
> >
> >
> >
> >
> >
> > > On Dec 15, 2014, at 10:15 AM, Amit Anand <aanand at viimed.com> wrote:
> > >
> > > Hi all,
> > >
> > > I was wondering if anyone knew of any good blog posts or videos that
> > > could show/explain what I am trying to do. I have Juno set up and it is
> > > working great (thanks to everyone's help on here)! Now I would like to
> > > add two more Object Storage nodes, but as a separate "datacenter" as it
> > > were, and replicate between my existing ring and the new one. I'm not
> > > sure exactly what to do for the account/container/object rings and how
> > > to get them to replicate (so if one goes down the other will still
> > > continue to serve data). I am also going to try to add another storage
> > > node just as a backup for existing data. If anyone has any good links
> > > to send me, I would appreciate it!
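On the "what to do for the account/container/object rings" part: with a
global cluster there is still only one of each ring, built once and pushed to
every node. A minimal sketch matching the parameters visible in the ring
output quoted above (part power 10, i.e. 1024 partitions, 3 replicas,
min_part_hours 1; adjust for a real deployment, and note the standard ports
6002/6001/6000 for account/container/object):

    cd /etc/swift

    # One builder per ring type.
    swift-ring-builder account.builder   create 10 3 1
    swift-ring-builder container.builder create 10 3 1
    swift-ring-builder object.builder    create 10 3 1

    # Devices in the existing "datacenter" go in region 1, the new ones in
    # region 2, for example:
    swift-ring-builder object.builder add r1z1-10.7.5.51:6000/sda3 100
    swift-ring-builder object.builder add r2z1-10.7.5.53:6000/sda3 100

    swift-ring-builder object.builder rebalance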
> > >
> > > Thanks!
> > > Amit Anand
> >
> >
>
>