Hi Shyam,

If I am reading your ring output correctly, it looks like only the devices on node .202 have a weight set, which is why all of your objects are going to that one node. You can update the weight of the other devices, rebalance, and things should get distributed correctly.
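Something along these lines should do it (a rough sketch only; the z2/z3/z4 search values are meant to match all devices in those zones, so adjust them if your swift-ring-builder version expects a different search-value format, and I'm assuming the builder files are in /etc/swift on the node where you built them):

    cd /etc/swift
    # Give the devices in the other zones a weight, then rebalance:
    swift-ring-builder object.builder set_weight z2 1.0
    swift-ring-builder object.builder set_weight z3 1.0
    swift-ring-builder object.builder set_weight z4 1.0
    swift-ring-builder object.builder rebalance
    # Repeat for account.builder and container.builder, then push the
    # regenerated *.ring.gz files out to every node.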
--
Chuck

On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N <nspmangalore@gmail.com> wrote:
Hi,

I created a Swift cluster and configured the rings like this...

swift-ring-builder object.builder create 10 3 1
ubuntu-202:/etc/swift$ swift-ring-builder object.builder
object.builder, build version 12
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:   id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
            0       1     1  10.3.0.202  6010      10.3.0.202              6010  xvdb    1.00        1024   300.00
            1       1     1  10.3.0.202  6020      10.3.0.202              6020  xvdc    1.00        1024   300.00
            2       1     1  10.3.0.202  6030      10.3.0.202              6030  xvde    1.00        1024   300.00
            3       1     2  10.3.0.212  6010      10.3.0.212              6010  xvdb    1.00           0  -100.00
            4       1     2  10.3.0.212  6020      10.3.0.212              6020  xvdc    1.00           0  -100.00
            5       1     2  10.3.0.212  6030      10.3.0.212              6030  xvde    1.00           0  -100.00
            6       1     3  10.3.0.222  6010      10.3.0.222              6010  xvdb    1.00           0  -100.00
            7       1     3  10.3.0.222  6020      10.3.0.222              6020  xvdc    1.00           0  -100.00
            8       1     3  10.3.0.222  6030      10.3.0.222              6030  xvde    1.00           0  -100.00
            9       1     4  10.3.0.232  6010      10.3.0.232              6010  xvdb    1.00           0  -100.00
           10       1     4  10.3.0.232  6020      10.3.0.232              6020  xvdc    1.00           0  -100.00
           11       1     4  10.3.0.232  6030      10.3.0.232              6030  xvde    1.00           0  -100.00
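For reference, each device was added with the usual add syntax, roughly like this (reconstructed from the listing above rather than the literal commands I ran, so treat the exact form as approximate):

    swift-ring-builder object.builder add r1z1-10.3.0.202:6010/xvdb 1.00
    swift-ring-builder object.builder add r1z1-10.3.0.202:6020/xvdc 1.00
    swift-ring-builder object.builder add r1z1-10.3.0.202:6030/xvde 1.00
    # ...and similarly for zones 2-4 (10.3.0.212, 10.3.0.222, 10.3.0.232)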
Container and account rings have a similar configuration.
Once the rings were created and all the disks were added to the rings as above, I ran rebalance on each ring. (I ran rebalance after adding each of the nodes above.)
Then I immediately scp'd the rings to all the other nodes in the cluster.

I now observe that the objects are all going to 10.3.0.202. I don't see the objects being replicated to the other nodes. So much so that 202 is approaching 100% disk usage, while the other nodes are almost completely empty.
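(One way to confirm which nodes a given object should map to in the ring is swift-get-nodes; the account/container/object names below are just placeholders:

    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject

It prints the devices expected to hold each replica of that object.)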
What am I doing wrong? Am I not supposed to run the rebalance operation after adding each disk/node?

Thanks in advance for the help.
--
-Shyam
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev