<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>Yes, I think it would be a great topic for the summit.<br><br>--John<div><br></div></div><div><br>On Jan 14, 2013, at 7:54 AM, Tong Li <<a href="mailto:litong01@us.ibm.com">litong01@us.ibm.com</a>> wrote:<br><br></div><blockquote type="cite"><div>
<p><font size="2" face="sans-serif">John and swifters,</font><br>
<font size="2" face="sans-serif"> I see this as a big problem, and the scenario Alejandro describes is a very common one. I wonder if it is possible to have two rings side by side (one with the new partition power, one with the existing power): when a significant change is made to the hardware or partitioning, a new ring is started with a command; new data written to Swift uses the new ring, while existing data on the old ring stays available and is slowly (without impacting normal use) but automatically moved to the new ring; once the old ring shrinks to size zero, it can be removed. The idea is to have two virtual Swift systems working side by side, with the migration from the existing ring to the new ring done without interrupting service. Can we put this topic/feature on the agenda for the next summit, and consider it a high-priority feature for coming releases?</font><br>
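A rough sketch of what the two-rings-side-by-side idea might look like at the proxy layer. Everything below is a hypothetical illustration, not existing Swift code: `DualRing`, `_FakeRing`, and the `get_nodes` interface are assumed names for this sketch.

```python
# Hypothetical sketch: writes go only to the new ring; reads consult the
# new ring first and fall back to the old one while a background task
# drains data across. Not part of Swift's API.

class DualRing:
    def __init__(self, new_ring, old_ring):
        self.new_ring = new_ring  # built with the new partition power
        self.old_ring = old_ring  # legacy ring being drained

    def read_nodes(self, account, container, obj):
        # Until the old ring shrinks to zero, a read may need to consult
        # both rings, newest first.
        return [ring.get_nodes(account, container, obj)
                for ring in (self.new_ring, self.old_ring)]

    def write_nodes(self, account, container, obj):
        # All new data lands only on the new ring, so the old ring can
        # only ever shrink.
        return self.new_ring.get_nodes(account, container, obj)


class _FakeRing:
    """Stand-in for a real ring object, for demonstration only."""
    def __init__(self, label):
        self.label = label

    def get_nodes(self, account, container, obj):
        return self.label


dual = DualRing(_FakeRing("new"), _FakeRing("old"))
```

The key property is that writes never touch the old ring, so the migration can only ever move data in one direction and the old ring eventually empties out.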
<br>
<font size="2" face="Default Sans
Serif">Thanks.</font><br>
<br>
<font size="2" face="sans-serif">Tong Li<br>
Emerging Technologies & Standards<br>
Building 501/B205<br>
<a href="mailto:litong01@us.ibm.com">litong01@us.ibm.com</a></font><br>
<br>
<font size="2" color="#424282" face="sans-serif">John Dickinson ---01/11/2013 04:28:47 PM---In effect, this would be a complete replacement of your rings, and that is essentially a whole new c</font><br>
<br>
<font size="1" color="#5F5F5F" face="sans-serif">From: </font><font size="1" face="sans-serif">John Dickinson <<a href="mailto:me@not.mn">me@not.mn</a>></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">To: </font><font size="1" face="sans-serif">Alejandro Comisario <<a href="mailto:alejandro.comisario@mercadolibre.com">alejandro.comisario@mercadolibre.com</a>>, </font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Cc: </font><font size="1" face="sans-serif">"<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>" <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>>, openstack <<a href="mailto:openstack@lists.launchpad.net">openstack@lists.launchpad.net</a>></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Date: </font><font size="1" face="sans-serif">01/11/2013 04:28 PM</font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Subject: </font><font size="1" face="sans-serif">Re: [Openstack] [SWIFT] Change the partition power to recreate the RING</font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Sent by: </font><font size="1" face="sans-serif"><a href="mailto:openstack-bounces+litong01=us.ibm.com@lists.launchpad.net">openstack-bounces+litong01=us.ibm.com@lists.launchpad.net</a></font><br>
</p><hr width="100%" size="2" align="left" noshade="" style="color:#8091A5; "><br>
<br>
<br>
<tt><font size="2">In effect, this would be a complete replacement of your rings, and that is essentially a whole new cluster. All of the existing data would need to be rehashed into the new ring before it is available.<br>
<br>
There is no process that rehashes the data to ensure that it is still in the correct partition. Replication only ensures that the partitions are on the right drives.<br>
<br>
To change the number of partitions, you will need to GET all of the data from the old ring and PUT it to the new ring. A more complicated (but perhaps more efficient) solution may involve walking each drive, rehashing and moving the data to the right partition, and then letting replication settle things down.<br>
<br>
Either way, 100% of your existing data will need to at least be rehashed (and probably moved). Your CPU (hashing), disks (read+write), RAM (directory walking), and network (replication) may all be limiting factors in how long it will take to do this. Your per-disk free space may also determine what method you choose.<br>
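John's point that 100% of the data must be rehashed can be made concrete: Swift derives an object's partition from an md5 of its path, keeping only the top `part_power` bits, so changing the power remaps essentially every object. Below is a simplified sketch; real Swift also mixes in a per-cluster hash prefix/suffix from its configuration, and the suffix value here is purely illustrative.

```python
import hashlib
import struct

def swift_partition(account, container, obj, part_power,
                    hash_suffix=b"changeme"):
    """Compute a ring partition the way Swift's Ring.get_part does
    (simplified): md5 of the object path plus the cluster's hash
    suffix, keeping only the top part_power bits."""
    path = f"/{account}/{container}/{obj}".encode()
    digest = hashlib.md5(path + hash_suffix).digest()
    # First 4 bytes as a big-endian unsigned int, shifted so only
    # part_power bits remain.
    return struct.unpack(">I", digest[:4])[0] >> (32 - part_power)

# The same object lands in a different partition under a different
# power, which is why every object must be rehashed (and likely moved)
# after a partition-power change.
old_part = swift_partition("AUTH_test", "photos", "cat.jpg", part_power=18)
new_part = swift_partition("AUTH_test", "photos", "cat.jpg", part_power=12)
```

Since both values come from the same 32-bit hash, lowering the power by 6 bits maps each old partition onto exactly one new partition (`old_part >> 6 == new_part`), though the data still has to be physically relocated on disk.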
<br>
I would not expect any data loss while doing this, but you will probably have availability issues, depending on the data access patterns.<br>
<br>
I'd like to eventually see something in swift that allows for changing the partition power in existing rings, but that will be hard/tricky/non-trivial.<br>
<br>
Good luck.<br>
<br>
--John<br>
<br>
<br>
On Jan 11, 2013, at 1:17 PM, Alejandro Comisario <<a href="mailto:alejandro.comisario@mercadolibre.com">alejandro.comisario@mercadolibre.com</a>> wrote:<br>
<br>
> Hi guys.<br>
> We created a Swift cluster several months ago; the thing is that right now we can't add hardware, and we configured lots of partitions thinking about the final picture of the cluster.<br>
> <br>
> Today each data node has 2500+ partitions per device, and even after tuning the background processes (replicator, auditor & updater) we really want to try to lower the partition power.<br>
> <br>
> Since it's not possible to do that without recreating the ring, we have the luxury of recreating it with a much lower partition power and rebalancing / deploying the new ring.<br>
> <br>
> The question is: with a working cluster holding *existing data*, is it possible to do this and wait for the data to move around *without data loss*?<br>
> If so, would it be reasonable to expect an improvement in overall cluster performance?<br>
> <br>
> We have no problem having a non-working cluster (while the data moves), even for an entire weekend.<br>
> <br>
> Cheers.<br>
> <br>
> <br>
<br>
_______________________________________________<br>
Mailing list: </font></tt><tt><font size="2"><a href="https://launchpad.net/~openstack">https://launchpad.net/~openstack</a></font></tt><tt><font size="2"><br>
Post to : <a href="mailto:openstack@lists.launchpad.net">openstack@lists.launchpad.net</a><br>
Unsubscribe : </font></tt><tt><font size="2"><a href="https://launchpad.net/~openstack">https://launchpad.net/~openstack</a></font></tt><tt><font size="2"><br>
More help : </font></tt><tt><font size="2"><a href="https://help.launchpad.net/ListHelp">https://help.launchpad.net/ListHelp</a></font></tt><tt><font size="2"><br>
</font></tt><br>
</div></blockquote></body></html>