<div dir="ltr"><div class="gmail_default" style="font-family:courier new,monospace"><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jul 28, 2014 at 6:11 PM, Scott Devoid <span dir="ltr"><<a href="mailto:devoid@anl.gov" target="_blank">devoid@anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">For those of you looking for more details, here are the specs for each feature:</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">1. Replication - <a href="https://review.openstack.org/#/c/98308/" target="_blank">https://review.openstack.org/#/c/98308/</a><br>
2. Consistency Groups - <a href="https://review.openstack.org/#/c/96665" target="_blank">https://review.openstack.org/#/c/96665</a></div><div class="gmail_extra"><br></div></div></blockquote><div class="gmail_default" style="font-family:'courier new',monospace">

Yes, thanks very much for linking those, Scott. I'm not convinced those specs are the right place to start, which is part of what I was getting at with the question of how "deep" to go in the Cinder API versus how much to leave up to the admins.
<div class="gmail_default" style="font-family:'courier new',monospace"></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">
</div><div class="gmail_extra">Replication:</div><div class="gmail_extra">
- Looks like this is specifically for replication between two cinder-volume servers with the same storage driver. Which is ok for me.</div><div class="gmail_extra">- I am concerned with the driver API around promoting replicas (e.g. only calling a function on the secondary) since you are probably in a network partition when this is called.</div>

Good point. I think the idea here was for the specific use case of replicating across back ends that are in the same OpenStack cluster, or maybe the same data center. In other words, providing the ability to recover from a device failure by switching over to a secondary that is part of the same OpenStack cluster/region.

The more interesting case that seems to be coming up, however, is geo-replication to a secondary OpenStack deployment, unless maybe I'm misinterpreting some of this?
<div class="gmail_default" style="font-family:'courier new',monospace"></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div class="gmail_extra"><br></div><div class="gmail_extra">Consistency Groups:</div></div></blockquote><div><br></div><div class="gmail_default" style="font-family:'courier new',monospace">I'll try and step through these tomorrow. These are things that we talked about in Atlanta, some things we had answers/solutions for, some we did not. Regardless these are excellent and very applicable points, thanks for raising them here.</div>
<div class="gmail_default" style="font-family:'courier new',monospace"></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">
- It looks like this proposal layers two concepts in consistency group creation:</div><div class="gmail_extra">
* Marking what volume-types may be within the same consistency group. (Operator issue I argue.)</div><div class="gmail_extra"> * Declaring a new consistency group with volumes in it. (User issue).</div><div class="gmail_extra">
- Can you create a consistency group with volumes from separate regions?</div><div class="gmail_extra">- Can you create a consistency group with volumes from separate store backends? What about different drivers? </div><div class="gmail_extra">
- The current proposal will allow multiple volumes to be snapshotted at around the same time and make use of QEMU quiesce functions to ensure that the filesystems are in a good state when the snapshot has been made. Does a separate consistency group snapshot function provide any stronger consistency guarantees than simply calling snapshot on each volume in a for loop? My impression based on this implementation is no.</div>
<div class="gmail_extra">- Then do we need a separate API, as indicated in the "alternatives" section? This essentially becomes a kind of constraint on deleting volumes.</div></div>