<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">Migrations from Essex to Grizzly/Havana</span></blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">... </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span style="font-family:arial,sans-serif;font-size:12.727272033691406px">I would find it entirely suitable to upgrade from Essex to Folsom, then migrate from nova-volume to cinder and from nova-network to quantum, then only to upgrade to Grizzly.</span> </blockquote>
We're in the same spot, upgrading an Essex deployment to Havana. We decided to forgo an incremental upgrade of nova, glance, and keystone, since it wasn't clear we could perform that upgrade without a major service disruption to active VMs, and we had no good way of fully testing the upgrade beforehand.

However, we have a nova-volume service hosting ~500 TB of volume data (>200 volumes) using the ZFS driver. We'd like to be able to "carry" these volumes over the upgrade to cinder. Our current strategy is to deploy cinder and nova-volume on the same machine with separate ZFS pools. When a user is ready to upgrade a volume, they hit a button and a script fires off that 1) creates a new cinder volume of the same size, 2) renames the nova-volume's zvol in ZFS to the new cinder one, and 3) deletes the old nova-volume record.
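
Per volume, the script boils down to something like the sketch below. The names in it are placeholders rather than our real layout (the "tank/nova" and "tank/cinder" parent datasets, the volume-<uuid> naming), and one caveat worth flagging: zfs rename only works within a single pool, so either the two backends share a pool with separate parent datasets, or the rename step becomes a zfs send/receive.

#!/usr/bin/env python
# Sketch of the per-volume cutover. Dataset names and the
# volume-<uuid> naming are placeholders, not our real layout.

import subprocess

NOVA_PARENT = 'tank/nova'      # parent dataset backing nova-volume
CINDER_PARENT = 'tank/cinder'  # parent dataset backing cinder

def run(*cmd):
    # Echo and execute a command, raising if it fails.
    print(' '.join(cmd))
    subprocess.check_call(cmd)

def carry_volume(old_id, new_id):
    """Move an old nova-volume's data under a new cinder volume.

    old_id: UUID of the existing nova-volume.
    new_id: UUID of an empty cinder volume of the same size,
            created beforehand with `cinder create`.
    """
    # Drop the empty zvol cinder just made for the new volume...
    run('zfs', 'destroy', '%s/volume-%s' % (CINDER_PARENT, new_id))
    # ...and rename the old zvol into its place. This is a
    # metadata-only operation, so no data is copied.
    run('zfs', 'rename',
        '%s/volume-%s' % (NOVA_PARENT, old_id),
        '%s/volume-%s' % (CINDER_PARENT, new_id))
    # Finally, retire the old nova-volume record. Since its backing
    # zvol is already gone, this may end up being a direct DB cleanup
    # rather than a plain `nova volume-delete`.
    run('nova', 'volume-delete', old_id)

The appeal of the rename trick is that no data moves, so even our larger volumes should cut over in seconds.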

That's the plan, at least.

~ Scott

On Fri, Nov 1, 2013 at 3:16 PM, Dean Troyer <dtroyer@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="im">On Fri, Nov 1, 2013 at 1:38 PM, Devananda van der Veen <span dir="ltr"><<a href="mailto:devananda.vdv@gmail.com" target="_blank">devananda.vdv@gmail.com</a>></span> wrote:<br>
</div><div class="gmail_extra"><div class="im"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Actually, anyone deploying nova with the "baremetal" driver will face a similar split when Ironic is included in the release. I'm targeting Icehouse, but of course, it's up to the TC when Ironic graduates.<br>
<div class="gmail_extra"><div class="gmail_quote">
<div><br></div><div>This should have a smaller impact than either the neutron or cinder splits, both of which were in widespread use, but I expect we'll see more usage of nova-baremetal crop up now that Havana is released.</div>
>
> I don't recall in which release baremetal first became a supported option; is it only now in Havana? Is it clear in the docs that this sort of situation is coming in the next release or two? (And no, I haven't gone to look for myself; maybe on the plane tomorrow...)
<div class="im">
<div><br></div><div>dt</div><div><br></div>-- <br><br>Dean Troyer<br><a href="mailto:dtroyer@gmail.com" target="_blank">dtroyer@gmail.com</a><br>
</div></div></div>