[openstack-dev] [Nova] Preserving ephemeral block device on rebuild?
Devananda van der Veen
devananda.vdv at gmail.com
Mon Oct 28 15:20:33 UTC 2013
On Mon, Oct 28, 2013 at 6:35 AM, John Griffith
<john.griffith at solidfire.com> wrote:
>
> On Mon, Oct 28, 2013 at 4:49 AM, Robert Collins
> <robertc at robertcollins.net> wrote:
>
>> On 28 October 2013 23:17, John Garbutt <john at johngarbutt.com> wrote:
>> > Is there a reason why you could not just use a Cinder Volume for your
>> > data, in this case?
>>
>> Because this is at the baremetal layer; we want local disks. E.g.,
>> some of the stuff we might put in that partition would be Cinder LVM
>> volumes for serving out to VM guests. Until we have a Cinder
>> implementation that can reference the disks in the same baremetal node
>> an instance is being deployed to, we can't use Cinder. We want that
>> implementation, and since it involves nontrivial changes as well as
>> cross-service interactions, we don't want to do it in nova-baremetal;
>> doing it in Ironic is the right place. But we also don't want to
>> block all progress on TripleO until we get that done in Ironic....
>>
>> > While at first glance it feels rather wrong, and un-cloudy, I do
>> > see something useful about refreshing the base disk and leaving the
>> > data disks alone. Perhaps it's something that could be described in
>> > the block device mapping, where you have a "local volume" that you
>> > choose to be non-ephemeral, except on server terminate, or something
>> > like that?
>>
>> Yeah. Except I'd like to just use ephemeral for that, since it meets
>> all of the criteria already, except that it's detached and recreated
>> on rebuild. This isn't a deep opinion, though; mainly I don't want to
>> invest a bunch of time building something needlessly different from
>> the existing facility, which cloud-init and other tools already know
>> how to locate and use.
>>
>> -Rob
>>
>>
>> --
>> Robert Collins <rbtcollins at hp.com>
>> Distinguished Technologist
>> HP Converged Cloud
>>
> Personally I'd rather go the proper route and try to get what you need
> into Cinder. FWIW, local storage provisioning is something that Vish
> brought up last summit, but nobody picked it up; I'd like to make that
> happen early in Icehouse. I'm not familiar with the work-around you're
> proposing, though, so maybe it's not an issue. I just hate to put in a
> temporary hack that will likely end up taking on a life of its own once
> it lands.
>
I understand the desire to have something functional in nova-bm so you can
iron out the other parts of the TripleO story that will rely on local data
persisting through rebuild(). I think, as you pointed out, this would be
pretty easy to do within the current nova-bm code. I guess I just don't
understand why using Cinder volumes as the local non-ephemeral storage
with libvirt is not sufficient; a rough sketch of what I mean follows.
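Here's a minimal, untested sketch of that approach using python-novaclient
and python-cinderclient; the credentials, UUIDs, and device path are all
made-up placeholders:

# Untested sketch: attach a Cinder volume so data persists across
# rebuild. Credentials and UUIDs below are placeholders.
from cinderclient.v1 import client as cinder_client
from novaclient.v1_1 import client as nova_client

AUTH_URL = 'http://keystone:5000/v2.0'
cinder = cinder_client.Client('demo', 'secret', 'demo', AUTH_URL)
nova = nova_client.Client('demo', 'secret', 'demo', AUTH_URL)

server_id = 'SERVER_UUID'    # instance that will be rebuilt
new_image_id = 'IMAGE_UUID'  # image to rebuild onto

# Create a volume and attach it to the instance.
vol = cinder.volumes.create(size=10, display_name='persistent-data')
nova.volumes.create_server_volume(server_id, vol.id, '/dev/vdb')

# Rebuild re-images the root and ephemeral disks; the attached volume,
# and the data on it, is left alone.
nova.servers.rebuild(server_id, new_image_id)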
Whether you're testing nova-bm on real or emulated hardware, the rebuild
will be the same, and shouldn't use libvirt. The only reason I've thought
of (in the last few minutes) why you'd need to change libvirt is if you're
testing Heat and rebuild() without nova-bm, and you don't want divergent
code (cinder with libvirt || no cinder with nova-bm). It actually makes
more sense to me to test both code paths because the goal is to use cinder
with Ironic, and so I *wouldn't* want to change the way libvirt does this
today -- I would prefer to build the Ironic/Cinder/Nova-driver code to
follow as many of the same code paths as possible.
But you may have very different reasons that I'm overlooking, and I'm
interested to hear them.
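For concreteness, here is roughly how I read Rob's proposal, expressed as
a purely hypothetical flag on a driver's rebuild path (none of these
helper names exist in nova today):

# Hypothetical sketch of "preserve ephemeral on rebuild"; all helper
# names are invented for illustration.
def rebuild(self, instance, image_meta, preserve_ephemeral=False):
    # The root disk is always re-imaged on rebuild.
    self._destroy_root_disk(instance)
    self._create_root_disk(instance, image_meta)

    if not preserve_ephemeral:
        # Today's behavior: the ephemeral partition is destroyed and
        # re-created, losing whatever data was on it.
        self._destroy_ephemeral_disk(instance)
        self._create_ephemeral_disk(instance)
    # With preserve_ephemeral=True, the existing partition, and the
    # data on it, would be left untouched across the rebuild.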
Also, there will be a design session on Cinder for Ironic local storage:
http://summit.openstack.org/cfp/details/350
-Deva