[openstack-dev] [TripleO] [Ironic] [Cinder] "Baremetal volumes" -- how to model direct attached storage

Duncan Thomas duncan.thomas at gmail.com
Thu Nov 13 12:27:08 UTC 2014


The problem with modelling it as a cinder volume rather than a nova
ephemeral volume is that the semantics are just as leaky - cinder
volumes can be detached, attached elsewhere, snapshotted, backed up,
etc., and a directly attached bare metal drive will be able to do
none of those things.
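
To make the mismatch concrete, here is a rough sketch (python-cinderclient,
with placeholder credentials and volume/instance IDs) of the operations
the cinder API promises for every volume, none of which mean anything for
a drive that lives inside a single baremetal node:

  from cinderclient import client

  # Placeholder endpoint/credentials for illustration only.
  cinder = client.Client('1', 'user', 'password', 'tenant',
                         'http://keystone:5000/v2.0')

  vol = cinder.volumes.get('some-volume-id')

  # The bookkeeping calls nova normally drives for attach/detach:
  cinder.volumes.detach(vol)
  cinder.volumes.attach(vol, 'other-instance-uuid', '/dev/vdb')

  cinder.volume_snapshots.create(vol.id)   # snapshot it
  cinder.backups.create(vol.id)            # back it up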

That said, the upcoming cinder-agent code might be of use, since it is
designed to provide discovery and an API around local storage. But
mapping bare metal drives as cinder volumes is really no better than
mapping them as nova ephemeral drives - in both cases the semantics
don't match. I'd rather not bend the cinder semantics out of shape to
clean up the nova ones.
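
For what it's worth, the discovery half is the easy part; as a purely
hypothetical sketch (this is not cinder-agent's actual API, which isn't
settled), it could start as little more than:

  import json
  import subprocess

  def local_block_devices():
      # Enumerate local disks with lsblk; a real agent would also have
      # to report this inventory back to the volume service/scheduler.
      out = subprocess.check_output(
          ['lsblk', '--json', '--bytes', '-o', 'NAME,SIZE,TYPE,MOUNTPOINT'])
      return [dev for dev in json.loads(out)['blockdevices']
              if dev['type'] == 'disk']

The hard part, as above, is what semantics you then promise for those
devices.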



On 13 November 2014 00:30, Clint Byrum <clint at fewbar.com> wrote:
> At each summit since we created "preserve ephemeral" mode in Nova, I
> have had conversations where at least one person's brain breaks for a
> second. There isn't always alcohol involved beforehand, but there is
> almost certainly a drink needed afterwards. The very term is vexing,
> and I think we have done ourselves a disservice by having it, even if
> it was the best option at the time.
>
> To be clear, in TripleO, we need a way to keep the data on a local
> direct attached storage device while deploying a new image to the box.
> If we were on VMs, we'd attach volumes, and just deploy new VMs and move
> the volume over. If we had a SAN, we'd just move the LUNs. But at some
> point when you deploy a cloud you're holding data that is expensive to
> replicate all at once, and so you'd rather just keep using the same
> server instead of trying to move the data.
>
> Since we don't have baremetal Cinder, we had to come up with a way to
> do this, so we used Nova rebuild and slipped it a special command
> that says "don't overwrite the partition you'd normally make the
> 'ephemeral' partition". This works fine, but it is confusing and
> limiting. We'd like something better.
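
For anyone else following along: the workaround boils down to passing
preserve_ephemeral on an ordinary rebuild. Roughly, with python-novaclient
and placeholder credentials/IDs:

  from novaclient import client

  nova = client.Client('2', 'user', 'password', 'tenant',
                       'http://keystone:5000/v2.0')

  server = nova.servers.get('baremetal-node-uuid')

  # Deploy the new image, but leave the "ephemeral" partition untouched.
  nova.servers.rebuild(server, 'new-image-uuid', preserve_ephemeral=True)

or "nova rebuild --preserve-ephemeral <server> <image>" from the CLI.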
>
> I had an interesting discussion with Devananda in which he suggested an
> alternative approach. If we were to bring up cinder-volume on our deploy
> ramdisks, and configure it in such a way that it claimed ownership of
> the section of disk we'd like to preserve, then we could allocate that
> storage as a volume. From there, we could boot from volume, or "attach"
> the volume to the instance (which would really just tell us how to find
> the volume). When we want to write a new image, we can just delete the old
> instance and create a new one, scheduled to wherever that volume already
> is. This would require the nova scheduler to have a filter available
> where we could select a host by the volumes it has, so we can make sure to
> send the instance request back to the box that still has all of the data.
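
The scheduler side of that could be a fairly small filter. Purely as a
sketch (no such filter exists today, and something would still have to
report which volumes each host holds - here that is assumed to show up
as a 'local_volumes' attribute on the host state):

  from nova.scheduler import filters

  class LocalVolumeFilter(filters.BaseHostFilter):
      """Only pass hosts that already hold the requested local volumes."""

      def host_passes(self, host_state, filter_properties):
          hints = filter_properties.get('scheduler_hints') or {}
          wanted = hints.get('local_volumes')
          if not wanted:
              return True
          if not isinstance(wanted, list):
              wanted = [wanted]
          # Assumed to be reported by the host, e.g. by the cinder-volume
          # running on the deploy ramdisk.
          present = set(getattr(host_state, 'local_volumes', []))
          return set(wanted) <= present

The boot request would then just carry a scheduler hint naming the
volume(s) it needs to land next to.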
>
> Alternatively we can keep on using rebuild, but let the volume model the
> preservation rather than our special case.
>
> Thoughts? Suggestions? I feel like this might take some time, but it is
> necessary to consider it now so we can drive any work we need to get it
> done soon.
>



-- 
Duncan Thomas


