[openstack-dev] Extension to volume creation (filesystem and label)
Greg Poirier
greg.poirier at opower.com
Mon Aug 12 18:00:52 UTC 2013
On Mon, Aug 12, 2013 at 10:26 AM, Clint Byrum <clint at fewbar.com> wrote:
> Like others, I am a little dubious about adding a filesystem to these
> disks for a number of reasons. It feels like a violation of "it's just
> a bunch of bits".
>
I actually think that it's a valid concern. I've been trying to come up
with a stable, reasonable solution ever since I sent the original e-mail. :)
> Have you considered putting a GPT on it instead?
We have.
> With a GPT you have a UUID for the disk which you can communicate to the
> host via metadata service. With that you can instruct gdisk to partition
> the right disk programmatically and create the filesystem with native
> in-instance tools.
>
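Concretely, I read that suggestion as something like the following
running inside the instance. This is only a rough sketch: it assumes
512-byte sectors, virtio device naming, and that the expected disk GUID
has been handed to the instance via the metadata service (how it gets
there reliably is the open question):

    import glob
    import uuid

    def gpt_disk_guid(dev):
        # The GPT header lives at LBA 1; the disk GUID sits at offset 56
        # of the header, stored in the usual mixed-endian GUID layout.
        # Reading the raw device needs root.
        with open(dev, 'rb') as f:
            f.seek(512)
            header = f.read(72)
        if header[:8] != b'EFI PART':
            return None  # no GPT on this disk
        return uuid.UUID(bytes_le=header[56:72])

    def find_disk_by_guid(expected_guid):
        # Scan the virtio disks for the GUID we were given via metadata.
        for dev in glob.glob('/dev/vd[a-z]'):
            if gpt_disk_guid(dev) == uuid.UUID(expected_guid):
                return dev
        return None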
I'm not sure that this is any different from the usual pattern (sketched
in code below):
- Examine current disk devices
- Attach volume
- Examine current disk devices
- Get device ID from diff
- Do something
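In code, that dance looks roughly like this. (Again, just a sketch: the
attach step is whatever your tooling uses, and the whole thing is racy
if two volumes are attached at about the same time.)

    import os
    import time

    def block_devices():
        # Block devices the kernel currently knows about, e.g. {'vda'}.
        return set(os.listdir('/sys/block'))

    before = block_devices()
    # ... trigger the attach here (nova volume-attach / the compute API) ...
    added = set()
    for _ in range(30):  # poll up to ~30s for the new disk to appear
        added = block_devices() - before
        if added:
            break
        time.sleep(1)
    # 'added' is now e.g. {'vdb'}, which, as noted below, can disagree
    # with the device name the metadata claims.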
That seems to be pretty much the pattern everyone has used to solve this
problem. What this says to me is that it's a common problem, and perhaps
a failing of Cinder that it doesn't simply provide this functionality.
Even if Cinder doesn't bother creating a filesystem, it seems like it
should make a best effort to ensure that the volume is identifiable
within the instance after attachment, as opposed to the current behavior
of throwing its hands up and letting the stored state lie about the
volume's device name. Right now we have metadata that says the volume is
/dev/vdc when in reality it's /dev/vdb. That's a bug, imo.
> This is pure meta-data, and defines a lot less than a filesystem, so it
> feels like a bigger win for the general purpose case of volumes. It will
> work for any OS that supports GPT, which is likely _every_ modern PC OS.
>
Honestly, the only reason we were considering putting a filesystem on
the volume was so we could use tune2fs to attach a label (specifically
the volume ID) directly to the filesystem. If we can manage to store the
state of the volume attachment in the metadata service and ensure the
validity of that data, then we will go that route. We simply haven't
been able to do that without some kind of wonkiness.
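For reference, the labeling we had in mind was nothing fancier than the
sketch below. One catch: ext2/3/4 labels are capped at 16 characters, so
a full 36-character volume UUID doesn't fit and has to be truncated (or
hashed down).

    import subprocess

    def label_volume(dev, volume_id):
        # ext labels max out at 16 bytes, so the full UUID can't fit;
        # truncation is lossy, but collisions should be rare in practice.
        subprocess.check_call(['tune2fs', '-L', volume_id[:16], dev])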