[openstack-dev] Feedback wanted please on proposal to make root disk size variable

Day, Phil philip.day at hp.com
Mon Jun 10 09:53:48 UTC 2013


Hi Rob,

> Phil - I don't quite understand the operational use case: resizing the root-fs is an in-OS operation, and snapshotting takes 
> advantage of COW semantics if qcow2 or similar backing storage is used. How does making the size of the root exactly match 
> that of the image make snapshotting more efficient in that case? Or is it for deployments [there may be some ;)] that don't
> use qcow2 style backing storage?

For sure everyone should configure qcow2 so they get sparse images - this is more about the cloud provider being able to impose an upper limit on the root disk size (via the flavor) and/or the image owner being able to say what size they want the root disk to be (via image metadata), so that they can have a smaller root (and hence a faster download / snapshot time) if they want to.

If, as the image owner, you don't want to do that, then setting min_disk=0 will just use the root_gb size from the flavor - so I think your use case is still covered.
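Roughly, the two knobs I'm talking about map onto things you can already set today - a sketch with illustrative values, assuming the Grizzly-era nova and glance clients:

$ nova flavor-create m1.capped 100 2048 20 1      # flavor root_gb = 20G is the provider's upper limit
$ glance image-update --min-disk 5 <image-id>     # image owner asks for a 5G root
$ glance image-update --min-disk 0 <image-id>     # min_disk=0: just take root_gb from the flavor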

Cheers,
Phil

-----Original Message-----
From: Robert Collins [mailto:robertc at robertcollins.net] 
Sent: 05 June 2013 20:25
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Feedback wanted please on proposal to make root disk size variable

On 6 June 2013 06:50, Scott Moser <smoser at ubuntu.com> wrote:

> The time and IO it takes to create a filesystem on a "device" (file or 
> block device, doesn't really matter) is very real.  I think the 
> caching of that is not an insignificant performance improvement.
>
> Here's an example:
> $ truncate --size=20G my.img
> $ ls -s my.img
> 0 my.img
> $ time mkfs.ext3 -F my.img >/dev/null 2>&1
>
> real    0m15.279s
> user    0m0.020s
> sys     0m0.980s
> $ ls -s my.img
> 464476 my.img
>
> So it looks to me that it did ~400M of IO in order to put a filesystem 
> on a 20G image.  If you push that off to the guest, it's only going to 
> perform worse (as IO will be slower).

For ext4 this is:
$ truncate --size=20G my.img
$ ls -s my.img
0 my.img
$ time mkfs.ext4 -F my.img >/dev/null 2>&1

real    0m1.408s
user    0m0.060s
sys     0m0.096s
$ ls -s my.img
135480 my.img


Does anyone use ext3 these days? ;)

Seriously though, I think keeping existing expectations as simple and robust as possible is important. I know I was weirded out ~ a year back when I spun up an HPCS ultra large instance and got 10G on / and 1T on /mnt : on EC2, when you grab an ultra large, / is the thing that is large ;).

Phil - I don't quite understand the operational use case: resizing the root-fs is an in-OS operation, and snapshotting takes advantage of COW semantics if qcow2 or similar backing storage is used. How does making the size of the root exactly match that of the image make snapshotting more efficient in that case? Or is it for deployments [there may be some ;)] that don't use qcow2-style backing storage? Things like Ceph-mounted block devices will still be doing dirty block tracking, so they should be super efficient (and in fact be able to do COW snapshots even more efficiently).
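For reference, this is the sort of COW layering I mean - a minimal sketch with qemu-img, file names made up:

$ qemu-img create -f qcow2 -b base-image.qcow2 instance-root.qcow2 20G
$ qemu-img info instance-root.qcow2    # the reported "disk size" stays tiny until the guest dirties blocks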

One thing I will note is that as a user, I *do not want* to specify the root size in the images I build: that's what the flavor is for; it's how I define the runtime environment for my machine images. So, if we do make flavors have more dynamic roots, I think it would be very helpful to make sure that can be overridden by the user [or that the user has to opt into it]. (I realise that implies python-novaclient changes as well, not to mention documentation.)
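To be concrete about the kind of opt-in / override I mean, something along these lines - purely hypothetical syntax, not an existing novaclient option:

$ nova boot --flavor m1.large --image my-image --root-size 40 my-server
  # hypothetical --root-size flag: user explicitly overrides any image-supplied root size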

-Rob

--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Cloud Services

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


