[openstack-dev] Feedback wanted please on proposal to make root disk size variable

Day, Phil philip.day at hp.com
Wed Jun 5 17:05:24 UTC 2013


I guess the other way to look at this is that users would have a choice: stick to a few well-known root disk sizes in their images and possibly get a faster start-up time, or specify a fully custom root disk size and accept less chance of a fast start-up.

From the host's perspective it has to clean up base files that are no longer in use anyway, so I don't see a big penalty in having a lot of different base files, just a missed optimization in the user experience - or am I missing something?
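For what it's worth, the cleanup being described amounts to periodically pruning cached base files that no instance references any more. A minimal sketch of that idea, assuming a hypothetical cache directory and an illustrative base_files_in_use() helper rather than Nova's actual image cache manager:

    import os
    import time

    CACHE_DIR = "/var/lib/nova/instances/_base"  # assumed cache location
    MAX_UNUSED_AGE = 24 * 3600                   # prune after a day unused

    def base_files_in_use():
        # Placeholder: return the set of base file paths still referenced
        # by some instance's copy-on-write overlay.
        return set()

    def prune_base_files():
        in_use = base_files_in_use()
        now = time.time()
        for name in os.listdir(CACHE_DIR):
            path = os.path.join(CACHE_DIR, name)
            # A base file nothing points at, untouched for a while, is
            # safe to remove regardless of how many distinct sizes exist.
            if path not in in_use and now - os.path.getatime(path) > MAX_UNUSED_AGE:
                os.unlink(path)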

Phil

-----Original Message-----
From: Day, Phil 
Sent: 05 June 2013 17:43
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Feedback wanted please on proposal to make root disk size variable

Hi Scott,

I take your point about the number of backing files - which could in the worst case end up as one for every instance on the host. 

I use the term worst case loosely though, as there are times where having a filesystem in the base file can be a pain. For example, if you have two VMs on the same system that are expecting the UUIDs of their filesystems to always be unique, they'll be in for a shock. And on migration to a host that already has a backing file of that size, you either have to squash the COW layer of the VM you're migrating, or re-base it and accept that the filesystem UUID is going to change.

I have been wondering if the common backing files are really that much of a gain?

Phil 
 

-----Original Message-----
From: Scott Moser [mailto:smoser at ubuntu.com]
Sent: 05 June 2013 14:53
To: OpenStack Development Mailing List
Cc: Leggett, Thomas; Toft, Peter (HP Cloud Services)
Subject: Re: [openstack-dev] Feedback wanted please on proposal to make root disk size variable

On Mon, 3 Jun 2013, Day, Phil wrote:

> Hi Folks,
>
> I'd like to get your feedback on a change we've been looking at which would allow the root disk size to vary within the constraints specified by the image creator (via image metadata) and the cloud provider (via flavors).
>
> The problem we're trying to solve is to cope with a range of images that have different root disk requirements, without having to either create specific flavors for each image type or have all instances use a large root size when they don't need to. Imagine, for example, trying to have a common set of flavors for Linux and Windows without forcing all Linux instances to have a 30GB root disk.
>
> We think we have an approach which will do this without breaking
> anyone's existing use cases.  The proposal is captured here as a
> blueprint:
> https://blueprints.launchpad.net/nova/+spec/variable-size-root-disk
> which in turn points to a full description in the Wiki:
> https://wiki.openstack.org/wiki/VariableSizeRootDisk
>
> We've also posted draft code that shows how this could be implemented
> here:  https://review.openstack.org/#/c/31521/
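To make the constraint model concrete, here is a minimal sketch of how a requested root size could be validated against the two bounds described above. The names are illustrative, not the actual fields used by the draft change:

    def pick_root_gb(requested_root_gb, image_min_disk_gb, flavor_root_gb):
        # Clamp a user-requested root disk size between the image
        # creator's minimum (image metadata) and the cloud provider's
        # maximum (the flavor's root_gb).
        size = requested_root_gb or flavor_root_gb  # default keeps today's behaviour
        if size < image_min_disk_gb:
            raise ValueError("root disk smaller than the image's min_disk")
        if size > flavor_root_gb:
            raise ValueError("root disk larger than the flavor allows")
        return size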

The only thing I see that I find odd is:
 | The size of the ephemeral disk for the instance
 | (instance['ephemeral_gb']) is created to be
 | (flavor.root_gb + flavor.ephemeral_gb) - instance.root_gb
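A worked example with illustrative numbers makes the behaviour clear: the flavor's total disk stays fixed, and the ephemeral disk absorbs whatever the root disk doesn't use.

    # flavor.root_gb = 30, flavor.ephemeral_gb = 10
    ephemeral_gb = (30 + 10) - 10  # instance booted with a 10GB root -> 30
    ephemeral_gb = (30 + 10) - 30  # instance booted with a 30GB root -> 10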

I understand the motivation, but I think it's limiting.  I've always [dangerously] assumed that ephemeral store came basically from Amazon's realization that the systems they were buying had obscene amounts of local disk that were generally going unused.  They then cut that unused space up by the number of instances per host and gave each instance a share of it.  In most cases on EC2, you get more than one ephemeral disk.

Given the above, ephemeral storage (and even "instance store" root disks) is essentially "free" to both the customer and the provider.  On Amazon, you pay for increased EBS (cinder) storage, so you're motivated to keep that amount low.  So it doesn't make sense to me to relate root size to ephemeral size.

The other thing I don't like about it is that, currently (at least in the libvirt driver), ephemeral disks are cached.  This is because ephemeral disks are attached with a filesystem already created on them: that allows the user to use the disk immediately without running mkfs, and caching the image saves the host from having to create the filesystem on every instance launch.

If you use:
 ephemeral_gb = (flavor.root_gb + flavor.ephemeral_gb) - instance.root_gb
and instance.root_gb all of a sudden varies greatly among instances, then the host will be creating ephemeral images (and mkfs'ing them) possibly for every instance.
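To illustrate the caching concern: if prebuilt ephemeral images are cached by size (and filesystem type), a fixed per-flavor ephemeral_gb means a handful of cache entries, while a per-instance value makes almost every launch a miss. A sketch with hypothetical names, not Nova's actual code:

    _cache = {}  # (size_gb, fs_type) -> path of a prebuilt, mkfs'd image

    def create_and_mkfs(size_gb, fs_type):
        # Placeholder for the expensive step: allocate an image of
        # size_gb and run mkfs on it, returning its path.
        return "/tmp/ephemeral_%d_%s.img" % (size_gb, fs_type)

    def get_ephemeral_image(size_gb, fs_type="ext4"):
        key = (size_gb, fs_type)
        if key not in _cache:
            # With a fixed per-flavor ephemeral_gb this runs once per
            # flavor; with a per-instance value, potentially per instance.
            _cache[key] = create_and_mkfs(size_gb, fs_type)
        return _cache[key]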

