[Openstack-operators] Configuring local instance storage

Tim Bell Tim.Bell at cern.ch
Thu May 8 15:57:30 UTC 2014


The difference between RAW and QCOW2 is pretty significant... what hypervisor are you using?

Have you seen scenarios where the two SSDs fail at the same time? Red Hat was recommending against mirroring SSDs since, with identical write patterns, drives from the same batch would tend to reach their failure point at nearly the same time.


-----Original Message-----
From: Robert van Leeuwen [mailto:Robert.vanLeeuwen at spilgames.com] 
Sent: 08 May 2014 15:03
To: Arne Wiebalck; openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Configuring local instance storage

> In our cloud we use the non-shared local fs of the compute nodes for instance storage.
> As our cloud gets busier, this is more and more becoming a serious bottleneck.
> When discussing the various options to set this up, we were wondering how
> other clouds deal with the problem of compute disk contention in general and with the integration of SSDs in particular.
> So, any suggestions or experiences in this area you'd like to share would be very welcome!

Hi Arne,

We run all our compute nodes with SSDs for local storage.

We optimized for two different flavors.
Based on the flavor, an instance ends up on the right hypervisor:
* normal instances: these are hosted on an SSD RAID 1 and use the QCOW2 disk format
* fastio instances (e.g. for our database team): these are hosted on a bigger RAID 10 volume of SSDs and use the RAW disk format
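For anyone wondering how to pin a flavor to a set of hypervisors like that: one common way is nova host aggregates plus flavor extra specs. A rough sketch (aggregate/flavor/host names are made up for illustration, and it assumes AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

```shell
# Group the RAID 10 SSD hypervisors into an aggregate:
nova aggregate-create fastio-hosts
nova aggregate-set-metadata fastio-hosts fastio=true
nova aggregate-add-host fastio-hosts compute-ssd-01

# Tag the flavor with matching metadata so the scheduler only
# places fastio instances on hosts in that aggregate:
nova flavor-key fastio.large set fastio=true
```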

We noticed a very big impact of QCOW2 vs RAW in our IOPS tests:
about a factor of 10 with random 16k writes.
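For reference, a random 16k write test of the kind described above can be reproduced with fio; the exact parameters here are illustrative, not the ones we used:

```shell
# Run inside a guest on each flavor and compare the reported IOPS.
fio --name=randwrite --rw=randwrite --bs=16k --size=1G \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --filename=/var/tmp/fio.test
```

Make sure --direct=1 is set, otherwise the guest page cache hides most of the difference between the image formats.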

Since we have mostly internal customers, we were also able to optimize the images.
We made sure they do not do any unnecessary I/O.
E.g. no local logging; everything goes to our central log servers.
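One way to get that behaviour, assuming rsyslog in the images (the hostname below is a placeholder):

```shell
# Forward everything to the central syslog host over TCP ("@@"),
# then remove the default rules that write under /var/log/ so the
# instance does no local log I/O:
cat > /etc/rsyslog.d/90-central.conf <<'EOF'
*.*  @@logserver.example.com:514
EOF
```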

Robert van Leeuwen
