[Openstack-operators] Configuring local instance storage
Abel Lopez
alopgeek at gmail.com
Thu May 8 19:50:58 UTC 2014
Second that question. Using KVM at least, I couldn't find any significant difference between QCOW2- and RAW-based images.
By "significant", I mean enough to justify tossing the benefits of qcow2.
On May 8, 2014, at 8:57 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
>
> Robert,
>
> The difference between RAW and QCOW2 is pretty significant... which hypervisor are you using?
>
> Have you seen scenarios where the two SSDs fail at the same time? Red Hat was recommending against mirroring SSDs: subjected to the same write pattern, drives from the same batch tend to reach their failure points at nearly the same time.
>
> Tim
>
> -----Original Message-----
> From: Robert van Leeuwen [mailto:Robert.vanLeeuwen at spilgames.com]
> Sent: 08 May 2014 15:03
> To: Arne Wiebalck; openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] Configuring local instance storage
>
>> In our cloud we use the non-shared local fs of the compute for instance storage.
>> As our cloud gets busier, this is increasingly becoming a serious bottleneck.
>>
>> When discussing the various options to set this up, we were wondering how
>> other clouds deal with compute disk contention in general and with the integration of SSDs in particular.
>>
>> So, any suggestions or experiences in this area you'd like to share would be very welcome!
>
> Hi Arne,
>
> We run all our compute nodes with SSDs for local storage.
>
> We optimized for 2 different flavors.
> Based on the flavor, an instance ends up on the right hypervisor:
> * normal instances: These are hosted on an SSD RAID 1 and use the QCOW2 disk format
> * fastio instances (e.g. for our database team): These are hosted on a bigger RAID 10 volume of SSDs and use the RAW disk format
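The flavor-based placement Robert describes is typically achieved in Nova with host aggregates plus the AggregateInstanceExtraSpecsFilter: a flavor carries extra_specs that must match the metadata of an aggregate the host belongs to. A minimal Python sketch of that matching logic, with hypothetical keys and host names (the real filter lives in nova.scheduler.filters and uses namespaced keys such as `aggregate_instance_extra_specs:disktype`):

```python
# Simplified sketch of aggregate/extra_specs matching: a host passes only
# if every extra_spec required by the flavor matches the metadata of the
# aggregate the host belongs to. All names here are hypothetical.

def host_passes(aggregate_metadata, flavor_extra_specs):
    """Return True if the host's aggregate metadata satisfies the flavor."""
    for key, wanted in flavor_extra_specs.items():
        if aggregate_metadata.get(key) != wanted:
            return False
    return True

# Two aggregates: SSD RAID 1 hosts and SSD RAID 10 "fastio" hosts.
hosts = {
    "compute-01": {"disktype": "ssd-raid1"},
    "compute-02": {"disktype": "ssd-raid10"},
}

fastio_flavor = {"disktype": "ssd-raid10"}

candidates = [h for h, meta in hosts.items()
              if host_passes(meta, fastio_flavor)]
print(candidates)  # only the RAID 10 host survives the filter
```

In practice the metadata and extra_specs would be set with the era's CLI (`nova aggregate-set-metadata`, `nova flavor-key ... set`), and the scheduler applies the filter for you.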
>
> We noticed a very big impact of QCOW2 vs RAW in our IOPS tests:
> about a factor of 10 with random 16k writes.
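A random 16k write test of the sort mentioned here is usually run with fio (roughly `--rw=randwrite --bs=16k --direct=1`). As a self-contained illustration of the workload, here is a crude Python approximation; note it goes through the page cache, so it flatters the numbers and is not a fair disk benchmark:

```python
import os
import random
import time

def random_write_iops(path, file_size=8 << 20, block=16 * 1024, seconds=0.5):
    """Crude ops/s estimate for random 16k writes (no O_DIRECT, so the
    page cache absorbs most of the work -- use fio with --direct=1 for a
    real measurement)."""
    with open(path, "wb") as f:
        f.truncate(file_size)           # sparse file to write into
    fd = os.open(path, os.O_WRONLY)
    buf = os.urandom(block)
    ops = 0
    deadline = time.monotonic() + seconds
    try:
        while time.monotonic() < deadline:
            offset = random.randrange(file_size // block) * block
            os.pwrite(fd, buf, offset)  # one random 16k write
            ops += 1
    finally:
        os.close(fd)
    return ops / seconds

iops = random_write_iops("/tmp/iops_sketch.bin")
os.remove("/tmp/iops_sketch.bin")
print(f"~{iops:.0f} random 16k writes/s (page-cached)")
```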
>
> Since we have mostly internal customers, we were also able to optimize the images.
> We made sure they do not do any unnecessary I/O,
> e.g. no local logging; everything goes to our central log servers.
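System-wide forwarding via rsyslog or syslog-ng is the usual way to achieve this, but for a Python application one way to keep log writes off the local disk is to point the logging module at the central syslog server directly. A sketch, with 127.0.0.1 standing in for the real log server address:

```python
import logging
import logging.handlers

# Sketch: ship application logs straight to a central syslog server over
# UDP instead of writing them locally. 127.0.0.1 is a placeholder for the
# central log server; forwarding system-wide with rsyslog or syslog-ng is
# the more common setup.
def make_remote_logger(server="127.0.0.1", port=514):
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(server, port))
    handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger

log = make_remote_logger()
log.info("this message never touches the local disk")
```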
>
> Cheers,
> Robert van Leeuwen
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>