[Openstack-operators] Experience with Cinder volumes as root disks?

George Mihaiescu lmihaiescu at gmail.com
Wed Aug 2 12:42:23 UTC 2017


I totally agree with Jay: this is the best, cheapest, and most scalable way to build a cloud environment with OpenStack.

We use local storage as the primary root disk source, which lets us make good use of the six drive slots available in each compute node; coupled with RAID 10, this gives good I/O performance.
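
For reference, a minimal nova-compute sketch of the local-storage setup (a sketch, not our exact config; the rbd lines are only shown for contrast):

    [libvirt]
    # Keep instance root/ephemeral disks on the compute node's local
    # RAID 10 array (qcow2 is the default; raw trades space for speed).
    images_type = qcow2

    # For contrast, putting root disks on Ceph instead would look like:
    # images_type = rbd
    # images_rbd_pool = vms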

We also have a multi-petabyte Ceph cluster that we use to store large genomics files in object format, as well as a backend for Cinder volumes, but the primary use case for the Ceph cluster is not booting instances.
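
The Cinder side is the standard RBD driver; a minimal sketch, assuming a Ceph pool named "volumes" and a "cinder" Ceph user (the names are placeholders):

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder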

In this way, we have small failure domains: if a VM does a lot of I/O, it only impacts a few neighbours. Write latency is low, and we don't spend money (and drive slots) on SSD journals, which improve write latency only until the Ceph journal needs to flush.

Provisioning speed is not a concern: with a small image library, the most popular images are already cached on the compute nodes, and the time an instance takes to boot is a small fraction of its total runtime (days or weeks).
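
The caching here is nova's image cache; a rough sketch of the knobs involved (option names and sections have moved between releases, so treat this as approximate):

    [DEFAULT]
    # How often the image cache manager runs, in seconds.
    image_cache_manager_interval = 2400
    # Garbage-collect cached base images that no instance uses any more.
    remove_unused_base_images = True
    remove_unused_original_minimum_age_seconds = 86400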

The drawback is that maintenance requiring reboots has to be scheduled in advance. But I would argue that booting from shared storage and then having to orchestrate the live migration of 1000 instances off 100 compute nodes without impacting the workloads running there (some migrations can fail because of intense CPU or memory activity) is not very feasible either...
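
To give a feel for that orchestration, a rough openstacksdk sketch of draining a single node (the cloud name, host name, and one-at-a-time policy are all assumptions, and error handling is minimal):

    import openstack

    # Connect using an entry from clouds.yaml (the name is an assumption).
    conn = openstack.connect(cloud='mycloud')

    host = 'compute-042'  # hypothetical node being drained

    # Live-migrate every instance off the node, one at a time, letting the
    # scheduler pick destinations. Repeat ~100 times for 100 nodes, and any
    # instance under heavy CPU or memory load can keep failing to converge.
    for server in conn.compute.servers(all_projects=True, host=host):
        try:
            conn.compute.live_migrate_server(server)
            conn.compute.wait_for_server(server, status='ACTIVE', wait=600)
        except Exception as exc:
            print(f"migration of {server.name} failed: {exc}")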

George 



> On Aug 1, 2017, at 11:59, Jay Pipes <jaypipes at gmail.com> wrote:
> 
>> On 08/01/2017 11:14 AM, John Petrini wrote:
>> Just my two cents here, but we started out using mostly ephemeral storage in our builds and, looking back, I wish we hadn't. Note we're using Ceph as a backend, so my response is tailored towards Ceph's behavior.
>> The major pain point is snapshots. When you snapshot a Nova instance, an RBD snapshot occurs; it is very quick and uses very little additional storage. However, the snapshot is then copied into the images pool and, in the process, converted from a snapshot to a full-size image. This takes a long time because you have to copy a lot of data, and it takes up a lot of space. It also causes a great deal of IO on the storage cluster, and you end up with a bunch of "snapshot images" creating clutter. Volume snapshots, on the other hand, are near instantaneous and have none of these drawbacks.
>> On the plus side for ephemeral storage, resizing the root disk works better. As long as your image is configured properly, it's just a matter of initiating a resize and letting the instance reboot to grow the root disk. When using a volume as your root disk, you instead have to shut down the instance, grow the volume, and boot again.
>> I hope this helps! If anyone on the list knows something I don't regarding these issues, please chime in. I'd love to know if there's a better way.
> 
> I'd just like to point out that the above is exactly the right way to think about things.
> 
> Don't boot from volume (i.e. don't use a volume as your root disk).
> 
> Instead, separate the operating system from your application data. Put the operating system on a small disk image (small == fast boot times), use a config drive for injectable configuration and create Cinder volumes for your application data.
> 
> Detach and attach the application data Cinder volume as needed to your server instance. Make your life easier by not coupling application data and the operating system together.
> 
> Best,
> -jay
> 
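To make John's snapshot point concrete, a rough openstacksdk sketch of the two paths (the cloud entry and resource names are assumptions, not anything from this thread):

    import openstack

    conn = openstack.connect(cloud='mycloud')

    server = conn.compute.find_server('app-01')          # hypothetical instance
    volume = conn.block_storage.find_volume('app-data')  # hypothetical volume

    # Path 1: snapshot an ephemeral-backed instance. With a Ceph backend this
    # is the RBD snapshot *plus* a full copy into the Glance images pool:
    # slow, IO-heavy, and stored as a full-size image.
    image = conn.compute.create_server_image(server, name='app-01-snap')

    # Path 2: snapshot a Cinder volume. On Ceph this stays an RBD snapshot,
    # near instantaneous and using almost no additional space.
    snap = conn.block_storage.create_snapshot(volume_id=volume.id,
                                              name='app-data-snap')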

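And a sketch of the pattern Jay describes, booting from a small image and keeping application data on a Cinder volume that can be detached and re-attached (flavor, image, and network names are placeholders):

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Small OS image => fast boot; config drive for injectable configuration.
    server = conn.create_server(
        name='app-01',
        image='ubuntu-minimal',  # placeholder image
        flavor='m1.small',       # placeholder flavor
        network='private',       # placeholder network
        config_drive=True,
        wait=True,
    )

    # Application data lives on its own volume...
    volume = conn.create_volume(size=100, name='app-data', wait=True)

    # ...attached (and later detached) independently of the operating system.
    conn.attach_volume(server, volume, wait=True)
    # conn.detach_volume(server, volume, wait=True)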

