[Openstack-operators] Tuning I/O with SSDs

Kostiantyn.Volenbovskyi at swisscom.com Kostiantyn.Volenbovskyi at swisscom.com
Fri Aug 5 16:33:24 UTC 2016


Hi, 

-for guest: noop does indeed look like the best choice unless there are special reasons.
https://wiki.openstack.org/wiki/Documentation/HypervisorTuningGuide#Instance_and_Image_Configuration confirms that.
Even though https://access.redhat.com/solutions/5427 notes: "*On the other hand, depending on the workload, it can also be beneficial to use a scheduler like deadline in the guest." So noop in the guest is not exactly a 'no-brainer'.
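
For reference, a minimal sketch (Python, run as root inside the guest) of checking and switching the scheduler via sysfs. The device name 'vda' is an assumption - adjust to your disk; on newer blk-mq kernels the equivalent of noop is 'none'.

# Sketch: check and set the I/O scheduler inside the guest (run as root).
# 'vda' is an assumption -- adjust to your disk naming.
path = "/sys/block/vda/queue/scheduler"

with open(path) as f:
    # kernel shows all available schedulers, current one in brackets,
    # e.g. "[noop] deadline cfq"
    print("available/current:", f.read().strip())

with open(path, "w") as f:
    f.write("noop")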

-for host: 
From the same https://access.redhat.com/solutions/5427:
" *When using RHEL as a host for virtualized guests, the default cfq scheduler is usually ideal. This scheduler performs well on nearly all workloads. 
*If, however, minimizing I/O latency is more important than maximizing I/O throughput on the guest workloads, it may be beneficial to use the deadline scheduler. The deadline is also the scheduler used by the tuned profile virtual-host."
But the post http://lists.openstack.org/pipermail/openstack/2015-July/013267.html from Dan Yocum [Red Hat] recommends 'noop' for SSDs...
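
If you do go with deadline on the host, here is a rough sketch (Python, run as root) of applying it only to non-rotational devices, using the kernel's rotational flag to tell SSDs from spinning disks. The sd* device pattern and the scheduler choice are assumptions - extend the pattern to your naming and benchmark for your workload.

# Sketch: on the host, switch only SSDs (rotational == 0) to deadline.
import glob
import os

for sysdir in glob.glob("/sys/block/sd*"):
    with open(os.path.join(sysdir, "queue", "rotational")) as f:
        is_ssd = f.read().strip() == "0"
    if is_ssd:
        with open(os.path.join(sysdir, "queue", "scheduler"), "w") as f:
            f.write("deadline")
        print(os.path.basename(sysdir), "-> deadline")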

All in all, there is most likely no perfect recipe, at least for the host I/O scheduler - it comes down to a fairly complex combination of factors.
To me it sounds like the order in which host OS schedulers should be evaluated in the generic SSD case is:
deadline
cfq
noop
As a very rough estimate: if people made the optimal decision based on their workload (and did not just assume that the default in their Linux distribution is optimal), the split might be something like 60/30/10 respectively.
I do have experience where cfq was preferred over deadline after benchmarking [but noop was 'never' chosen].
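
For anyone who wants to repeat such a benchmark, a rough sketch (Python, run as root, fio installed) of cycling through the three schedulers and running the same fio random-read job each time. The device name, fio options and JSON field names are assumptions - adjust them, and point it at a scratch SSD (randread only reads, but double-check the device anyway).

# Sketch: run the same fio random-read job under each scheduler, compare IOPS.
import json
import subprocess

dev = "sdb"  # assumed scratch SSD
sched_path = "/sys/block/%s/queue/scheduler" % dev

for sched in ("deadline", "cfq", "noop"):
    with open(sched_path, "w") as f:
        f.write(sched)
    result = subprocess.run(
        ["fio", "--name=randread", "--filename=/dev/%s" % dev,
         "--rw=randread", "--bs=4k", "--iodepth=32", "--direct=1",
         "--runtime=30", "--time_based", "--output-format=json"],
        stdout=subprocess.PIPE, check=True)
    iops = json.loads(result.stdout.decode())["jobs"][0]["read"]["iops"]
    print("%s: %.0f IOPS" % (sched, iops))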

And somehow I imagine that exactly this topic was already evaluated by Tim & colleagues ;)


BR, 
Konstantin



-----Original Message-----
From: gustavo panizzo (gfa) [mailto:gfa at zumbi.com.ar] 
Sent: Friday, August 05, 2016 5:43 PM
To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC <Kostiantyn.Volenbovskyi at swisscom.com>
Cc: Tim.Bell at cern.ch; openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Tuning I/O with SSDs

On Fri, Aug 05, 2016 at 12:09:50PM +0000, Kostiantyn.Volenbovskyi at swisscom.com wrote:
 
> 3)     The question of cfq vs. deadline vs. noop scheduler (apparently both in guest and host), where the decision should be based on workloads/recommendations of the OS vendor (which again might be release-dependent).

On the VM the scheduler should be noop, so the VM does not do any kind of ordering and passes that work to the hypervisor (which knows better).

On the hypervisor the scheduler should be deadline, which is the recommended scheduler for SSDs.

There are RH articles on the net; also I think tuned does that by default. I set it by hand on my Debian images, as the Debian default is CFQ.



--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

keybase: http://keybase.io/gfa


