[openstack-dev] Questions on the virtual disk's cache type
namei.unix at gmail.com
Wed Jan 23 11:04:46 UTC 2013
On 01/23/2013 06:47 PM, Liu Yuan wrote:
> On 01/23/2013 06:14 PM, Daniel P. Berrange wrote:
>> On Wed, Jan 23, 2013 at 06:09:01PM +0800, Liu Yuan wrote:
>>> On 01/23/2013 05:30 PM, Daniel P. Berrange wrote:
>>>> FYI There is a patch proposed for customization
>>> It seems that this patch was dropped or declined?
>>>> I should note that it is wrong to assume that enabling caching will
>>>> improve performance in general. Allowing caching in the host requires
>>>> a non-negligible amount of host RAM to be of any benefit, and RAM is
>>>> usually the most constrained resource in any virtualization environment.
>>>> So while the cache may help performance when only one or two VMs are
>>>> running on the host, it may well hurt performance once the host is
>>>> running enough VMs to max out RAM. Allowing caching will therefore
>>>> give you quite variable performance, while cache=none gives you
>>>> consistent performance regardless of host RAM utilization (underlying
>>>> contention on the storage device may of course still impact things).
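To make the tradeoff concrete, here is a sketch of the two command lines being compared (the image path is hypothetical, and exact option spellings vary by QEMU version):

```shell
# cache=none: the image is opened with O_DIRECT, bypassing the host
# page cache, so performance stays consistent as host RAM fills up.
qemu-system-x86_64 -drive file=/var/lib/images/guest.img,if=virtio,cache=none

# cache=writeback: writes land in the host page cache first, which can
# be fast while RAM is spare but degrades once many VMs contend for it.
qemu-system-x86_64 -drive file=/var/lib/images/guest.img,if=virtio,cache=writeback
```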
>>> Yeah, allowing use of the host page cache might not be a good idea when
>>> running multiple VMs, but the cache type in QEMU has a different meaning
>>> for network block devices. For example, we use 'cache type' to control
>>> the client-side cache of a Sheepdog cluster, which implements an object
>>> cache on the local disk for a performance boost and to reduce network
>>> traffic. This doesn't consume host memory at all; it only occupies disk
>>> space on the node running the sheep daemon.
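For context, the Sheepdog client-side cache described above is toggled through the same drive option; a sketch (the VDI name is hypothetical, and the exact sheepdog URI syntax varies by QEMU version):

```shell
# For a sheepdog-backed drive, cache=writeback here enables the sheep
# daemon's on-disk object cache rather than the host page cache.
qemu-system-x86_64 -drive file=sheepdog:myvdi,if=virtio,cache=writeback

# cache=none disables that client-side object cache.
qemu-system-x86_64 -drive file=sheepdog:myvdi,if=virtio,cache=none
```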
>> That is a serious abuse of the QEMU cache type variable. You now have one
>> setting with two completely different meanings for the same value. If you
>> want to control whether the sheepdog driver uses a local disk for object
>> cache you should have a completely separate QEMU command line setting
>> which can be controlled independently of the cache= setting.
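A hypothetical spelling of such a separate setting might look like this (the `object-cache` option name is invented purely for illustration; no such QEMU option is implied to exist):

```shell
# Today: one flag carries two meanings for sheepdog-backed drives.
qemu-system-x86_64 -drive file=sheepdog:myvdi,cache=writeback

# Suggested direction: keep cache= for host page-cache semantics and add
# a separate, hypothetical knob for the sheep daemon's object cache.
qemu-system-x86_64 -drive file=sheepdog:myvdi,cache=none,object-cache=on
```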
> Hello Stefan and Kevin,
> Should the sheepdog driver use a new command-line setting to control its
> internal cache?
> For a network block device, which simply forwards the IO requests from
> VMs over the network and never touches the host's memory, I think
> it is okay to multiplex 'cache=type', but it looks like it causes
> confusion in the libvirt code.
Since there is an ongoing patch set that allows customization of
'cache=type', there is no problem for either libvirt or QEMU in
multiplexing this setting, if that patch is merged, since the
user should know the different meanings of the cache type for the
underlying block device, be it backed by a file (which uses the host
page cache to cache blocks) or a network block device (which uses some
other storage to cache blocks).