[Openstack-operators] KVM memory overcommit with fast swap

Blair Bethwaite blair.bethwaite at gmail.com
Mon Jun 29 13:36:52 UTC 2015


Hi all,

Question up-front:

Do the performance characteristics of modern PCIe-attached SSDs
invalidate or challenge the old "don't overcommit memory with KVM"
wisdom (recently discussed on this list and at meetups and summits)?
Has anyone out there tried and tested this?

Long-form:

I'm currently looking at possible options for increasing virtual
capacity in a public/community KVM-based cloud. We started very
conservatively at a 1:1 CPU allocation ratio, so perhaps predictably
we have boatloads of CPU headroom to work with. We also see maybe 50%
of memory actually in use on a host that is, from Nova's perspective,
more-or-less full.

The most obvious thing to do here is increase available memory. There
are at least three ways to achieve that:
1/ physically add RAM
2/ reduce RAM per vcore (i.e., introduce lower RAM flavors)
3/ increase virtual memory capacity (i.e., add swap) and set
ram_allocation_ratio > 1 (example config below)
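
For concreteness, the Nova side of #3 is just a one-line change on
the computes; a minimal sketch (the ratio value is purely
illustrative):

    # /etc/nova/nova.conf on each compute host
    [DEFAULT]
    # the scheduler treats each host as having
    # total_host_ram_mb * ram_allocation_ratio of schedulable RAM,
    # so 1.5 on a 256GB host allows up to 384GB of guest memory
    ram_allocation_ratio = 1.5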

We're already doing a bit of #2, but at the end of the day, taking
away flavors and trying to change user behaviour is actually harder
than just upgrading hardware. #1 is ideal, but I do wonder whether
we'd be better off spending that same money on some PCIe SSD and
using it for #3 (at least for our 'standard' flavor classes), the
advantage being that SSD is cheaper per GB (and it might also help
alleviate IOPS starvation for local-storage-backed hosts)...
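
To make the host side of #3 concrete, here's a minimal sketch of
what I'm imagining, assuming the SSD shows up as /dev/nvme0n1 (device
name and values are obviously site-specific):

    # dedicate the PCIe SSD (assumed device name) to swap
    mkswap /dev/nvme0n1
    swapon --priority 10 /dev/nvme0n1
    echo '/dev/nvme0n1 none swap sw,pri=10 0 0' >> /etc/fstab
    # let the kernel push idle anonymous pages to the fast device
    sysctl -w vm.swappiness=60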

The question is whether the performance characteristics of modern
PCIe-attached SSDs invalidate the old "don't overcommit memory with
KVM" wisdom (recently discussed on this list:
http://www.gossamer-threads.com/lists/openstack/operators/46104 and
also apparently at the Kilo mid-cycle:
https://etherpad.openstack.org/p/PHL-ops-capacity-mgmt where there was
an action to update the default ram_allocation_ratio from 1.5 to 1.0,
though that doesn't seem to have happened). Has anyone out there tried
this?
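
To put rough numbers on what an overcommit ratio implies (host size
purely illustrative):

    schedulable_guest_ram = host_ram * ram_allocation_ratio
                          = 256GB * 1.5
                          = 384GB
    worst_case_swap_usage = 384GB - 256GB = 128GB

i.e., the swap device has to be sized and fast enough to cover the
overcommitted fraction if guests ever touch all of their memory at
once.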

I'm also curious whether anyone has any recent info regarding the
state of automated memory ballooning and/or memory hotplug. Ideally a
RAM-overcommitted host would try to inflate guest balloons before
swapping.
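
For reference, manual ballooning with libvirt looks roughly like the
below (domain name is hypothetical, and the guest needs the
virtio-balloon driver); what I'm after is something that drives this
automatically under host memory pressure:

    # query the guest's balloon and memory stats
    virsh dommemstat instance-0000002a
    # shrink the guest to 2GB (size in KiB); the balloon driver
    # reclaims the difference for the host
    virsh setmem instance-0000002a 2097152 --live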

-- 
Cheers,
~Blairo


