[openstack-dev] [nova] Core pinning
Daniel P. Berrange
berrange at redhat.com
Wed Nov 27 14:24:51 UTC 2013
On Wed, Nov 27, 2013 at 03:50:47PM +0200, Tuomas Paappanen wrote:
> >On Tue, 2013-11-19 at 12:52 +0000, Daniel P. Berrange wrote:
> >>I think there are several use cases mixed up in your descriptions
> >>here which should likely be considered independently
> >>
> >> - pCPU/vCPU pinning
> >>
> >> I don't really think this is a good idea as a general purpose
> >> feature in its own right. It tends to lead to fairly inefficient
> >> use of CPU resources when you consider that a large % of guests
> >> will be mostly idle most of the time. It has a fairly high
> >> administrative burden to maintain explicit pinning too. This
> >> feels like a data center virt use case rather than a cloud
> >> use case really.
> >>
> >> - Dedicated CPU reservation
> >>
> >> The ability of an end user to request that their VM (or their
> >> group of VMs) gets assigned a dedicated host CPU set to run on.
> >> This is obviously something that would have to be controlled
> >> at a flavour level, and in a commercial deployment would carry
> >> a hefty pricing premium.
> >>
> >> I don't think you want to expose explicit pCPU/vCPU placement
> >> for this though. Just request the high level concept and allow
> >> the virt host to decide the actual placement.
> I think pcpu/vcpu pinning could be considered an extension of the
> dedicated cpu reservation feature. And I agree that exclusively
> dedicating pcpus to VMs is inefficient from a cloud point of view,
> but in some cases an end user may want to be sure (and be ready to
> pay) that their VMs have resources available, e.g. for sudden load
> peaks.
>
> So, here is my proposal for how dedicated cpu reservation would
> function at a high level:
>
> When an end user wants a VM with nn vcpus running on a dedicated
> host cpu set, the admin could enable it by setting a new
> "dedicate_pcpu" parameter in a flavor (e.g. as an optional flavor
> parameter). By default, the number of pcpus and vcpus could be the
> same. And as an option, explicit vcpu/pcpu pinning could be done by
> defining vcpu/pcpu relations in the flavor's extra specs
> (vcpupin:0 0...).
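>
> For illustration, such a flavor's extra specs might look something
> like this (a sketch only; these keys are hypothetical and do not
> exist in Nova today):
>
>     # hypothetical extra specs on a "dedicated" flavor
>     flavor_extra_specs = {
>         "dedicate_pcpu": "true",  # reserve dedicated host pcpus
>         "vcpupin:0": "8",         # optional: pin vcpu 0 to pcpu 8
>         "vcpupin:1": "9",         # optional: pin vcpu 1 to pcpu 9
>     }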
>
> In the virt driver there are two alternatives for how to do the
> pcpu sharing: 1. all dedicated pcpus are shared with all vcpus (the
> default case), or 2. each vcpu has a dedicated pcpu (vcpu 0 is
> pinned to the first pcpu in the cpu set, vcpu 1 to the second pcpu,
> and so on). The vcpu/pcpu pinning option could be used to extend
> the latter case.
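>
> A minimal sketch of the two sharing modes (a hypothetical helper,
> not real virt driver code):
>
>     def assign_cpusets(dedicated_pcpus, num_vcpus, per_vcpu_pinning):
>         """Return the host cpuset each vcpu is allowed to run on."""
>         if per_vcpu_pinning:
>             # case 2: vcpu N gets the Nth dedicated pcpu to itself
>             return [{pcpu} for pcpu in dedicated_pcpus[:num_vcpus]]
>         # case 1 (default): every vcpu may float over all dedicated pcpus
>         return [set(dedicated_pcpus) for _ in range(num_vcpus)]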
>
> In any case, before a VM with or without dedicated pcpus is
> launched, the virt driver must ensure that the dedicated pcpus are
> excluded from existing VMs and from new VMs, and that there are
> enough free pcpus for the placement. And I think the minimum number
> of pcpus left for VMs without dedicated pcpus must be configurable
> somewhere.
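>
> Something along these lines for the accounting (again only a
> sketch, assuming a per-host free pool tracked as a set):
>
>     def reserve_pcpus(free_pcpus, needed):
>         """Claim 'needed' pcpus from the host's free pool, or fail."""
>         if len(free_pcpus) < needed:
>             raise RuntimeError("not enough free pcpus for placement")
>         claimed = sorted(free_pcpus)[:needed]
>         free_pcpus.difference_update(claimed)
>         return claimed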
>
> Comments?
I still don't believe that vcpu:pcpu pinning is something we want
to do, even with dedicated CPUs. There are always threads in the
host doing work on behalf of the VM that are not related to vCPUs.
For example the main QEMU emulator thread, the QEMU I/O threads, and
kernel threads. Other hypervisors have similar behaviour. It is
better to let the kernel / hypervisor scheduler decide how to
balance the competing workloads than forcing a fixed & suboptimally
performing vcpu:pcpu mapping. The only time I've seen fixed pinning
deliver a consistent benefit is when you have NUMA involved and want
to prevent a VM spanning NUMA nodes. Even then you'd be best off just
pinning to the set of CPUs in a node and then letting the vCPUs float
amongst the pCPUs in that node.
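With the libvirt python bindings that per-node pinning looks roughly
like this (the guest name and the node's pCPU range are assumptions
made up for the example):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("myguest")  # hypothetical guest
    # assume NUMA node 0 owns pCPUs 0-7 on a 16 pCPU host
    node0_cpumap = tuple(cpu < 8 for cpu in range(16))
    # pin every vCPU to node 0's pCPU set; they still float within it
    for vcpu in range(dom.maxVcpus()):
        dom.pinVcpu(vcpu, node0_cpumap)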
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|