[openstack-dev] realtime kvm cpu affinities

Henning Schild henning.schild at siemens.com
Wed Jun 21 08:42:28 UTC 2017


Am Tue, 20 Jun 2017 10:41:44 -0600
schrieb Chris Friesen <chris.friesen at windriver.com>:

> On 06/20/2017 01:48 AM, Henning Schild wrote:
> > Hi,
> >
> > We are using OpenStack for managing realtime guests. We modified
> > it and contributed to discussions on how to model the realtime
> > feature. More recent versions of OpenStack have support for
> > realtime, and there are a few proposals on how to improve that
> > further.
> >
> > But there is still no full answer on how to distribute threads
> > across host-cores. The vcpus are easy but for the emulation and
> > io-threads there are multiple options. I would like to collect the
> > constraints from a qemu/kvm perspective first, and then possibly
> > influence the OpenStack development.
> >
> > I will put the summary/questions first, the text below provides more
> > context to where the questions come from.
> > - How do you distribute your threads when trying to reach the really
> >    low cyclictest results in the guests? In [3] Rik talked about
> >    problems like lock holder preemption, starvation etc., but not
> >    where/how to schedule the emulator and io threads.
> > - Is it ok to put a vcpu and an emulator thread on the same core as
> >    long as the guest knows about it? This applies to any oddly
> >    behaving guest, not just Linux.
> > - Is it ok to make the emulators potentially slow by running them on
> >    busy best-effort cores, or will they quickly end up on the critical
> >    path once you run more than just cyclictest? Our experience says we
> >    don't need them to be reactive, even with rt-networking involved.
> >
> >
> > Our goal is to reach a high packing density of realtime VMs. Our
> > pragmatic first choice was to run all non-vcpu-threads on a shared
> > set of pcpus where we also run best-effort VMs and host load.
> > Now the OpenStack folks are not too happy with that, because it is
> > load outside the assigned resources, which leads to quota and
> > accounting problems.
> 
> If you wanted to go this route, you could just edit the
> "vcpu_pin_set" entry in nova.conf on the compute nodes so that nova
> doesn't actually know about all of the host pCPUs.  Then you could
> run host load and emulator threads on the pCPUs that nova doesn't
> know about, and there would be no quota/accounting issues in nova.
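As an illustrative sketch of that approach (the core numbers are assumptions, not from this thread): on a 16-core compute node you could reserve cores 0-3 for host load and emulator threads and expose only cores 4-15 to nova:

```ini
# /etc/nova/nova.conf on the compute node -- illustrative layout:
# cores 0-3 stay hidden from nova for host load and emulator threads,
# cores 4-15 are handed to nova for guest vcpus
[DEFAULT]
vcpu_pin_set = 4-15
```

Anything not listed in vcpu_pin_set is invisible to nova's placement and accounting, which is the point of the suggestion.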

That is exactly the idea, but OpenStack currently does not allow it.
No thread will ever end up on a core outside the vcpu_pin_set, and the
emulator/io-threads are controlled by OpenStack/libvirt. You would also
need a way to specify exactly which cores outside the vcpu_pin_set are
allowed for breaking out of that set.
On our compute nodes we also have cores for host-realtime tasks, e.g.
dpdk-based rt-networking.
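For context, the placement in question lives in the libvirt domain XML that nova generates. A minimal sketch (the cpusets are illustrative assumptions) of keeping vcpus on dedicated cores while emulator and io threads stay on a shared set might look like:

```xml
<!-- illustrative <cputune> fragment: vcpus on dedicated realtime cores,
     emulator and io threads confined to shared cores 0-3 -->
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <emulatorpin cpuset='0-3'/>
  <iothreadpin iothread='1' cpuset='0-3'/>
</cputune>
```

Since nova writes this XML itself, the cpusets cannot point outside the vcpu_pin_set without changes to nova, which is the limitation being described here.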

Henning

> Chris
> 
