[openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

John Garbutt john at johngarbutt.com
Wed Nov 20 10:49:00 UTC 2013


On 20 November 2013 10:19, Daniel P. Berrange <berrange at redhat.com> wrote:
> On Wed, Nov 20, 2013 at 11:18:24AM +0800, Wangpan wrote:
>> Hi Daniel,
>>
>> Thanks for your help in advance.
>> I have read your wiki page and it explains this issue very clearly.
>> But I have a question about the 'technical design': you give us a prototype method as below:
>> def get_guest_cpu_topology(self, inst_type, image, preferred_topology, mandatory_topology):
>> My question is: how/where can we get these two parameters, 'preferred_topology' and 'mandatory_topology'?
>> From the nova config file, or from the hypervisor?
>
> I expected that those would be determined dynamically by the virt
> drivers using a variety of approaches. Probably best to start off
> with it fairly simple. For example, preferred topology may simply
> be used to express that the driver prefers the use of sockets and
> cores over threads, while mandatory topology would encode the
> maximum it was prepared to accept.
>
> So I might say libvirt would just go for a default of
>
>   preferred = { max_sockets = 64, max_cores = 4, max_threads = 1 }
>   mandatory = { max_sockets = 256, max_cores = 8, max_threads = 2 }
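
As a purely illustrative sketch (not the proposed shared code; the helper name
and the assumption that it is handed the flavour's vCPU count are mine), a
calculation along those lines might spread vCPUs across sockets first, then
cores, then threads, falling back from the preferred limits to the mandatory
ones when the vCPU count does not fit:

    def pick_cpu_topology(vcpus, preferred, mandatory):
        # Sketch only: try the preferred limits first, then the mandatory
        # ones, favouring many sockets over cores and cores over threads.
        for limits in (preferred, mandatory):
            for threads in range(1, limits['max_threads'] + 1):
                for cores in range(1, limits['max_cores'] + 1):
                    sockets, leftover = divmod(vcpus, cores * threads)
                    if leftover == 0 and sockets <= limits['max_sockets']:
                        return sockets, cores, threads
        # Last resort: one core per socket, no threads.
        return vcpus, 1, 1

    preferred = {'max_sockets': 64, 'max_cores': 4, 'max_threads': 1}
    mandatory = {'max_sockets': 256, 'max_cores': 8, 'max_threads': 2}

    print(pick_cpu_topology(8, preferred, mandatory))   # (8, 1, 1)
    print(pick_cpu_topology(96, preferred, mandatory))  # (48, 2, 1)

The real shared code would obviously need to handle awkward vCPU counts more
carefully; this is just to show the shape of the preferred/mandatory fallback.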
>
> When libvirt gets some more intelligence around NUMA placement, allowing
> it to map guest NUMA topology to host NUMA topology, then it might get
> more inventive in tailoring the 'preferred' value to match how it wants
> to place the guest.
>
> For example, if libvirt is currently trying to fit guests entirely onto
> one host socket by default, then it might decide to encourage the use of
> cores by saying
>
>    preferred = { max_sockets = 1, max_cores = 8, max_threads = 2 }
>
> Conversely, if it knows that the memory size of the flavour will exceed
> one NUMA node, then it will want to force the use of multiple sockets and
> discourage cores/threads
>
>    preferred = { max_sockets = 64, max_cores = 2, max_threads = 1 }
>
> Again, I think we want to just keep it fairly dumb & simple to start
> with.
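
A rough sketch of that kind of tailoring (the NUMA inputs here are assumed to
be available to the driver somehow; this is not an existing API, and the
values simply mirror the two example 'preferred' dicts above):

    def preferred_limits(flavour_memory_mb, numa_node_memory_mb):
        # Sketch only: pick the 'preferred' limits from NUMA fit.
        if flavour_memory_mb <= numa_node_memory_mb:
            # Guest fits on one NUMA node: encourage cores on one socket.
            return {'max_sockets': 1, 'max_cores': 8, 'max_threads': 2}
        # Guest spans NUMA nodes: encourage sockets, discourage cores/threads.
        return {'max_sockets': 64, 'max_cores': 2, 'max_threads': 1}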
>
>> From: "Daniel P. Berrange" <berrange at redhat.com>
>> Sent: 2013-11-19 20:15
>> Subject: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology
>> To: "openstack-dev" <openstack-dev at lists.openstack.org>
>> Cc:
>>
>> For attention of maintainers of Nova virt drivers
>>
>> A while back there was a bug requesting the ability to set the CPU
>> topology (sockets/cores/threads) for guests explicitly
>>
>>    https://bugs.launchpad.net/nova/+bug/1199019
>>
>> I countered that setting an explicit topology doesn't play well with
>> booting images across a variety of flavours with differing vCPU counts.
>>
>> This led to the following change, which uses an image property to
>> express maximum constraints on CPU topology (max-sockets/max-cores/
>> max-threads), which the libvirt driver will use to figure out the
>> actual topology (sockets/cores/threads):
>>
>>   https://review.openstack.org/#/c/56510/
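
To illustrate the driver side of that (the property names and the fallback
maxima below are assumptions for the sake of the example, not necessarily what
the review settles on):

    NO_LIMIT = 65536

    def max_topology_from_image(image_meta):
        # Sketch only: pull per-image caps out of the image metadata,
        # treating a missing property as "no practical limit".
        props = image_meta.get('properties', {})
        return {
            'max_sockets': int(props.get('hw_cpu_max_sockets', NO_LIMIT)),
            'max_cores': int(props.get('hw_cpu_max_cores', NO_LIMIT)),
            'max_threads': int(props.get('hw_cpu_max_threads', NO_LIMIT)),
        }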
>>
>> I believe this is a prime example of something we must co-ordinate
>> across virt drivers to maximise happiness of our users.
>>
>> There's a blueprint but I find the description rather hard to
>> follow
>>
>>   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
>>
>> So I've created a standalone wiki page which I hope describes the
>> idea more clearly
>>
>>   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
>>
>> Launchpad doesn't let me link the URL to the blueprint since I'm not
>> the blueprint creator :-(
>>
>> Anyway, this mail is to solicit input on the proposed standard,
>> hypervisor-portable way to express this, and on the addition of some
>> shared code for doing the calculations which virt driver impls can
>> just call into rather than re-inventing it.
>>
>> I'm looking for buy-in from the maintainers of each virt driver that
>> this conceptual approach works for them, before we go merging anything
>> with the libvirt-specific impl.

This seems to work from a XenAPI perspective.

Right now I feel I would ignore threads and really just worry about a
setting for "cores-per-socket":
http://support.citrix.com/article/CTX126524

That setting would certainly be very useful for Windows guests.
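
As a rough sketch of what that mapping might look like (assuming the shared
code hands the driver a (sockets, cores, threads) tuple; threads are simply
ignored here):

    def xenapi_vcpu_settings(sockets, cores, threads):
        # Sketch only: XenAPI exposes a single "cores-per-socket" platform
        # flag (see the Citrix article above), so the socket count is implied
        # by total vCPUs / cores-per-socket, and threads are ignored.
        return {
            'VCPUs_max': str(sockets * cores),
            'platform': {'cores-per-socket': str(cores)},
        }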

John


