[openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology
Vui Chiap Lam
vuichiap at vmware.com
Tue Dec 3 07:05:02 UTC 2013
I too found the original BP a little hard to follow, so thanks for
writing up the wiki! I see that the wiki is now linked to the BP,
which is great as well.
The ability to express CPU topology constraints for the guests
has real-world use, and several drivers, including VMware, can definitely
benefit from it.
If I understand correctly, in addition to being an elaboration of the
BP text, the wiki also adds the following:
1. Instead of returning the best matching (num_sockets (S),
cores_per_socket (C), threads_per_core (T)) tuple, all applicable
(S,C,T) tuples are returned, sorted by S, then C, then T.
2. A mandatory topology can be provided in the topology computation.
I like 2. because there are multiple reasons why all of a hypervisor's
CPU resources cannot be allocated to a single virtual machine.
Given that the mandatory (I prefer "maximal") topology is probably fixed
per hypervisor, I wonder whether this information should also be used at
scheduling time to eliminate incompatible hosts outright.
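If the maximal topology were exposed per host, a scheduler-side check along
these lines could rule out incompatible hosts early. This is only a sketch of
the idea; the function name and parameters are hypothetical, not part of the
proposal:

```python
def host_can_fit(vcpus, max_sockets, max_cores, max_threads):
    """Hypothetical scheduler-side check: return True if at least one
    (sockets, cores, threads) topology with the requested vCPU count
    fits within the host's maximal topology."""
    for sockets in range(1, max_sockets + 1):
        for cores in range(1, max_cores + 1):
            for threads in range(1, max_threads + 1):
                if sockets * cores * threads == vcpus:
                    return True
    return False
```

A host whose maximal topology cannot factor the flavour's vCPU count (e.g.
5 vCPUs against a host capped at 2 sockets x 2 cores x 1 thread) would then
be eliminated before the virt driver is ever asked.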
As for 1., because of the order of precedence of the fields in the
(S,C,T) tuple, I am not sure how the preferred_topology comes into
play. Is it meant to help favor alternative values of S?
Also, it might be good to describe a case where returning a list of
(S,C,T) tuples instead of the best match is necessary. It seems that
deciding to pick anything other than the first item in the list requires
logic similar to that used to arrive at the list in the first place.
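To make that question concrete, here is roughly how I picture the enumeration
described in 1. (the names are mine, and I have used a plain lexicographic sort
to stand in for whatever S-then-C-then-T precedence the wiki defines):

```python
from itertools import product

def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    """Enumerate every (sockets, cores, threads) tuple whose product
    equals the requested vCPU count and that respects the maximum
    constraints, sorted by S, then C, then T."""
    return sorted(
        (s, c, t)
        for s, c, t in product(range(1, max_sockets + 1),
                               range(1, max_cores + 1),
                               range(1, max_threads + 1))
        if s * c * t == vcpus)
```

With, say, 4 vCPUs capped at 2 sockets / 4 cores / 1 thread this yields
[(1, 4, 1), (2, 2, 1)]; any caller that prefers the second entry over the
first already has to reason about S, C and T, which is the point above.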
----- Original Message -----
| From: "Daniel P. Berrange" <berrange at redhat.com>
| To: openstack-dev at lists.openstack.org
| Sent: Monday, December 2, 2013 7:43:58 AM
| Subject: Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology
| On Tue, Nov 19, 2013 at 12:15:51PM +0000, Daniel P. Berrange wrote:
| > For attention of maintainers of Nova virt drivers
| Does anyone from the Hyper-V or VMware drivers wish to comment on this?
| > A while back there was a bug requesting the ability to set the CPU
| > topology (sockets/cores/threads) for guests explicitly
| > https://bugs.launchpad.net/nova/+bug/1199019
| > I countered that setting explicit topology doesn't play well with
| > booting images with a variety of flavours with differing vCPU counts.
| > This led to the following change which used an image property to
| > express maximum constraints on CPU topology (max-sockets/max-cores/
| > max-threads) which the libvirt driver will use to figure out the
| > actual topology (sockets/cores/threads)
| > https://review.openstack.org/#/c/56510/
| > I believe this is a prime example of something we must co-ordinate
| > across virt drivers to maximise happiness of our users.
| > There's a blueprint but I find the description rather hard to
| > follow
| > https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
| > So I've created a standalone wiki page which I hope describes the
| > idea more clearly
| > https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
| > Launchpad doesn't let me link the URL to the blueprint since I'm not
| > the blueprint creator :-(
| > Anyway, this mail is to solicit input on the proposed standard way to
| > express this, which is hypervisor portable, and on the addition of some
| > shared code for doing the calculations which virt driver impls can
| > just call into rather than re-inventing.
| > I'm looking for buy-in to the idea from the maintainers of each
| > virt driver that this conceptual approach works for them, before
| > we go merging anything with the specific impl for libvirt.
| |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
| |: http://libvirt.org -o- http://virt-manager.org :|
| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|