[openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

Wangpan hzwangpan at corp.netease.com
Fri Jan 17 07:30:59 UTC 2014


Hi Chet,
I have read this patch, which may have been committed by your colleague: https://review.openstack.org/#/c/63254
and I have a question:
Case 1:
A user wants to build an 8-vCPU instance. There may be seven flavors with 8 vCPUs, each with different topology extra specs:
1s*4c*2t (s=sockets, c=cores, t=threads)
1s*8c*1t
2s*4c*1t
2s*2c*2t
4s*2c*1t
4s*1c*2t
8s*1c*1t
If the user builds this instance manually, via the CLI or Horizon, he knows the topologies supported by the image and can choose the flavor/topology he really wants (e.g. 2s*4c*1t),
but if the instance is built through the Nova RESTful API by another service such as Heat, which flavor should be chosen, and how?
A more serious problem is that even a human user may not know how to choose the flavor with the best topology.

We should choose a `better` topology (it may not be the best one) for all users who ask for it (by setting the topology in an image property); otherwise they get the default one (socket count = vCPU count).
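As a minimal sketch (plain Python outside of Nova; the cap of two threads per core is only an assumption made to reproduce the seven candidates listed above), enumerating the possible splits of a vCPU count looks like this:

def candidate_topologies(vcpus, max_threads=2):
    """Yield every (sockets, cores, threads) triple whose product is vcpus."""
    for sockets in range(1, vcpus + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, per_socket + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                yield (sockets, cores, threads)

if __name__ == "__main__":
    for topo in candidate_topologies(8):
        print("%ds*%dc*%dt" % topo)   # prints the seven candidates above
    # The default mentioned above (socket count == vCPU count) is 8s*1c*1t.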

2014-01-17



Wangpan



From: Chet Burgess <cfb at metacloud.com>
Sent: 2013-12-22 07:28
Subject: Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Cc:

After reading up on the proposed design I have some concerns, primarily around the use of image properties to represent the topology.


While I see the relationship between images and CPU topology (as referenced in the wiki, Windows licensing and its restrictions on sockets being a prime example), it seems very confusing to define CPU topology information in two places. Flavors already define the maximum number of CPUs that can be allocated, and all CPU-related scheduling decisions today use the VCPU value specified by the flavor.


I foresee the following operational issues with having these split:
- Having CPU topology restrictions in the image may lead to the inability to resize VMs to take advantage of additional compute power. It's not uncommon in enterprise deployments for VMs to be resized as the need for the services running on the VM increases. If the image defines a portion of the topology, then resizing a VM may result in an incompatible or sub-optimal topology. This could lead to resizes requiring a rebuild of the VM.
- A single image may have a number of valid CPU topologies. Work would have to be done to allow the user to select which topology they wanted, or images would have to be duplicated multiple times just to specify alternate, valid CPU topologies.


The flavor should specify the CPU topology as well as the maximum VCPU count. This should allow resizes to work with minimal change, and it avoids the need for complex selection logic among multiple valid topologies, or duplication of images. Additionally, the path of least resistance is to simply represent this as extra_specs on the flavor. Finally, extra_specs has the benefit of already being fully supported by the CLI and Horizon.
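As a hedged illustration of topology-in-the-flavor (the extra_specs key names below, hw:cpu_sockets and friends, are assumptions made for the sketch, not an agreed interface), a virt driver could derive the guest topology directly from the flavor:

flavor = {
    "name": "m1.large.2s4c",
    "vcpus": 8,
    "extra_specs": {
        "hw:cpu_sockets": "2",
        "hw:cpu_cores": "4",
        "hw:cpu_threads": "1",
    },
}

def topology_from_flavor(flavor):
    """Read the topology from extra_specs, defaulting to one socket per vCPU."""
    specs = flavor.get("extra_specs", {})
    sockets = int(specs.get("hw:cpu_sockets", flavor["vcpus"]))
    cores = int(specs.get("hw:cpu_cores", 1))
    threads = int(specs.get("hw:cpu_threads", 1))
    if sockets * cores * threads != flavor["vcpus"]:
        raise ValueError("topology does not match the flavor's vCPU count")
    return sockets, cores, threads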


Images would still need the ability to specify restrictions on the topology. It should be fairly easy to enhance the existing core filter of the scheduler to handle the basic compatibility checks required to validate that a given image and flavor are compatible (note: I suspect this has to occur regardless of the implementation, as having the image specify the topology could still lead to incompatible combinations). Adding restrictions
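A rough sketch of that compatibility check (not the actual scheduler filter; the image property names are assumptions for illustration):

def image_flavor_compatible(image_props, topology):
    """Return True if (sockets, cores, threads) respects the image's maxima."""
    sockets, cores, threads = topology
    max_sockets = int(image_props.get("hw_cpu_max_sockets", 65536))
    max_cores = int(image_props.get("hw_cpu_max_cores", 65536))
    max_threads = int(image_props.get("hw_cpu_max_threads", 65536))
    return (sockets <= max_sockets and
            cores <= max_cores and
            threads <= max_threads)

# e.g. a Windows image limited to 2 sockets rejects an 8s*1c*1t flavor
# but accepts 2s*4c*1t:
assert not image_flavor_compatible({"hw_cpu_max_sockets": "2"}, (8, 1, 1))
assert image_flavor_compatible({"hw_cpu_max_sockets": "2"}, (2, 4, 1))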


--
Chet Burgess
Vice President, Engineering | Metacloud, Inc.
Email: cfb at metacloud.com | Tel: 855-638-2256, Ext. 2428


On Nov 19, 2013, at 4:15, Daniel P. Berrange <berrange at redhat.com> wrote:


For attention of maintainers of Nova virt drivers

A while back there was a bug requesting the ability to set the CPU
topology (sockets/cores/threads) for guests explicitly

  https://bugs.launchpad.net/nova/+bug/1199019

I countered that setting explicit topology doesn't play well with
booting images with a variety of flavours with differing vCPU counts.

This led to the following change which used an image property to
express maximum constraints on CPU topology (max-sockets/max-cores/
max-threads) which the libvirt driver will use to figure out the
actual topology (sockets/cores/threads)

 https://review.openstack.org/#/c/56510/
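As a sketch of the kind of calculation that change implies (the preference order, more sockets first and the fewest threads that fit, is an assumption made for illustration, not necessarily what the libvirt driver does), picking an actual topology from the flavour's vCPU count and the image maxima could look like:

def pick_topology(vcpus, max_sockets, max_cores, max_threads):
    """Pick a (sockets, cores, threads) topology within the image's maxima."""
    for sockets in range(min(vcpus, max_sockets), 0, -1):
        if vcpus % sockets:
            continue
        rest = vcpus // sockets
        for threads in range(1, max_threads + 1):
            if rest % threads:
                continue
            cores = rest // threads
            if cores <= max_cores:
                return (sockets, cores, threads)
    return None  # no topology satisfies the constraints

# 8 vCPUs with the image capped at 2 sockets -> (2, 4, 1)
print(pick_topology(8, max_sockets=2, max_cores=64, max_threads=1))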

I believe this is a prime example of something we must co-ordinate
across virt drivers to maximise happiness of our users.

There's a blueprint but I find the description rather hard to
follow

 https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology

So I've created a standalone wiki page which I hope describes the
idea more clearly

 https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology

Launchpad doesn't let me link the URL to the blueprint since I'm not
the blueprint creator :-(

Anyway, this mail is to solicit input on the proposed standard way to
express this, which is hypervisor portable, and on the addition of some
shared code for doing the calculations which virt driver impls can
just call into rather than re-inventing.

I'm looking for buy-in to the idea from the maintainers of each
virt driver that this conceptual approach works for them, before
we go merging anything with the specific impl for libvirt.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
