[openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

Day, Phil philip.day at hp.com
Tue Dec 3 14:14:40 UTC 2013


Hi Daniel,

I spent some more time reading your write-up on the wiki (and it is a great write-up, BTW), and have a couple of further questions (I think my original ones are also still valid, but do let me know if / where I'm missing the point):

iv) In the worked example, where do the preferred_topology and mandatory_topology values come from? (For example, are these per-host configuration values?)
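
To make the question concrete, here is a minimal sketch of what I'd imagined - ordinary per-host nova.conf options registered through oslo.config.  The option names are just lifted from your write-up, and the defaults are made up; this isn't an existing config group:

    from oslo.config import cfg

    # Hypothetical per-host topology options (names taken from the
    # wiki write-up, values purely for illustration):
    topology_opts = [
        cfg.StrOpt('preferred_topology',
                   default='sockets=2,cores=4,threads=1',
                   help='Topology the driver should prefer when more '
                        'than one valid layout fits the vcpu count'),
        cfg.StrOpt('mandatory_topology',
                   default=None,
                   help='Hard limits the driver must never exceed'),
    ]

    cfg.CONF.register_opts(topology_opts)

If that's the intent, then it's the operator (per host) rather than the user who controls them, which would answer my question.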

v) You give an example where the combination of image_hw_cpu_topology and flavour means the instance can't be created (vcpus=2048), but that looks more like a flavour misconfiguration (unless there really is some node with that many vcpus).   The case that worries me more is where, for example, an image says it needs "max-sockets=1" and the flavour asks for more vcpus than can be provided from a single socket.   In this case the flavour is still valid, just not with this particular image - and that feels like a case that should fail validation at the API layer, not down on the compute node where the only options are to reschedule or go into an Error state.
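
To illustrate the kind of API-layer check I mean, here's a rough sketch (validate_topology and the max_* parameters are hypothetical - just the limits from your write-up expressed as arguments):

    def validate_topology(flavour_vcpus, max_sockets, max_cores,
                          max_threads):
        """Fail fast if the flavour's vcpu count can never fit
        within the image's declared topology limits."""
        limit = max_sockets * max_cores * max_threads
        if flavour_vcpus > limit:
            raise ValueError("flavour needs %d vcpus but the image "
                             "topology allows at most %d"
                             % (flavour_vcpus, limit))

    # e.g. an image with max-sockets=1, max-cores=2, max-threads=1
    # combined with a 4-vcpu flavour is rejected immediately:
    validate_topology(4, 1, 2, 1)   # raises ValueError

That way the user gets an immediate error back from the API rather than a reschedule or an instance stuck in an Error state.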

Phil  


> -----Original Message-----
> From: Day, Phil
> Sent: 03 December 2013 12:03
> To: 'Daniel P. Berrange'; OpenStack Development Mailing List (not for usage
> questions)
> Subject: RE: [openstack-dev] [Nova] Blueprint: standard specification of
> guest CPU topology
> 
> Hi,
> 
> I like the concept of allowing users to request a CPU topology, but have a
> few questions / concerns:
> 
> >
> > The host is exposing info about the vCPU count it is able to support and
> > the scheduler picks on that basis. The guest image is just declaring
> > upper limits on the topology it can support. So if the host is able to
> > support the guest's vCPU count, then the CPU topology decision should
> > never cause any boot failure. As such, CPU topology has no bearing on
> > scheduling, which is good because I think it would otherwise significantly
> > complicate the problem.
> >
> 
> i) Is that always true?  Some configurations (like ours) currently ignore vcpu
> count altogether, because what we're actually creating are VMs that are "n"
> vcpus wide (as defined by the flavour) where each vcpu is only some subset of
> the processing capacity of a physical core (there was a summit session on
> this: http://summit.openstack.org/cfp/details/218).  So if vcpu count isn't
> being used for scheduling, can you still guarantee that all topology selections
> can always be met?
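>
> To put some made-up numbers on that:
>
>     physical_cores = 16
>     core_fraction_per_vcpu = 0.25     # each vcpu gets 1/4 of a core
>     vcpu_capacity = int(physical_cores / core_fraction_per_vcpu)  # 64
>     # 64 vcpus get packed onto 16 real cores here, so a guest
>     # topology that assumes a 1:1 vcpu-to-core mapping can't be
>     # honoured no matter what layout the driver picks.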
> 
> ii) Even if you are counting vcpus and mapping them 1:1 against cores, are
> there not some topologies that are either more inefficient in terms of overall
> host usage and/or incompatible with other topologies (i.e. leave some spare
> resource unused in a way that means it can't be used by another topology
> that would otherwise fit)?  As a provider I don't want users to be able to
> determine, even indirectly, how efficiently the hosts are utilised.  There
> may be some topologies that I'm willing to allow (because they always pack
> efficiently) and others I would never allow.  Putting this into the control of
> the users via image metadata feels wrong in that case.  Maybe a flavour
> extra-spec (which is in the control of the cloud provider) would be a more
> logical fit for this kind of property?
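>
> Something along these lines, say (the key names are purely
> illustrative - they aren't existing extra-specs):
>
>     extra_specs = {
>         'hw:cpu_max_sockets': '2',    # provider-imposed ceilings
>         'hw:cpu_max_cores': '4',
>         'hw:cpu_max_threads': '1',
>     }
>
> i.e. set once per flavour by the operator, rather than supplied by
> whoever happens to upload the image.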
> 
> iii) I can see the logic of associating a topology with an image - but I don't
> really understand how that would fit with the image being used with different
> flavours.  What happens if a topology in the image just can't be implemented
> within the constraints of a selected flavour?  It kind of feels as if we either
> need a way to constrain images to specific flavours, or perhaps allow an
> image to express a preferred flavour / topology but allow the user to
> override these as part of the create request.
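>
> Roughly the precedence I'd imagine (all of the names here are
> hypothetical):
>
>     def effective_topology(user_override, image_meta, flavour_specs):
>         # Most specific source wins: an explicit user request, then
>         # the image's preference, then any flavour-level default.
>         for source in (user_override, image_meta, flavour_specs):
>             topology = source.get('cpu_topology')
>             if topology:
>                 return topology
>         return None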
> 
> Cheers,
> Phil
> 



