[openstack-dev] Next steps for Whole Host allocation / Pclouds

Day, Phil philip.day at hp.com
Mon Jan 20 15:18:18 UTC 2014


Hi Folks,

The original (and fairly simple) driver behind whole-host-allocation (https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users to get guaranteed isolation for their instances.  This then grew somewhat along the lines of "If they in effect have a dedicated host, then wouldn't it be great if the user could also control some aspects of the scheduling, access for other users, etc.".  The Proof of Concept I presented at the Icehouse Design summit did this by providing API extensions that in effect manipulate an aggregate and the scheduler filters used with that aggregate.  https://etherpad.openstack.org/p/NovaIcehousePclouds

Based on the discussion and feedback from the design summit session, it became clear that this approach was headed into a difficult middle ground between a very simple approach for users who just want isolation for their instances, and a fully delegated admin model which would allow any admin operation to be scoped to a specific set of servers/flavours/instances.

I've spent some time since mulling over what it would take to add some kind of "scoped admin" capability into Nova, and my current thinking is that it would be a pretty big change, because there isn't really a concept of "ownership" once you get beyond instances and a few related objects.  Also, with TripleO it's becoming easier to set up new copies of a Nova stack to control a specific set of hosts, and that in effect provides the same degree of scoped admin in a much more direct way.  The sort of model I'm thinking of here is a system where services such as Glance, Cinder and maybe Neutron are shared by a number of Nova services.  There are still a couple of things needed to make this work, such as limiting tenant access to regions in Keystone, but that feels like a better layer at which to address this kind of issue.

In terms of the original driver of just guaranteeing instance isolation, we could (as suggested by Alex Glikson and others) implement this simply as a new instance property with an appropriate scheduler filter (i.e. for this type of instance, only allow scheduling to hosts that are either empty or running only instances for the same tenant).  The attribute would then be passed through in notification messages etc. for the billing system to process.
This would be pretty much the peer of AWS dedicated instances.
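The isolation rule itself is simple enough to state in a few lines.  As a minimal sketch (the function name and argument shapes are illustrative, not actual Nova code), the check for "empty, or running only this tenant's instances" boils down to:

```python
def tenant_isolated(host_projects, tenant_id):
    """Return True if the host is acceptable for a dedicated instance.

    host_projects: project ids of the instances currently on the host
                   (empty list means an empty host).
    tenant_id:     project id of the tenant requesting the instance.
    """
    # An empty host passes trivially; otherwise every existing
    # instance must belong to the requesting tenant.
    return all(project == tenant_id for project in host_projects)
```

An empty host or a host running only the same tenant's instances passes; any host with another tenant's instance is rejected.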

The host_state object already has the num_instances_by_project data required by the scheduler filter, and the stats field in the compute manager's resource tracker also has this information - so both the new filter and the additional limits check in the compute manager look fairly straightforward to implement.
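To make the filter side concrete, here is a self-contained sketch of how such a filter could consume num_instances_by_project.  The class and method names follow the shape of Nova's host filters but are hypothetical, and HostState is a stub standing in for the real scheduler object:

```python
class HostState:
    """Stub for the scheduler's host_state; the real object carries
    much more, but only num_instances_by_project matters here."""
    def __init__(self, num_instances_by_project):
        # dict mapping project id -> count of instances on this host
        self.num_instances_by_project = num_instances_by_project


class TenantIsolationFilter:
    """Hypothetical filter: pass a host only if it is empty or runs
    instances solely for the requesting tenant."""

    def host_passes(self, host_state, filter_properties):
        tenant_id = filter_properties['project_id']
        counts = host_state.num_instances_by_project
        # Ignore zero counts; any non-zero count for another project
        # means the host is not isolated for this tenant.
        return all(project == tenant_id
                   for project, n in counts.items() if n > 0)
```

The equivalent check in the compute manager's claims/limits path would run against the resource tracker's stats rather than host_state, but the logic is the same.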

It's somewhat beyond the scope of Nova, but the resulting billing model in this case is more complex, as the user isn't telling you explicitly how many dedicated hosts they are going to consume.  AWS just charges a flat rate per region for having any number of dedicated instances - if you wanted to charge per dedicated host, then it'd be difficult to warn the user before they create a new instance that they are about to branch onto a new host.

Would welcome thoughts on the above,
Phil