[openstack-dev] Next steps for Whole Host allocation / Pclouds

Sylvain Bauza sylvain.bauza at bull.net
Fri Jan 24 16:27:04 UTC 2014

On 23/01/2014 18:17, Day, Phil wrote:
> Just to be clear, I'm not advocating putting any form of automated instance life-cycle into Nova - I agree that belongs in an external system like Climate.
> However, for a reservations model to work efficiently it seems to me you need two complementary types of resource available - for every lease you accept promising resource at a certain point, you need some resource that you can free up; otherwise you have to allocate the resource now to be sure it will be available at the required time in the future (which kind of negates the point of a reservation). That is unless you're an airline operator, in which case you can of course sell an infinite number of seats on any plane ;-)

I wish I were an airline operator, but I'm not ;-)
That said, we ensure that we can match future needs because we 
intentionally 'lock' a set of compute hosts for Climate, which can't 
serve any other purpose. The current implementation is based on a 
dedicated aggregate (we call it the 'freepool') plus a relation table 
between the hosts and the reservations (so we elect the hosts at lease 
creation, but we only dedicate them to the user at lease start).
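To make the mechanism concrete, here is a toy sketch of the bookkeeping described above: hosts are elected out of the freepool at lease creation and only handed to the tenant at lease start. All names (`freepool`, `host_reservations`, `create_lease`) are illustrative, not Climate's actual schema or API.

```python
# Toy model of the freepool + relation table (illustration only).
freepool = {'compute1', 'compute2', 'compute3'}  # dedicated aggregate
host_reservations = {}  # host -> reservation_id (the relation table)


def create_lease(reservation_id, hosts_needed):
    """Elect hosts now so the capacity is guaranteed at lease start."""
    elected = []
    for host in sorted(freepool):
        if host not in host_reservations and len(elected) < hosts_needed:
            host_reservations[host] = reservation_id
            elected.append(host)
    if len(elected) < hosts_needed:
        # Not enough free hosts: roll back the partial election.
        for host in elected:
            del host_reservations[host]
        raise RuntimeError('not enough free hosts in the freepool')
    return elected
```

The point of electing at creation time rather than at start time is exactly the one made above: the reserved capacity is taken out of circulation up front, so the lease cannot fail later for lack of hosts.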

I agree this is a first naïve implementation, which requires defining 
a certain set of resources for managing the dedication of compute 
hosts. Please note that the current implementation for virtual 
instances is quite different: instances are booted at lease creation, 
then shelved, and then unshelved at lease start.

> So it feels like as well as users being able to say "This instance must be started on this date", you also need the other part of the set, which is more like spot instances that the user pays less for on the basis that they will be deleted if needed. Both types should be managed by Climate, not Nova. Of course Climate would also, I think, need a way to manage when spot instances go away - it would be a problem to depend on X spot instances being there to match the capacity of a lease, only to find they had been deleted by the user in Nova some time ago and the capacity now used for something else.

Agreed, we could potentially implement spot instances as well, but it 
occurs to me that's only another option at lease creation, where you 
say you're OK with your hosts being recycled for other users before 
the end of the lease you asked for.

Anyway, I'm not a fan of using aggregates to 'lock in' dedicated 
hosts. I'm wondering if we could tag hosts in Nova with a tenant_id 
so that it could be read by a scheduler filter. That would require 
extending the ComputeNode model with a tenant_id, IMHO.
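A minimal standalone sketch of what such a filter could look like, assuming the ComputeNode model (and thus the scheduler's host state) grows a tenant_id column - the `HostState` stand-in and the `DedicatedHostFilter` name are assumptions for illustration, not existing Nova code:

```python
class HostState(object):
    """Minimal stand-in for the scheduler's host state (illustration)."""
    def __init__(self, host, tenant_id=None):
        self.host = host
        self.tenant_id = tenant_id  # hypothetical new ComputeNode column


class DedicatedHostFilter(object):
    """Pass a host only if it is undedicated, or dedicated to the
    requesting tenant (sketch of the idea, not real Nova code)."""

    def host_passes(self, host_state, filter_properties):
        if host_state.tenant_id is None:
            return True  # host is not dedicated to anyone
        return host_state.tenant_id == filter_properties.get('project_id')
```

The `host_passes(host_state, filter_properties)` shape mirrors how Nova's scheduler filters are written, which is what would let Climate set the tag at lease start and have the scheduler enforce the dedication without any aggregate juggling.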

Is there an etherpad where we could discuss the future blueprint?

