[openstack-dev] Climate - Was: Next steps for Whole Host allocation / Pclouds
Sylvain Bauza
sylvain.bauza at bull.net
Tue Jan 21 13:20:34 UTC 2014
Hi Phil,
On 21/01/2014 13:13, Day, Phil wrote:
>> Hi Phil and Jay,
>>
>> Phil, maybe you remember I discussed with you about the possibility of using pclouds with Climate, but we finally ended up using Nova aggregates and a dedicated filter.
>> That works pretty well. We don't use instance_properties
>> but rather aggregate metadata; the idea remains the same for isolation.
> Sure do, and I had a question around that which has been buzzing in my head for a while now.
>
> I can see how you can use an aggregate as a way of isolating the capacity of some specific hosts (Pclouds was pretty much doing the same thing - it was in effect an abstraction layer to surface aggregates to users), and I can see that you can then plan how to use that capacity against a list of reservations.
>
> It does though seem that you're confined to working on some subset of the physical hosts, which I'd have thought could become quite restrictive in some cases and hard to optimize for capacity (if, for example, a user wants to combine reservations with anti-affinity, then you'd need a larger pool of hosts to work with).
My bad, documentation is still missing for Climate; that's something we
plan to deliver right after the 0.1 release arriving this week. Had that
documentation been available, you would have seen that Climate is not
only focused on compute host reservations, but rather plans to cover any
OpenStack object (virtual instances and compute hosts are implemented
for the 0.1 release; virtual Neutron routers or Heat stacks are planned).
The current implementation for compute hosts is done using aggregates,
but that's not the case for virtual instances. That said, thanks to your
previous email, I'm now thinking about getting rid of host aggregate
management and instead using hosts directly, with possible tagging
exposed to the scheduler. That's still unclear in my mind, but it would
have the nice benefit of removing what we call the 'freepool', i.e. the
hosts dedicated to Climate.
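For the record, here is a minimal sketch of how such an aggregate-based
setup can be driven with python-novaclient; the aggregate names, the
metadata key and the credential handling are only illustrative, not
necessarily what Climate does internally:

    from novaclient import client

    # Placeholder credentials; real deployments will differ.
    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://keystone:5000/v2.0')

    # Hosts dedicated to Climate sit in a 'freepool' aggregate until
    # they are reserved (the name 'freepool' is illustrative here).
    freepool = nova.aggregates.create('freepool', None)
    nova.aggregates.add_host(freepool, 'compute-01')

    # When a lease starts, the host moves into a per-lease aggregate
    # whose metadata is read by a dedicated scheduler filter to
    # isolate the tenant holding the lease.
    lease_agg = nova.aggregates.create('climate-lease-42', None)
    nova.aggregates.set_metadata(lease_agg,
                                 {'filter_tenant_id': 'TENANT_UUID'})
    nova.aggregates.remove_host(freepool, 'compute-01')
    nova.aggregates.add_host(lease_agg, 'compute-01')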
In order to do capacity planning and be able to guarantee that Climate
can honour a reservation requested for the future, one possible way
would be to say "let's dedicate up to X percent of our compute hosts to
Climate, whatever those hosts are, and be smart about selecting them".
That way, there is no problem mixing the anti-affinity filter and the
Climate filter.
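As a toy illustration of that "X percent" idea (the numbers and the
function are made up for the example, not anything Climate implements
today):

    # Up to 20% of the compute hosts can be promised to Climate.
    RESERVABLE_RATIO = 0.20

    def reservation_fits(requested_hosts, reserved_hosts, total_hosts):
        """Check whether a new reservation fits within the Climate share."""
        climate_share = int(total_hosts * RESERVABLE_RATIO)
        return reserved_hosts + requested_hosts <= climate_share

    # 100 hypervisors, 15 already reserved over the same time window:
    print(reservation_fits(4, 15, 100))   # True  -> lease can be guaranteed
    print(reservation_fits(10, 15, 100))  # False -> lease would be denied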
Again, keep in mind that Climate's scope is to provide reservations not
only for hosts, but also for instances, etc.
>
> It sort of feels to me that a significant missing piece of having a reservation system for Nova is that there is no matching concept within Nova of the opposite of a reservation - a spot instance (i.e. an instance which the user gets for a lower price in return for knowing it can be deleted by the system if the capacity is needed for another higher-priority request - e.g. a reservation).
>
> If we had a concept of spot instances in Nova, and the corresponding process to remove them, then the capacity demands of reservations could be balanced by the amount of spot-instance usage in the system (and this would seem a good role for an external controller).
Here you are mixing two different Climate concerns: instances and hosts.
We already define in Climate what we call a 'best-effort' lease, i.e. a
lease that Climate cannot guarantee because the requirements cannot be
fully matched (for example, say a user wants 5 hosts while Climate can
only give him 4: a 'best-effort' lease would be created with 4 hosts
instead of 5, while a "normal" lease would be denied by Climate). Once
the hosts are leased, the user can boot as many instances on them as he
wants.
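To make the difference concrete, a hypothetical sketch of that decision
(the names are mine, not the actual Climate plugin API):

    class NotEnoughHosts(Exception):
        pass

    def allocate_hosts(requested, free_hosts, best_effort=False):
        """Pick hosts for a lease, honouring best-effort semantics."""
        if len(free_hosts) >= requested:
            return free_hosts[:requested]
        if best_effort:
            # e.g. the user asked for 5 hosts but only 4 are free: grant 4.
            return list(free_hosts)
        # A "normal" lease is simply denied when it cannot be fully satisfied.
        raise NotEnoughHosts("lease denied: %d hosts asked, %d free"
                             % (requested, len(free_hosts)))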
The virtual instances plugin is another possible use: provided you want
to provision an instance for a certain amount of time, Nova will shelve
it upon creation, Climate will unshelve it when the lease starts, and
Climate will destroy it at the lease end.
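With python-novaclient, that lifecycle roughly maps to the calls below
(credentials and the instance UUID are placeholders; how Climate
actually sequences these calls internally may differ):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://keystone:5000/v2.0')
    server = nova.servers.get('INSTANCE_UUID')

    nova.servers.shelve(server)    # at creation: the instance is parked
    # ... time passes until the lease starts ...
    nova.servers.unshelve(server)  # at lease start: the instance is usable
    # ... the lease runs ...
    nova.servers.delete(server)    # at lease end: the instance is destroyed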
Both plugins (the virtual instances plugin and the compute hosts plugin)
need to define what we call a termination statement: what should be done
when the lease ends? With compute hosts, if instances are still running,
should we kill them, or leave them alone and not put the host back into
the freepool? All these behaviours should be configuration-driven, so
that the admin has the choice.
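Something along these lines is what I have in mind, configuration-wise
(the option name, values and group are hypothetical, nothing that exists
in Climate today):

    from oslo.config import cfg

    termination_opts = [
        cfg.StrOpt('on_lease_end',
                   default='terminate_instances',
                   help='Hypothetical termination statement for the compute '
                        'hosts plugin: either terminate remaining instances '
                        'and return the host to the freepool, or keep them '
                        'running and leave the host out of the freepool.'),
    ]

    cfg.CONF.register_opts(termination_opts, group='physical:host')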
A spot instance in our system would correspond to allowing Climate to
free up the resources (kill the instance, or whatever) before the lease
ends; that's another scenario, but a possible one.
>
> I'm wondering if managing spot instances and reservations across the whole of a Nova system wouldn't be a more general use case than having to manage this within a specific aggregate - or am I missing something?
The main point is that you may want to provision objects other than just
instances or hosts, like Cinder volumes. That's why we think
reservations need to be managed by a separate OpenStack service. That
said, we personally think Climate and Nova can interact, at least for
tenancy isolation or the scheduling of hosts (but that's Gantt's scope),
and I would be glad to help provide Nova with such missing pieces.
-Sylvain
> Cheers,
> Phil