On 10/9/13 6:53 AM, Mike Spreitzer wrote:
> Yes, that helps. Please, guys, do not interpret my questions as
> hostility, I really am just trying to understand. I think there is
> some overlap between your concerns and mine, and I hope we can work
> together.
No probs at all, and I don't see any sign of hostility. Potential
collaboration and mutual understanding really are how we perceive your
questions...
> Sticking to the physical reservations for the moment, let me ask for
> a little more explicit detail. In your outline below, late in the
> game you write "the actual reservation is performed by the lease
> manager plugin". Is that the point in time when something (the lease
> manager plugin, in fact) decides which hosts will be used to satisfy
> the reservation?
Yes. The reservation service should return only a Pcloud uuid that is
empty. The description of host capabilities and extra-specs is only
defined as metadata of the Pcloud at this point.
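To make that concrete, here is a minimal sketch of what such a freshly
created reservation could look like. It is purely illustrative: the
field names (pcloud_id, capabilities, extra_specs, host_count) are
assumptions made up for the example, not the actual Climate API.

import uuid

# Illustrative only: a physical reservation initially yields an empty
# Pcloud identified by a uuid; the requested host capabilities and
# extra-specs exist only as metadata attached to it, no hosts yet.
reservation = {
    "pcloud_id": str(uuid.uuid4()),  # empty Pcloud, no hosts assigned yet
    "metadata": {
        "capabilities": {"vcpus": 16, "memory_mb": 65536},  # assumed keys
        "extra_specs": {"capabilities:hypervisor_type": "QEMU"},
        "host_count": 4,
    },
}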
> Or is that decided up-front when the reservation is made? I do not
> understand how the lease manager plugin can make this decision on its
> own; isn't the Nova scheduler also deciding how to use hosts? Why
> isn't there a problem due to two independent allocators making
> allocations of the same resources (the system's hosts)?
The way we are designing it excludes race conditions between the Nova
scheduler and the lease manager plugin for host reservations, because
the lease manager plugin will use a private pool of hosts for
reservation (the reservation pool) that is not shared with the Nova
scheduler. In our view, this is not a design convenience but a
deliberate choice: what we'd really like to achieve is energy-efficiency
management based on a reservation backlog, and possibly dynamic
management of host resources between the reservation pool and the
multi-tenant pool. A Climate scheduler filter in Nova will do the
triage, filtering out hosts that belong to the reservation pool or are
reserved in an active lease (a sketch of such a filter follows the
options below). Another, longer-term, goal behind this (it was actually
the primary justification for the reservation pool) is that the lease
manager plugin could turn machines off to save electricity when the
reservation backlog allows it, and turn them back on when a lease kicks
in, if needed. We anticipate that the resource management algorithms and
heuristics behind that behavior are non-trivial, but we believe they
would be hard to achieve without a reservation backlog and some form of
capacity management capability left open to the provider. In
particular, things become much trickier when it comes to deciding what
to do with the reserved hosts when a lease ends. We foresee a few
options:

1) Forcibly kill the instances running on the reserved hosts and move
the hosts back to the reservation pool for the next lease to come.

2) Keep the instances running on the reserved hosts and move the hosts
to an intermediary "recycling pool" until all the instances die, at
which point the hosts released from duty can return to the reservation
pool. Cases 1 and 2 could optionally be augmented with a grace period.

3) Keep the instances running on the reserved hosts and move the hosts
to the multi-tenant pool. It would then be up to the operator to
repopulate the reservation pool using free hosts, which would require
administrative tasks like disabling hosts, instance migrations, etc.;
in other words, certainly a pain if not fully automated.
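To illustrate the triage mentioned above, here is a rough,
self-contained sketch of the logic such a Climate scheduler filter
could apply. It is not the actual implementation: the metadata keys
("climate:reservation_pool", "climate:lease_id") and the "reservation"
scheduler hint are assumptions made up for the example.

def host_passes(host_aggregate_metadata, scheduler_hints):
    """Return True if a host may be used for this boot request."""
    in_reservation_pool = host_aggregate_metadata.get(
        "climate:reservation_pool") == "true"
    lease_on_host = host_aggregate_metadata.get("climate:lease_id")
    requested_lease = (scheduler_hints or {}).get("reservation")

    if requested_lease:
        # The boot request carries a lease handle: only hosts reserved
        # for that very lease are acceptable.
        return lease_on_host == requested_lease
    # Ordinary multi-tenant request: exclude hosts sitting in the
    # reservation pool or reserved by any active lease.
    return not in_reservation_pool and lease_on_host is None

# A regular request must not land on a host of the reservation pool:
assert not host_passes({"climate:reservation_pool": "true"}, {})
# A request hinting lease "lease-1" may use a host reserved for it:
assert host_passes({"climate:lease_id": "lease-1"},
                   {"reservation": "lease-1"})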

So, as you noticed, all this relies very much on manipulating host
aggregates, metadata and filtering behind the scenes. That's one way of
implementing the whole-host-reservation feature based on the tools we
have at our disposal today. Would a substantial refactoring of Nova and
its scheduler be a better way to go? Is it worth it? We don't know; in
any case, we have no visibility on that.
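As an illustration of the kind of behind-the-scenes aggregate
manipulation involved, here is a minimal sketch using the aggregates
API of python-novaclient. The aggregate name, the metadata key and the
credentials are made up for the example; Climate defines none of them
today.

from novaclient.v1_1 import client

# Admin credentials and endpoint are placeholders.
nova = client.Client("admin", "secret", "admin",
                     "http://controller:5000/v2.0/")

# Create the private reservation pool as a host aggregate and tag it so
# that a scheduler filter can recognize (and exclude) its hosts.
pool = nova.aggregates.create("climate-reservation-pool", None)
nova.aggregates.set_metadata(pool, {"climate:reservation_pool": "true"})

# Move a free host into the reservation pool (hostname is an example).
nova.aggregates.add_host(pool, "compute-01")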

HTH,
Patrick

> Thanks,
> Mike
>
> Patrick Petit <patrick.petit@bull.net> wrote on 10/07/2013 07:02:36 AM:
>
> > Hi Mike,
> >
> > There are actually more facets to this. Sorry if it's a little
> > confusing :-( Climate's original blueprint
> > https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api
> > was about physical host reservation only. The typical use case
> > being: "I want to reserve x number of hosts that match the
> > capabilities expressed in the reservation request". The lease is
> > populated with reservations which at this point are only capacity
> > descriptors. The reservation becomes active only when the lease
> > starts at a specified time and for a specified duration. The lease
> > manager plugin in charge of the physical reservation has a planning
> > of reservations that allows Climate to grant a lease only if the
> > requested capacity is available at that time. Once the lease
> > becomes active, the user can request instances to be created on the
> > reserved hosts using a lease handle as a Nova scheduler hint.
> > That's basically it. We do not assume or enforce how and by whom
> > (Nova, Heat, ...) a resource instantiation is performed. In other
> > words, a host reservation is like a whole host allocation
> > https://wiki.openstack.org/wiki/WholeHostAllocation that is
> > reserved ahead of time by a tenant in anticipation of some
> > workloads that are bound to happen in the future. Note that while
> > we are primarily targeting host reservations, the same service
> > should be offered for storage.
> >
> > Now, Mirantis brought in a slew of new use cases that are targeted
> > toward virtual resource reservation, as explained earlier by Dina.
> > While architecturally both reservation schemes (physical vs.
> > virtual) leverage common components, it is important to understand
> > that they behave differently. For example, Climate exposes an API
> > for the physical resource reservation that the virtual resource
> > reservation doesn't. That's because virtual resources are supposed
> > to be already reserved (through some yet-to-be-created Nova, Heat,
> > Cinder, ... extensions) when the lease is created. Things work
> > differently for the physical resource reservation in that the
> > actual reservation is performed by the lease manager plugin not
> > before the lease is created but when the lease becomes active (or
> > some time before, depending on the provisioning lead time) and
> > released when the lease ends.
> >
> > HTH clarifying things.
> > BR,
> > Patrick
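As a footnote to the workflow described in the quoted message: once the
lease becomes active, requesting instances on the reserved hosts via a
scheduler hint could look roughly like the sketch below. It assumes
python-novaclient; the hint key "reservation" as well as the image,
flavor and lease id values are illustrative assumptions, not a settled
interface.

from novaclient.v1_1 import client

nova = client.Client("demo", "secret", "demo",
                     "http://controller:5000/v2.0/")

image_id = "IMAGE_UUID"   # replace with a real Glance image uuid
flavor_id = "FLAVOR_ID"   # replace with a real flavor id

# Boot an instance, passing the lease handle as a scheduler hint so
# that the (hypothetical) Climate filter routes it onto reserved hosts.
nova.servers.create(
    name="reserved-workload-1",
    image=image_id,
    flavor=flavor_id,
    scheduler_hints={"reservation": "lease-1"},  # assumed hint key
)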
--
Patrick Petit
Cloud Computing Principal Architect, Innovative Products
Bull, Architect of an Open World TM
Tél : +33 (0)4 76 29 70 31
Mobile : +33 (0)6 85 22 06 39
http://www.bull.com