[openstack-dev] [Climate] Questions and comments

Dina Belova dbelova at mirantis.com
Wed Oct 9 10:34:06 UTC 2013


Mike, I'll try to describe the reservation process for virtual
reservations. I'll use the Nova project as an example.

As I said, this Nova workflow is only an example that may, and certainly
will, be modified for other 'virtual' projects.

1) The user goes to Nova via the CLI/Dashboard and performs the usual
actions to boot an instance. The only difference is that the user passes
reservation-related hints to Nova. In the CLI this request may look like
the following:

nova boot --flavor 1 --image bb3979c2-b2e1-4836-abbc-2ee510064718 --hint
reserved=True --hint lease_params='{"name": "lease1", "start": "now",
"end": "2013-12-1 16:07"}' vm1

If the scheduling process went OK, 'nova list' will show the following:

+--------------------------------------+------+----------+------------+-------------+------------------+
| ID                                   | Name | Status   | Task State | Power State | Networks         |
+--------------------------------------+------+----------+------------+-------------+------------------+
| a7ac3b2e-dca5-4d21-ab37-cd019a813636 | vm1  | RESERVED | None       | NOSTATE     | private=10.0.0.3 |
+--------------------------------------+------+----------+------------+-------------+------------------+

2) The request passes up to the Compute Manager, where the scheduling
process is already done. If the Manager finds reservation-related hints,
it uses the Climate client to create a lease from the parameters passed
to Nova and the ID of the VM to be reserved; Nova also changes the status
of the VM in its DB to 'RESERVED'. If there are no reservation-related
hints in the filter properties, Nova just spawns the instance as usual.
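
A minimal sketch of what such a check could look like, assuming a Climate
client that exposes a lease.create() call; the function name, hint keys
and client API below are illustrative assumptions, not the final
implementation:

    import json

    # Hypothetical hook called from the Compute Manager after scheduling;
    # the climate client's lease.create() API is an assumption.
    def maybe_create_lease(climate, instance_id, filter_properties):
        """Create a Climate lease if reservation hints were passed."""
        hints = (filter_properties or {}).get('scheduler_hints', {})
        if not hints.get('reserved'):
            return None  # no reservation hints: spawn the VM as usual
        params = json.loads(hints['lease_params'])
        # One reservation per lease for now (one-to-one, see step 5).
        return climate.lease.create(
            name=params['name'],
            start=params['start'],
            end=params['end'],
            reservations=[{'resource_id': instance_id,
                           'resource_type': 'virtual:instance'}])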

3) The lease creation request goes to the Climate Lease API via the
Climate client. The Climate Lease API will mostly be used by other
services (like Nova in this example) and by admin users to manage leases
as 'contracts'.

4) The Climate Lease API passes the lease creation request to the Climate
Manager service via RPC. The Climate Manager is the service that
communicates with all resource plugins and with the Climate DB. It
creates the lease record in the DB, all reservation records (for the
instance in this case) and all event records. Even if the user passes no
additional events (like future notifications), at least two events are
created for a lease - the 'start' and 'end' events.
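
Assuming simple DB helper functions (the helper and field names are
illustrative assumptions), the Manager side could look roughly like this:

    # Hypothetical Manager-side lease creation; helper and field names
    # are assumptions for illustration only.
    def create_lease(db, values):
        lease = db.lease_create(values)
        for res in values.get('reservations', []):
            db.reservation_create(dict(res, lease_id=lease['id']))
        # At least the 'start' and 'end' events always exist for a lease.
        for event_type, when in (('start_lease', lease['start']),
                                 ('end_lease', lease['end'])):
            db.event_create({'lease_id': lease['id'],
                             'event_type': event_type,
                             'time': when,
                             'status': 'UNDONE'})
        return lease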

5) One more thing the Manager does is poll the DB periodically to find
out if there are any 'UNDONE' events to be processed. If there is such an
event (for example, the start event of the lease just saved in the DB),
the Manager begins to process it: it sets the event status to
'IN_PROGRESS' and, for every reservation in the lease, runs the
'on_start' actions for that reservation. For now there is a one-to-one
relationship between a lease and its reservation, but we suppose there
may be cases for a one-to-many relationship. The 'on_start' actions are
defined in the resource plugin responsible for the resource type
('virtual:instance' in this example). Plugins are loaded using stevedore,
and the needed ones are listed in the climate.conf file.
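
A minimal sketch of that polling loop, reusing the hypothetical DB
helpers above; the 'climate.resource.plugins' entry-point namespace is an
assumed name:

    import time

    from stevedore import extension

    # Load all resource plugins registered under the assumed namespace.
    plugins = extension.ExtensionManager(
        namespace='climate.resource.plugins', invoke_on_load=True)

    def poll_events(db, interval=60):
        while True:
            event = db.event_get_first(status='UNDONE')
            if event:
                db.event_update(event['id'], {'status': 'IN_PROGRESS'})
                lease = db.lease_get(event['lease_id'])
                for res in lease['reservations']:
                    # Pick the plugin matching the reservation's type,
                    # e.g. 'virtual:instance'.
                    plugin = plugins[res['resource_type']].obj
                    if event['event_type'] == 'start_lease':
                        plugin.on_start(res['resource_id'])
                    else:  # 'end_lease'
                        plugin.on_end(res['resource_id'])
                db.event_update(event['id'], {'status': 'DONE'})
            time.sleep(interval)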

6) "virtual:instance" plugin commits on_start actions. For VM it may be
'wake_up' action, that wakes reserved instance up through Nova API. This
may be implemented using Nova extensions mechanism. Wake up action really
spawns this instance.
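
A sketch of such a plugin; the 'wake_up' server action stands for the
yet-to-be-written Nova extension, not an existing API:

    # Hypothetical 'virtual:instance' resource plugin.
    class VirtualInstancePlugin(object):
        resource_type = 'virtual:instance'

        def __init__(self, nova):
            self.nova = nova  # an authenticated novaclient instance

        def on_start(self, resource_id):
            server = self.nova.servers.get(resource_id)
            # 'wake_up' is the assumed extension action that really
            # spawns the RESERVED instance.
            server.wake_up()

        def on_end(self, resource_id):
            self.nova.servers.delete(resource_id)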

7) If everything is OK, the Manager sets the event status to 'DONE' or
'COMPLETED'.

8) Almost the same process happens when the Manager gets the 'end' event
for the lease from the DB.

Thank you for your attention.

Dina


On Wed, Oct 9, 2013 at 1:01 PM, Patrick Petit <patrick.petit at bull.net> wrote:

>  On 10/9/13 6:53 AM, Mike Spreitzer wrote:
>
> Yes, that helps.  Please, guys, do not interpret my questions as
> hostility; I really am just trying to understand. I think there is some
> overlap between your concerns and mine, and I hope we can work together.
>
> No problem at all. We don't see any sign of hostility. Potential
> collaboration and understanding is really how we perceive your
> questions...
>
>
> Sticking to the physical reservations for the moment, let me ask for a
> little more explicit detail.  In your outline below, late in the game you
> write "the actual reservation is performed by the lease manager plugin".
>  Is that the point in time when something (the lease manager plugin, in
> fact) decides which hosts will be used to satisfy the reservation?
>
> Yes. The reservation service should return only the uuid of a Pcloud
> that is empty. At this point, the description of host capabilities and
> extra specs is defined only as metadata of the Pcloud.
>
> Or is that decided up-front when the reservation is made?  I do not
> understand how the lease manager plugin can make this decision on its
> own; isn't the Nova scheduler also deciding how to use hosts? Why isn't
> there a problem due to two independent allocators making allocations of
> the same resources (the system's hosts)?
>
> The way we are designing it excludes race conditions between the Nova
> scheduler and the lease manager plugin for host reservations, because
> the lease manager plugin will use a private pool of hosts for
> reservation (the reservation pool) that is not shared with the Nova
> scheduler. In our view, this is not a convenience design artifact but a
> purpose. It is because what we'd like to achieve really is
> energy-efficiency management based on a reservation backlog, and
> possibly dynamic management of host resources between the reservation
> pool and the multi-tenant pool. A Climate scheduler filter in Nova will
> do the triage, filtering out those hosts that belong to the reservation
> pool and hosts that are reserved in an active lease (a rough sketch of
> such a filter follows the list below). Another (longer-term) goal behind
> this (actually the primary justification for the reservation pool) is
> that the lease manager plugin could turn machines off to save
> electricity when the reservation backlog allows it, and consequently
> turn them back on when a lease kicks in if that's needed. We anticipate
> that the resource management algorithms / heuristics behind that
> behavior are non-trivial, but we believe it would be hardly achievable
> without a reservation backlog and some form of capacity management
> capabilities left open to the provider. In particular, things become
> much trickier when it comes to deciding what to do with the reserved
> hosts when a lease ends. We foresee a few options:
>
> 1) Forcibly kill the instances running on reserved hosts and move the
> hosts back to the reservation pool for the next lease to come.
> 2) Keep the instances running on the reserved hosts and move them to an
> intermediary "recycling pool" until all the instances die, at which
> point the hosts that are released from duty can return to the
> reservation pool. Cases 1 and 2 could optionally be augmented with a
> grace period.
> 3) Keep the instances running on the reserved hosts and move them to the
> multi-tenant pool. Then it'll be up to the operator to repopulate the
> reservation pool using free hosts. This would require administrative
> tasks like disabling hosts, instance migrations, ... in other words
> certainly a pain if not fully automated.
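>
> A minimal sketch of the Climate scheduler filter mentioned above,
> assuming the host's aggregate metadata is available on host_state and
> that 'climate:' metadata keys and a 'lease_id' scheduler hint mark
> reserved hosts (all of these names are assumptions):
>
>     # Hypothetical Nova scheduler filter; metadata keys and the way
>     # aggregate metadata reaches the filter are assumptions.
>     from nova.scheduler import filters
>
>     class ClimateFilter(filters.BaseHostFilter):
>         """Hide reservation-pool hosts and hosts held by an active
>         lease from ordinary (non-reserved) scheduling requests."""
>
>         def host_passes(self, host_state, filter_properties):
>             hints = filter_properties.get('scheduler_hints') or {}
>             meta = getattr(host_state, 'aggregate_metadata', {})
>             lease = meta.get('climate:lease_id')
>             if lease:
>                 # Host reserved by an active lease: pass only requests
>                 # carrying the matching lease handle.
>                 return hints.get('lease_id') == lease
>             # Hosts parked in the reservation pool are never used for
>             # normal multi-tenant scheduling.
>             return not meta.get('climate:reservation_pool')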
>
> So, you noticed that all this relies very much on manipulating host
> aggregates, metadata and filtering behind the scenes. That's one way of
> implementing the whole-host-reservation feature with the tools we have
> at our disposal today. Would a substantial refactoring of Nova and the
> scheduler be a better way to go? Is it worth it? We don't know. In any
> case, we have zero visibility on that.
>
> HTH,
> Patrick
>
> Thanks,
> Mike
>
> Patrick Petit <patrick.petit at bull.net> wrote on
> 10/07/2013 07:02:36 AM:
>
> > Hi Mike,
> >
> > There are actually more facets to this. Sorry if it's a little
> > confusing :-( Climate's original blueprint
> > https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api
> > was about physical host reservation only. The typical use case being:
> > "I want to reserve x number of hosts that match the capabilities
> > expressed in the reservation request". The lease is populated with
> > reservations which at this point are only capacity descriptors. The
> > reservation becomes active only when the lease starts, at a specified
> > time and for a specified duration. The lease manager plugin in charge
> > of the physical reservation has a planning of reservations that
> > allows Climate to grant a lease only if the requested capacity is
> > available at that time. Once the lease becomes active, the user can
> > request instances to be created on the reserved hosts using a lease
> > handle as a Nova scheduler hint (see the example below). That's
> > basically it. We do not assume or enforce how and by whom (Nova,
> > Heat, ...) a resource instantiation is performed. In other words, a
> > host reservation is like a whole-host allocation
> > https://wiki.openstack.org/wiki/WholeHostAllocation that is reserved
> > ahead of time by a tenant in anticipation of some workload that is
> > bound to happen in the future. Note that while we are primarily
> > targeting host reservations, the same service should be offered for
> > storage. Now, Mirantis brought in a slew of new use cases that are
> > targeted toward virtual resource reservation, as explained earlier by
> > Dina. While architecturally both reservation schemes (physical vs.
> > virtual) leverage common components, it is important to understand
> > that they behave differently. For example, Climate exposes an API for
> > physical resource reservation that the virtual resource reservation
> > doesn't. That's because virtual resources are supposed to be already
> > reserved (through some yet-to-be-created Nova, Heat, Cinder, ...
> > extensions) when the lease is created. Things work differently for
> > physical resource reservation in that the actual reservation is
> > performed by the lease manager plugin not before the lease is created
> > but when the lease becomes active (or some time before, depending on
> > the provisioning lead time), and released when the lease ends.
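> >
> > For example, once the lease is active, booting onto the reserved
> > hosts might look like this (the 'lease_id' hint name and the handle
> > format are hypothetical, nothing is settled yet):
> >
> >     nova boot --flavor 1 --image <image-id> --hint lease_id=<lease-handle> vm1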
> > HTH clarifying things.
> > BR,
> > Patrick
>
>
>
> --
> Patrick Petit
> Cloud Computing Principal Architect, Innovative Products
> Bull, Architect of an Open World TM
> Tél : +33 (0)4 76 29 70 31
> Mobile : +33 (0)6 85 22 06 39
> http://www.bull.com
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.