[openstack-dev] Climate Incubation Application
Sylvain Bauza
sylvain.bauza at gmail.com
Mon Mar 3 18:13:13 UTC 2014
Hi Joe,
2014-03-03 18:32 GMT+01:00 Joe Gordon <joe.gordon0 at gmail.com>:
>
>
> This sounds like something that belongs in nova, Phil Day has an
> elegant solution for this:
> https://blueprints.launchpad.net/nova/+spec/whole-host-allocation
>
>
This blueprint has already been addressed by the Climate team, and we
discussed it with Phil directly.
The blueprint was recently abandoned by its author, and Phil is trying
to focus on dedicated instances instead.
As we identified this blueprint as not yet supported, we implemented its
logic directly within Climate. That said, don't confuse two different
things:
- the locking process for isolating one compute node to a single tenant:
that should be done in Nova
- the timetable for scheduling hosts and electing which ones are
appropriate: that must be done in Climate (and in the future, it should use
Gantt as an external scheduler for electing from a pool of hosts available
in that timeframe)
I'm not saying that resource isolation must be done within Climate: I'm
definitely convinced that this logic should live at the resource project
level (Nova, Cinder, Neutron), and that Climate should use their
respective CLIs to request isolation.
The overall layer for defining what will be available when, and what the
dependencies between projects are, still relies on a shared service, which
is Climate.
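To make the "timetable" part concrete, here is a minimal sketch of the kind
of availability check such a planner performs. All names here are
illustrative, not Climate's actual code: a reservation blocks a host for a
half-open window [start, end), and a host is electable for a new lease only
if no existing reservation on it overlaps the requested window.

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open intervals [start, end) overlap iff each one
    starts before the other ends."""
    return start_a < end_b and start_b < end_a

def free_hosts(hosts, reservations, start, end):
    """Return hosts with no reservation overlapping [start, end).

    `reservations` maps a host name to a list of (start, end) tuples
    for the leases already booked on that host.
    """
    return [h for h in hosts
            if not any(overlaps(start, end, s, e)
                       for s, e in reservations.get(h, []))]

hosts = ["compute-1", "compute-2"]
reservations = {
    # compute-1 is already leased for a working day
    "compute-1": [(datetime(2014, 3, 4, 9), datetime(2014, 3, 4, 17))],
}
print(free_hosts(hosts, reservations,
                 datetime(2014, 3, 4, 12), datetime(2014, 3, 4, 14)))
# compute-1 is booked over that window, so only compute-2 is electable
```

This is exactly the decision an external scheduler like Gantt could be
asked to make over a pool of enrolled hosts.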
>
> Heat?
>
> I spin up dev instances all the time and have never had this problem
> in part because if I forget my quota will remind me.
>
>
How do you ensure that you won't run out of resources when firing up an
instance in 3 days? How can you guarantee that in the next couple of
days you will be able to create a volume with 50 GB of space?
I'm not saying that the current Climate implementation does all the work.
I'm just saying it's Climate's duty to look at the Quality of Service
aspects of resource allocation (or SLAs, if you prefer).
>
> Why does he need to reserve them in the future? When he wants an
> instance can't he just get one? As Sean said, what happens if someone
> has no free quota when the reservation kicks in?
>
>
It's the role of the resource plugin to manage capacity and ensure
everything can be charged correctly.
The virtual instance plugin logic can certainly be improved, but note that
the instance is already created (though not spawned) when the lease is
created, so the quota is already decreased by one.
With the compute hosts plugin, we manage availability thanks to a resource
planner based on a fixed set of resources (compute hosts enrolled within
Climate), so we can almost guarantee this (minus host outages, of course).
>
> How is this different from 'nova boot?' When nova boot finishes the VM
> is completely ready to be used
>
>
Nova boot creates the VM immediately when the command is issued, while the
proposal here is to defer the boot itself until the lease start (which can
happen far later than when the lease is created).
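The difference can be sketched in a few lines. This is only an illustrative
model, not Climate's real implementation: the lease records a start time,
and a periodic task fires the deferred boot action once that time is
reached.

```python
from datetime import datetime

class Lease:
    def __init__(self, name, start, action):
        self.name = name
        self.start = start       # when the reserved resource becomes active
        self.action = action     # the deferred work, e.g. spawning the VM
        self.started = False

    def maybe_start(self, now):
        """Run the deferred action once the lease window opens."""
        if not self.started and now >= self.start:
            self.action()
            self.started = True
        return self.started

booted = []
lease = Lease("dev-instance",
              start=datetime(2014, 3, 6, 9, 0),
              action=lambda: booted.append("dev-instance"))

lease.maybe_start(datetime(2014, 3, 5, 9, 0))   # too early: nothing happens
lease.maybe_start(datetime(2014, 3, 6, 9, 0))   # lease starts: VM spawns
print(booted)  # ['dev-instance']
```

With `nova boot`, the action would run at creation time; with a lease, the
creation and the activation are two separate events.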
>
> > - if you're reserving resources far before you'll need them, it'll be
> > cheaper
>
> Why? How does this save a provider money?
>
>
From a cloud operator's point of view, don't you think it's preferable to
get feedback about future capacity needs?
Don't you think it would be interesting for them to offer a business model
like this?
>
> "Reserved Instances provide a capacity reservation so that you can
> have confidence in your ability to launch the number of instances you
> have reserved when you need them."
> https://aws.amazon.com/ec2/purchasing-options/reserved-instances/
>
> Amazon does guarantee the resource will be available. Amazon can
> guarantee the resource because they can terminate spot instances at
> will, but OpenStack doesn't have this concept today.
>
>
That's why we think there is a need for guaranteeing resource allocation
within OpenStack.
Spot instances could be built on top of Climate, as a formal contract for
reserving resources that can be freed if needed.
-Sylvain