[openstack-dev] Climate Incubation Application

Joe Gordon joe.gordon0 at gmail.com
Tue Mar 4 18:50:13 UTC 2014


On Tue, Mar 4, 2014 at 5:25 AM, Dina Belova <dbelova at mirantis.com> wrote:
> Joe, thanks for the discussion.
>
>
>> I think nova should natively support booting an instance for a
>> limited amount of time. I would use this all the time to boot up
>> devstack instances (boot devstack instance for 5 hours)
>
> Really nice idea, but providing time-based resource management for any
> resource type in OpenStack (instance, volume, compute host, Heat stack,
> etc.) would require implementing it in every project. And even with that
> feature implemented, without a central leasing service there are other
> reservation-related capabilities - such as notifying users that a lease
> is about to end, or energy-efficiency features - that do not really fit
> into any single existing project / program.
>

So I understand the use case where I want an instance for x amount of
time, because the cloud model makes compute resources (instances)
ephemeral. But volumes and object storage are explicitly persistent,
so I am not sure why you would want to consume one of those resources
for a finite amount of time.

>
>> Reserved and Spot Instances. I like Amazon's concept of reserved and
>> spot instances; it would be cool if we could support something similar.
>
> AWS reserved instances look like your first idea of instances booted for
> a limited amount of time - although in Amazon's use case that is a *much*
> longer time. As for spot instances, I believe that idea is more about a
> billing service that computes the current instance/host/whatever price
> based on the current compute capacity load, etc.

Actually, you have it backwards:
"Reserved Instances are easy to use and require no change to how you
use EC2. When computing your bill, our system will automatically apply
Reserved Instance rates first to minimize your costs. An instance hour
will only be charged at the On-Demand rate when your total quantity of
instances running that hour exceeds the number of applicable Reserved
Instances you own."
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/


https://aws.amazon.com/ec2/purchasing-options/spot-instances/


>
>> Boot an instance for 4 hours every morning. This sounds like
>> something that
>> https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron
>> can handle.
>
> That's not really something we've implemented in Climate - we have not
> implemented periodic tasks like that. Right now a lease can only be not
> yet started, started, or ended - there are no 'sleeping' periods in
> between. Although it is quite a nice idea to implement this feature
> using Mistral.
>
>
>> Give someone 100 CPU hours per time period of quota. Support quotas
>> by overall usage not current usage. This sounds like something that
>> each service should support natively.
>
> Quotas (if we are speaking about time management) should be satisfied in
> any time period. Right now Climate does that by taking cloud resources
> out of the common pool at lease creation time - but, as you can guess,
> that does not allow "resource reusage" during the period before the
> lease has started. To implement resource reusage, advanced quota
> management is truly needed. That idea was there at the very beginning of
> the Climate project and we definitely need it in the future.

This is the crux of my concern: without "resource reusage" before the
lease has started, I don't see what Climate provides.

How would Climate handle quotas? Currently quotas are up to each
project to manage.
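
To make "quota by overall usage" concrete, I am imagining accounting
roughly like the sketch below - sum the vCPU-hours a tenant's leases
consume inside a window and reject anything that pushes them over the
cap. This is illustration only, not an existing nova or Climate API:

    # Illustration only: time-based quota ("100 vCPU-hours per window")
    # rather than today's instantaneous quotas. Not an existing API.
    from datetime import datetime, timedelta

    CPU_HOUR_QUOTA = 100  # vCPU-hours allowed per window


    def vcpu_hours(leases, window_start, window_end):
        """Sum the vCPU-hours each lease consumes inside the window."""
        total = 0.0
        for lease in leases:
            start = max(lease['start'], window_start)
            end = min(lease['end'], window_end)
            if end > start:
                total += lease['vcpus'] * (end - start).total_seconds() / 3600.0
        return total


    def allow_lease(existing, new, window_start, window_end):
        """Accept the new lease only if the window total stays under quota."""
        return vcpu_hours(existing + [new], window_start, window_end) <= CPU_HOUR_QUOTA


    now = datetime.utcnow()
    existing = [{'vcpus': 4, 'start': now, 'end': now + timedelta(hours=20)}]
    new = {'vcpus': 2, 'start': now + timedelta(hours=24),
           'end': now + timedelta(hours=36)}
    # False: 4 * 20 + 2 * 12 = 104 vCPU-hours, which exceeds the 100 cap.
    print(allow_lease(existing, new, now, now + timedelta(days=7)))

The open question is which service owns that accounting.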

>
>
>> Reserved Volume: Not sure how that works.
>
>
> We're in the process of investigating this too. Ideally it would be some
> kind of volume state that means only a DB record exists, without the
> real block storage being created - the storage would be created only at
> the lease start date. But that requires many changes to Cinder. The
> other idea is to do the same thing Climate does with compute hosts -
> consider cinder-volumes as dedicated to Climate and have Climate manage
> them itself. The reserved volume idea came from thoughts about a
> 'reserved stack' - to reserve a working group like vm+volume+assigned_ip
> for some period of time you really need it.
>

I would like to see a clear roadmap for this with input from the
Cinder team, because I am not sure this really makes much sense.
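
As I read the first option, it is basically deferred creation: store
only a placeholder record at lease creation and call Cinder at the
lease start date. Roughly like the sketch below - the in-memory lease
store is hypothetical, cinderclient's volumes.create() is the only
real call, and the auth signature is the 2014-era one:

    # Sketch of "reserved volume" as deferred creation: keep only a record
    # until lease start, then create the real volume. The pending list is
    # hypothetical; credentials are placeholders.
    from datetime import datetime

    from cinderclient import client

    cinder = client.Client('1', 'demo', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    pending = [
        {'size_gb': 50, 'name': 'reserved-vol',
         'lease_start': datetime(2014, 3, 10, 9, 0), 'volume_id': None},
    ]


    def activate_due_reservations(now=None):
        """Periodic task: create real volumes for leases that have started."""
        now = now or datetime.utcnow()
        for res in pending:
            if res['volume_id'] is None and res['lease_start'] <= now:
                vol = cinder.volumes.create(res['size_gb'],
                                            display_name=res['name'])
                res['volume_id'] = vol.id

If that is the plan, the interesting part is what guarantees the
"reservation" actually buys you, since nothing is held in Cinder until
the lease starts.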

>
>> Virtual Private Cloud. It would be great to see OpenStack support a
>> hardware isolated virtual private cloud, but not sure what the best
>> way to implement that is.
>
> There was a pclouds proposal by Phil Day that was changed after the
> Icehouse summit into something new. The first idea was to use pclouds
> directly, but as they are not implemented yet, Climate works directly
> with host aggregates to imitate them. In the future, when we have the
> opportunity to use pclouds (it does not really matter what they end up
> being called), we will do so, of course.
>

That brings up another point: having a project that imports nova code
directly is bad. You are using non-public, non-contractual APIs that
nova can change at any time.
http://git.openstack.org/cgit/stackforge/climate-nova/tree/climatenova/api/extensions/reservation.py

Having a nova filter that lives in climate
(http://git.openstack.org/cgit/stackforge/climate-nova/tree/climatenova/scheduler/filters/climate_filter.py)
is a no-go from nova's point of view. We make no guarantee that we
won't break your code (we are really good at unintentionally breaking
things too).
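
For anyone not familiar with what such a filter involves: it has to
subclass nova's *internal* BaseHostFilter, roughly like the toy sketch
below (Icehouse-era interface; the 'reservation' scheduler hint is made
up purely for illustration and is not what climate_filter.py actually
does). That interface is exactly what we reserve the right to change
without warning:

    # Toy out-of-tree scheduler filter against nova's *internal* filter
    # interface (Icehouse-era signature). Nova gives no stability guarantee
    # for BaseHostFilter or filter_properties, which is the point above.
    from nova.scheduler import filters


    class ReservationAwareFilter(filters.BaseHostFilter):
        """Pass a host only if it matches the (made-up) reservation hint."""

        def host_passes(self, host_state, filter_properties):
            # 'reservation' is a hypothetical scheduler hint used only
            # for illustration; it is not defined by nova or climate.
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('reservation')
            return wanted is None or wanted == host_state.host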

>
>> Capacity Planning. Sure, but it would be nice to see a more fleshed
>> out story for it.
>
> Sure. I believe that having the opportunity for resource reusage (when
> lease creation and resource allocation are no longer the same step) will
> help manage future peak capacity loads - because the cloud provider will
> know about future user needs before the resources are actually used.


If Climate is trying to help with Capacity Planning, that is not a
very comprehensive answer.
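
A more fleshed out story would at least show how known leases feed
capacity projections, i.e. something like the arithmetic below
(illustration only, not a Climate API):

    # Illustration only: with leases known in advance (start, end, vcpus),
    # an operator can project peak demand over a planning horizon instead
    # of guessing. Just the arithmetic the claim relies on.
    from datetime import datetime, timedelta


    def projected_peak_vcpus(leases, horizon_start, horizon_end, step_hours=1):
        """Walk the horizon in fixed steps and track the largest active sum."""
        peak = 0
        step = timedelta(hours=step_hours)
        t = horizon_start
        while t <= horizon_end:
            active = sum(l['vcpus'] for l in leases if l['start'] <= t < l['end'])
            peak = max(peak, active)
            t += step
        return peak


    now = datetime.utcnow()
    leases = [
        {'vcpus': 16, 'start': now + timedelta(days=1), 'end': now + timedelta(days=3)},
        {'vcpus': 8, 'start': now + timedelta(days=2), 'end': now + timedelta(days=4)},
    ]
    # 24: both leases overlap between day 2 and day 3 of the horizon.
    print(projected_peak_vcpus(leases, now, now + timedelta(days=7)))

The hard part is everything around that: how far ahead leases are
placed, how often they are honored, and what happens when the
projection is wrong.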

>
>
> Cheers
>
> Dina
>
>
>
> On Tue, Mar 4, 2014 at 12:30 AM, Joe Gordon <joe.gordon0 at gmail.com> wrote:
>>
>> Overall I think Climate is trying to address some very real use cases,
>> but it's unclear to me where these solutions should live or how to
>> solve them. Furthermore, I understand what a reservation means for nova,
>> but I am not sure what it means in Cinder, Swift, etc.
>>
>> To give a few examples:
>> * I think nova should natively support booting an instance for a
>> limited amount of time. I would use this all the time to boot up
>> devstack instances (boot devstack instance for 5 hours)
>> * Reserved and Spot Instances. I like Amazon's concept of reserved and
>> spot instances; it would be cool if we could support something similar.
>> * Boot an instance for 4 hours every morning. This sounds like
>> something that
>> https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron
>> can handle.
>> * Give someone 100 CPU hours per time period of quota. Support quotas
>> by overall usage not current usage. This sounds like something that
>> each service should support natively.
>> * Reserved Volume: Not sure how that works.
>> * Virtual Private Cloud.  It would be great to see OpenStack support a
>> hardware isolated virtual private cloud, but not sure what the best
>> way to implement that is.
>> * Capacity Planning. Sure, but it would be nice to see a more fleshed
>> out story for it.
>>
>>
>> It would be nice to see more of these use cases discussed.
>>
>>
>> On Mon, Mar 3, 2014 at 11:16 AM, Joe Gordon <joe.gordon0 at gmail.com> wrote:
>> > On Mon, Mar 3, 2014 at 10:43 AM, Sean Dague <sean at dague.net> wrote:
>> >> On 03/03/2014 01:35 PM, Joe Gordon wrote:
>> >>> On Mon, Mar 3, 2014 at 10:01 AM, Zane Bitter <zbitter at redhat.com>
>> >>> wrote:
>> >>>> On 03/03/14 12:32, Joe Gordon wrote:
>> >>>>>>
>> >>>>>>> - if you're reserving resources far before you'll need them,
>> >>>>>>> it'll be cheaper
>> >>>>>
>> >>>>> Why? How does this save a provider money?
>> >>>>
>> >>>>
>> >>>> If an operator has zero information about the level of future demand,
>> >>>> they
>> >>>> will have to spend a *lot* of money on excess capacity or risk
>> >>>> running out.
>> >>>> If an operator has perfect information about future demand, then they
>> >>>> need
>> >>>> spend no money on excess capacity. Everywhere in between, the amount
>> >>>> of
>> >>>> extra money they need to spend is a non-increasing function of the
>> >>>> amount of
>> >>>> information they have. QED.
>> >>>
>> >>> Sure, if an operator has perfect information about future demand they
>> >>> won't need any excess capacity. But assuming you know some future
>> >>> demand, how do you figure out how much of the future demand you know?
>> >>> I can see this as a potential money saver, but it is unclear by how
>> >>> much. The Amazon model for this is a reservation of at minimum a year;
>> >>> I am not sure how useful short-term reservations would be in
>> >>> determining future demand.
>> >>
>> >> There are other useful things with reservations though. In a private
>> >> context the classic one is running numbers for close of business. Or a
>> >> software team that's working towards a release might want to
>> >> preallocate resources for longer scale runs during a particular week.
>> >
>> > Why can't they pre-allocate now?
>> >
>> >>
>> >> Reservations can really be about global policy, giving some tenants
>> >> more priority in getting resources than others (because you
>> >> pre-allocate them).
>> >>
>> >> I also know that with a lot of the HPC teams using OpenStack, this is a
>> >> fundamental part of scheduling. Not just the when, but the how long.
>> >> Having systems automatically get reaped after a certain amount of time
>> >> is something they very much want.
>> >
>> > Agreed, I think this should be part of either Nova or Heat directly.
>> >
>> >>
>> >> So I think the general idea has merit. I just think we need to make
>> >> sure it integrates well with the rest of OpenStack, which I believe
>> >> means strong coupling to the scheduler.
>> >>
>> >>         -Sean
>> >>
>> >> --
>> >> Sean Dague
>> >> Samsung Research America
>> >> sean at dague.net / sean.dague at samsung.com
>> >> http://dague.net
>> >>
>> >>
>>
>
>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>
>


