[openstack-dev] [Heat] rough draft of Heat autoscaling API

Zane Bitter zbitter at redhat.com
Mon Nov 18 09:00:47 UTC 2013


On 17/11/13 21:57, Steve Baker wrote:
> On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
>> http://docs.heatautoscale.apiary.io/
>>
>> I've thrown together a rough sketch of the proposed API for
>> autoscaling. It's written in API-Blueprint format (which is a simple
>> subset of Markdown) and provides schemas for inputs and outputs using
>> JSON-Schema. The source document is currently
>> at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>
> Apologies if I'm about to re-litigate an old argument, but...
>
> At summit we discussed creating a new endpoint (and new pythonclient)
> for autoscaling. Instead I think the autoscaling API could just be added
> to the existing heat-api endpoint.

-1

> Arguments for just making autoscaling part of the heat API include:
> * Significantly less development, packaging and deployment configuration,
> since there is no separate heat-autoscaling-api or python-autoscalingclient to create

Having a separate endpoint does not necessarily mean creating 
heat-autoscaling-api. We can have two endpoints in the keystone catalog 
pointing to the same API process. I always imagined that this would be 
the first step.
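
As a rough sketch (assuming the python-keystoneclient v2.0 API, and with 
the 'autoscaling' service type, host name and path prefix entirely made 
up by me), registering both catalog entries against a single heat-api 
process might look something like this:

    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin', password='secret',
                             tenant_name='admin',
                             auth_url='http://keystone:5000/v2.0')

    # One heat-api process on port 8004; both catalog entries resolve to it.
    orch = keystone.services.create(name='heat',
                                    service_type='orchestration',
                                    description='Heat Orchestration API')
    asvc = keystone.services.create(name='heat-autoscaling',
                                    service_type='autoscaling',  # hypothetical type
                                    description='Heat Autoscaling API')

    url = 'http://heat-host:8004/v1/%(tenant_id)s'
    keystone.endpoints.create(region='RegionOne', service_id=orch.id,
                              publicurl=url, adminurl=url, internalurl=url)
    keystone.endpoints.create(region='RegionOne', service_id=asvc.id,
                              publicurl=url + '/autoscaling',  # made-up prefix
                              adminurl=url + '/autoscaling',
                              internalurl=url + '/autoscaling')

Clients would then discover autoscaling by service type, and whether it 
is served by the same process or a separate one becomes purely a 
deployment decision.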

It doesn't necessarily require a python-scalingclient either, although I 
would lean toward having one.

> * Autoscaling is orchestration (for some definition of orchestration) so
> belongs in the orchestration service endpoint
> * The autoscaling API includes heat template snippets, so a heat service
> is a required dependency for deployers anyway
> * End-users are still free to use the autoscaling portion of the heat
> API without necessarily being aware of (or directly using) heat
> templates and stacks
> * It seems acceptable for a single endpoint to manage many resources (e.g.
> the increasingly disparate list of resources available via the neutron API)
>
> Arguments for making a new autoscaling API include:
> * Autoscaling is not orchestration (for some narrower definition of
> orchestration)
> * Autoscaling implementation will be handled by something other than
> heat engine (I have assumed the opposite)
> (no doubt this list will be added to in this thread)
>
> What do you think?

I support a separate endpoint because it gives us more options in the 
future. We may well reach a point where we decide that autoscaling 
belongs in a separate project (not program), but that option is 
foreclosed to us if we combine it into the same endpoint. Personally, I 
think it would be great if we could eventually reduce the coupling 
between autoscaling and Heat to the point where that would be possible.

IMO we should also give providers the flexibility to deploy only 
autoscaling publicly and deploy Heat only for internal access (i.e. by 
services like autoscaling, Trove, Savanna, &c.).
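
Continuing the hypothetical sketch above (same service objects, made-up 
host names), a provider could publish the autoscaling endpoint on a 
public address while pointing the Heat endpoint at a host that is only 
reachable on the management network:

    # api.cloud.example.com is public; heat.mgmt.example.local is internal-only.
    as_url = 'https://api.cloud.example.com:8004/v1/%(tenant_id)s/autoscaling'
    heat_url = 'http://heat.mgmt.example.local:8004/v1/%(tenant_id)s'

    keystone.endpoints.create(region='RegionOne', service_id=asvc.id,
                              publicurl=as_url, adminurl=as_url,
                              internalurl=as_url)
    keystone.endpoints.create(region='RegionOne', service_id=orch.id,
                              publicurl=heat_url,  # not routable from outside
                              adminurl=heat_url, internalurl=heat_url)

End users would then only ever see the autoscaling API, while Heat 
itself stays an internal implementation detail.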

In short, we live in an uncertain world, and more options in the future 
beat fewer. The cost of keeping these options open does not appear high 
to me.

cheers,
Zane.



