[openstack-dev] Rackspace Plans Contributions to HEAT

Adrian Otto adrian.otto at rackspace.com
Thu Apr 4 05:37:50 UTC 2013


We want a useful Auto-Scale solution in OpenStack, and recognize that Heat already has some capability that could be leveraged. Whether Auto-Scale is offered as a standalone service, by Heat, or (more likely) a combination of both is open for discussion. Duncan's team is focused exclusively on Auto-Scale, whereas our second team is concerned with the first three focus areas I mentioned. So I will look to Duncan to gather input from you on Auto-Scale.

My take:

Auto-scale is a control system. I believe that control systems should be simple and well coordinated. We should be careful not to end up with multiple different control systems scattered about, especially in a complex system like ours, where the controls may actually conflict with each other. Possible solutions include centralized control, or perhaps well-coordinated federated control. Central control is simpler, so I prefer that as a starting point. We could iterate toward something more elaborate as needed.

The central control point may be what you referred to as a "policy engine" and could live in an "Auto-Scale Service", or it could be a feature of Heat. Regardless, we can leverage Heat for workflows like adding and removing Nova instances (recursively, or from an external control point) rather than calling those services directly. Such workflows will typically involve interacting with multiple service APIs. Resist the temptation to bypass Heat in the case of following a workflow. We can make Heat really good for handling that stuff.
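To make the "go through Heat, not Nova directly" idea concrete, here is a minimal sketch: the control point edits the declarative stack model, and Heat's stack-update workflow is what actually adds or removes the Nova instances. The template shape is CloudFormation-style; the resource names and helper function are illustrative, not an existing Heat API.

```python
# Hypothetical sketch: scaling by rewriting the stack template instead of
# calling Nova directly. All names here are illustrative.
import copy

BASE_TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "web0": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": "m1.small", "ImageId": "F17-x86_64"},
        }
    },
}

def scale_template(template, count):
    """Return a copy of the template declaring `count` identical web servers.

    The control point only edits the model; a stack update against the
    engine then performs the multi-API workflow of converging to it.
    """
    new = copy.deepcopy(template)
    prototype = new["Resources"].pop("web0")
    for i in range(count):
        new["Resources"]["web%d" % i] = copy.deepcopy(prototype)
    return new

scaled = scale_template(BASE_TEMPLATE, 3)
print(sorted(scaled["Resources"]))  # ['web0', 'web1', 'web2']
```

The point of the sketch is the separation of concerns: the policy engine decides *how many*, while the orchestration workflow decides *how* to get there across the various service APIs.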

Adrian Otto

On Apr 3, 2013, at 8:20 PM, "Alan Kavanagh" <alan.kavanagh at ericsson.com> wrote:

This is good news, I have to say. I do have some questions, though, prompted by bullet 4 below on Auto-Scale, so I'm going to throw this out for discussion.

Are we now stating that Heat will take care of auto-scaling all OpenStack services? If you consider network services managed and deployed under Quantum, and you have Ceilometer collecting event notifications, then some given “policy engine” would know when to call Quantum and/or Nova, etc., to provision additional resources as needed; you do not need to go through Heat for this. I do see cases where it would make sense for an application deployed via Heat, but for services that are inherent to the infrastructure, such as LB or FW, I do not see why Heat would need to handle the scaling.
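The flow being described above can be sketched as a small dispatch table: a policy engine consumes Ceilometer-style notifications and decides which service to call, whether that is Quantum directly for infrastructure services or Heat for application stacks. Every event type, threshold, and action name below is a made-up placeholder, not a real Ceilometer meter or service call.

```python
# Hypothetical policy-engine sketch. Event types, thresholds, and action
# names are illustrative placeholders, not real Ceilometer/Quantum APIs.

def decide(event, policy):
    """Map a notification to a (service, action) pair, or None if no rule fires."""
    rule = policy.get(event.get("event_type"))
    if rule is None:
        return None
    threshold, service, action = rule
    if event.get("value", 0) > threshold:
        return (service, action)
    return None

policy = {
    # event_type: (threshold, service to call, action to request)
    "loadbalancer.connections": (1000, "quantum", "add_lb_member"),
    "compute.cpu_util": (80, "heat", "stack_update_scale_up"),
}

print(decide({"event_type": "compute.cpu_util", "value": 92}, policy))
# ('heat', 'stack_update_scale_up')
```

Whether the second row should target Heat or Nova directly is exactly the question under discussion in this thread; the table just shows that a single policy engine can route to either.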

I agree that event triggers and notification events must be added to LB in order for this to scale accordingly; I am sure this will come soon ;-) in the LBaaS API.


From: Duncan Mcgreggor [mailto:duncan.mcgreggor at RACKSPACE.COM]
Sent: April-02-13 7:04 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Rackspace Plans Contributions to HEAT

On 4/2/13 3:47 PM, Adrian Otto wrote:

Rackspace has resourced two dedicated development teams for the sole purpose of contributing new features and capabilities to OpenStack's HEAT project. We are very excited, and would like to share with you what we plan to design and contribute together with you:

1) Open API & DSL - This allows templates to be agnostic to the underlying cloud and encourages community contribution for the betterment of all users across all cloud providers. We want a solution that does not depend on semantics from any single service provider. We think there is a way for HEAT to work equally well with CloudFormation templates, and a completely open template format as well.

2) Declarative Model - Although CloudFormation Templates were designed to be declarative, in practice the templates are very imperative artifacts (for example, those that embed shell scripts). Templates that are expressed using a declarative approach are compact, simple, and portable between clouds that have different services available. We want the cloud-implementation-specific details to be handled by modules, not wired into the templates. Declarative modeling encourages broad contribution from the user base to improve the overall community library of available solutions. While such models may be easy to implement, they are more difficult to expand to support generic, cloud-portable use cases.
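A minimal sketch of the separation described above: the template declares *what* is wanted, and a per-cloud module resolves it to concrete images and flavors. The dictionary shapes and role names are purely illustrative assumptions, not a proposed Heat schema.

```python
# Illustrative only: an abstract, declarative template plus per-cloud
# modules that supply the implementation-specific details.

DECLARATIVE = {"web_tier": {"role": "wordpress", "scale": {"min": 1, "max": 5}}}

# Cloud-specific modules: map abstract roles to concrete properties.
CLOUD_A = {"wordpress": {"image": "F17-x86_64", "flavor": "m1.small"}}
CLOUD_B = {"wordpress": {"image": "ubuntu-12.04", "flavor": "standard-2"}}

def materialize(template, cloud_module):
    """Expand an abstract template into cloud-specific resource specs."""
    out = {}
    for name, spec in template.items():
        concrete = dict(cloud_module[spec["role"]])  # copy the module's defaults
        concrete["count"] = spec["scale"]["min"]
        out[name] = concrete
    return out

print(materialize(DECLARATIVE, CLOUD_A)["web_tier"]["image"])  # F17-x86_64
print(materialize(DECLARATIVE, CLOUD_B)["web_tier"]["image"])  # ubuntu-12.04
```

The same template materializes differently on each cloud, which is the portability property the declarative approach is after.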

3) Modular Implementation - We want HEAT to be modular in a way that's consistent with the level of modularity offered in Nova, Quantum, Cinder and others where a common, extendable API is offered and a variety of extensions may be added for various back-end services and features. We want to keep the architecture as simple as possible while allowing individual cloud operators to add features and capabilities in a way that keeps templates crisp and portable.

4) Auto-Scale Implementation - The solution will allow deployments to scale up and down dynamically based on demand. We want to design and implement this with you. We have considerable experience and resources to bring with us. We have a dedicated team to contribute solutions here.
Just to clarify the autoscale bit: we are well aware that there is currently autoscale support in Heat right now, and there's no intent (nor desire!) to reimplement any of that, nor throw anything over the wall ;-)

We had some great chats at PyCon with some folks about Heat and are really looking forward to ODS to dive in more deeply and get to know the current status, project priorities, etc. We've been lurking on IRC and started attending the weekly meetings recently.

There does seem to be some missing integration for monitoring and LBaaS (no surprises there for anyone, as that is all currently under active development), and this is where we want to focus our initial efforts. Well, here as well as advocating for consensus around an autoscale API suitable for consumption by integrators/devops/application devs/etc. We've created a blueprint in LP and proposed a session for discussing some of these things (focused on defining where folks think we are with regard to an AS API and where we want to go with that).

I've got a blog post pending with some more thoughts about this, and that should be up soon. I'll reply with a link when it has been published...

OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org