[openstack-dev] [Heat] TOSCA, CAMP, CloudFormation, ???

Clint Byrum clint at fewbar.com
Fri Apr 12 23:38:35 UTC 2013

On 2013-04-12 15:36, Adrian Otto wrote:
> Clint,
> On Apr 12, 2013, at 2:22 PM, Clint Byrum <clint at fewbar.com>
>  wrote:
>> On 2013-04-12 08:45, Adrian Otto wrote:
>>> This proposal will utilize the existing Heat agent. We currently use
>>> SSH keypair injection on the API call to the Cloud Servers API to
>>> bootstrap compute nodes in the simple case. The idea is to leave that
>>> open to handle whatever the system Provider modules want to
>>> instrument. Please recognize that we want this DSL to work regardless
>>> of what the underlying hardware/cloud infrastructure is. It can work
>>> on your laptop with vagrant, with OpenStack, with a public cloud… that
>>> should not matter. The vendor specific implementations all go into the
>>> Provider plug-ins. The idea is to enable decentralized implementation
>>> of vendor-specific systems, and centralized sharing of best practices
>>> for application deployment. Imagine an OpenStack community repo of
>>> Heat Blueprints where everyone can publish their best practices.
>> So what I think you're saying is, Heat would have some intrinsic 
>> providers for compute, object storage, block storage, etc. Users would 
>> somehow be able to define their own providers inside compute servers 
>> via an API, which could then be referenced directly in the DSL? So if 
>> I want memcached, I write a provider definition for it, and somehow 
>> deliver it into the compute node and notify Heat of its presence?
>> The spec is really unclear on how providers are defined (or I'm just 
>> overloaded with specs and can't comprehend it), but I am actually 
>> pretty excited to see a system that allows users to reference and 
>> manage compute-hosted things at the same level as "under the cloud" 
>> resources.
> Yes, you've got the idea. In the case of memcached, you might just
> want to build that up from compute instances, so you might not even
> involve a provider for that; it could simply be specified as a
> Component. That's no different from what Heat can already do
> today. Here is another use case where Providers can make a lot of
> sense:
> Goal: Deploy my app on my chosen OpenStack based public cloud, using
> Heat for Dev/Test. For Production, deploy to my private OpenStack
> cloud.
> Scenario: My app depends on a MySQL database. My public cloud
> provider has a hosted "mysql" service that is offered through a
> Provider plug-in. It's there automatically because my cloud hosting
> company put it there.  I deploy, and finish my testing on the public
> cloud. I want to go to production now.
> Solution: The Provider gives you a way to abstract the different
> cloud implementations. I establish an equivalent Provider on my
> private OpenStack cloud, using RedDwarf to offer "mysql" there. Now
> the same setup works on both clouds, even though the API for my
> local "mysql" service may actually differ from the database
> provisioning API in the public cloud. I then deploy on my
> "production" Environment in my private cloud, and it works!
> Not to blow your mind too much, but in the use case above, we assume
> that each cloud has its own hosted Heat service that speaks the Open
> API+DSL. You could also use one of the Heat services, and *not* the
> other. You define two Environments. One is for "dev/test" and uses
> Providers on the public cloud. The other is for "production", and it
> uses Providers on the private cloud. Now you can just decide where
> stuff gets provisioned by selecting the appropriate Environment. You
> might decide to just use the hosted Heat service from your public
> cloud, even when you are deploying into your private cloud.

Mind not blown, this makes sense. Providers are an abstraction for 
implementation details.
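To sketch what that selection could look like in practice (a hypothetical example; the resource type name, file paths, and URLs below are made up for illustration), I could imagine two environment files that map the same logical resource type to different Provider templates:

```yaml
# dev_test.env.yaml -- hypothetical Environment for the public cloud:
# the abstract "mysql" resource resolves to the hosted database Provider.
resource_registry:
  "My::DB::MySQL": "https://public.example.com/providers/hosted_mysql.yaml"
```

```yaml
# production.env.yaml -- hypothetical Environment for the private cloud:
# the same abstract resource resolves to a RedDwarf-backed Provider.
resource_registry:
  "My::DB::MySQL": "file:///etc/heat/providers/reddwarf_mysql.yaml"
```

The application template would only ever reference "My::DB::MySQL"; picking an Environment picks the Provider, which is exactly the portability story described above.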

> If that makes sense, you can take that idea a step further, and
> actually set up Environments that mix both public and private cloud
> infrastructure. Maybe you use provider A for your mission critical
> (persistent, HA) Components, and provider B for nightly batch
> processing jobs. Same Environment. Arbitrary number of clouds.
> Without a concept like Providers, application portability between
> public and private clouds can get a bit more convoluted. You may end
> up doing things like constructing the "mysql" service on top of
> compute nodes using Components for the sake of portability, but this
> can cause you to sacrifice performance by using the general-purpose
> compute instances you find in a typical nova compute service, rather
> than the more performance-tuned hosted database service your public
> cloud may offer you.

I can definitely see where the current template format will suffer from 
portability issues between private and public clouds as each moves 
forward with versions of Heat at a different pace.
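To make the portability point concrete, here is a minimal sketch of the application side (the resource type name and properties are hypothetical): the template references only the abstract database resource, so the same file could be deployed unchanged against either cloud.

```yaml
# app.template.yaml -- hypothetical; one template for both clouds.
resources:
  app_db:
    # Resolved to a concrete Provider by whichever Environment is in use;
    # the template itself stays cloud-agnostic.
    type: "My::DB::MySQL"
    properties:
      flavor: medium
      database_name: appdb
```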
