[openstack-dev] trove and heat integration status

Clint Byrum clint at fewbar.com
Tue Jul 2 22:52:50 UTC 2013


Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
> Howdy,
> 
> one of the TC requests for integration of trove was to integrate heat. While this is a small task for single instance installations, when we get into clustering it seems a bit more painful. I'd like to submit the following as a place to start the discussion for why we would/wouldn't integrate heat (now). This is, in NO WAY, to say we will not integrate heat. It's just a matter of timing and requirements for our 'soon to be' cluster api. I am, however, targeting getting trove to work in an rpm environment, as it is tied to apt currently.

Hi Michael. I do think that it is very cool that Trove will be making
use of Heat for cluster configuration.

> 
> 1) Companies who are looking at trove are not yet looking at heat, and a hard dependency might stifle growth of the product initially
>     • CERN

I'm sure these users don't explicitly want "MySQL" (or whatever DB
you use) and "RabbitMQ" (or whatever RPC you use) either, but they
are plumbing, and thus things that need to be deployed in the larger
architecture.

> 2) homogeneous LaunchConfiguration
>     • a database cluster is heterogeneous
>     • Our cluster configuration will need to specify different sized slaves, and allow a customer to upgrade a single slave's configuration
>     • heat said if this is something that has a good use case, they could potentially make it happen (not sure of timeframe)

There's no requirement that you use AWS::EC2::AutoScalingGroup or
OS::Heat::InstanceGroup. In fact, I find them rather cumbersome and
limited. Since all Heat templates are just data structures (expressed
as YAML or JSON), you can simply maintain an array of instances of
whatever sizes you want.
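
To make that concrete, here is a minimal sketch of building such a
template as a plain Python dict. The helper name (make_instance) and
the flavor names are illustrative, not anything from Trove or Heat
itself -- the point is only that a heterogeneous cluster is just a
loop appending to a dict:

```python
# Illustrative sketch: a Heat/CFN template is just a data structure.
# make_instance() and the flavor names are made up for this example.
import json

def make_instance(flavor, image="trove-mysql"):
    """Build one AWS::EC2::Instance resource as a plain dict."""
    return {
        "Type": "AWS::EC2::Instance",
        "Properties": {"InstanceType": flavor, "ImageId": image},
    }

template = {"Resources": {}}
# Different sizes per node -- no homogeneous LaunchConfiguration needed.
for name, flavor in [("Master0000", "reallyfastandlotsofram"),
                     ("Slave0000", "kindofawesome"),
                     ("Slave0001", "kindofawesome")]:
    template["Resources"][name] = make_instance(flavor)

print(json.dumps(template, indent=2))
```

Serializing with json.dumps at the end is all it takes to turn the
structure back into something you can hand to the Heat API.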

> 3) have to modify template to scale out
>     • This is doable, but it will require hacking a template in code and pushing that template
>     • I assume removing a slave will require the same finagling of the template
>     • I understand that a better version of this is coming (not sure of timeframe)
> 

The word "template" makes it sound like a text-only thing. It is
a data structure, and as such it is quite easy to modify and maintain
in code.

As an example, let's say you have this as your template for a database cluster:

# in yaml
Resources:
  Master0000ReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  Master0000ReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: Master0000
    Properties:
      Handle: {Ref: Master0000ReadyWaitCondHandle}
      Timeout: 120
  Master0000:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: reallyfastandlotsofram
      ImageId: trove-mysql
    Metadata:
      ReadyWaitCondition: {Ref: Master0000ReadyWaitCondHandle}


This will boot up with the reference image, and if you have appropriate
software (such as heat-cfntools, or os-apply-config and os-refresh-config)
in your image, it will read the Metadata section and signal the wait
condition handle back to Heat. With that, Trove can poll the status of
Master0000ReadyWaitCond to find out whether the master is ready.
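
That polling loop could look something like the sketch below. The
get_status callable stands in for a real client call (for example,
python-heatclient's resources.get() returning resource_status); the
function name and status strings here follow Heat's CREATE_* states
but the wiring is an assumption, not Trove code:

```python
# Hedged sketch of polling a wait-condition resource until it signals.
# get_status is a caller-supplied callable standing in for a real
# Heat client call; this is not actual Trove or heatclient code.
import time

def wait_for_ready(get_status, timeout=120, interval=5):
    """Poll get_status() until the resource completes, fails, or times out.

    Returns True on CREATE_COMPLETE, False on timeout, and raises on
    CREATE_FAILED.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == 'CREATE_COMPLETE':
            return True
        if status == 'CREATE_FAILED':
            raise RuntimeError('wait condition failed')
        time.sleep(interval)
    return False
```

Injecting the status lookup keeps the loop trivially testable without
a running Heat endpoint.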

Now, to add a slave:

# in yaml
Resources:
  Master0000ReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Timeout: 120
  Master0000ReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
    Properties:
      WaitConditionName: {Ref: Master0000ReadyWaitCond}
  Master0000:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: reallyfastandlotsofram
      ImageId: trove-mysql
    Metadata:
      ReadyWaitCondition: {Ref: Master0000ReadyWaitCondHandle}
      Users:
        - name: slave0000
        - password: some_random_string
  Slave0000ReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Timeout: 120
  Slave0000ReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
    Properties:
      WaitConditionName: {Ref: Slave0000ReadyWaitCond}
  Slave0000:
    Type: AWS::EC2::Instance
    DependsOn: Master0000ReadyWaitCond
    Properties:
      InstanceType: kindofawesome
      ImageId: trove-mysql
    Metadata:
      Master:
        Address: {Fn::GetAtt: [Master0000, PrivateIp]}
        User:
          name: slave0000
          password: some_random_string
      ReadyWaitCondition: {Ref: SlaveReadyWaitCondHandle}


I hope all of that makes some sense. Eventually, yes, resizable arrays
of servers will be available in the new template format, HOT, but for
now the CFN method is still useful, as you get signals and dependency
graph management.
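
And since adding a slave is just a diff between the two data
structures above, the scale-out step can be a small function. This is
an illustrative sketch, not Trove code -- add_slave and the password
placeholder are made up, and in practice you would push the updated
structure via a stack-update call:

```python
# Illustrative sketch: scale out by appending a slave (plus its wait
# condition) to an existing template dict. add_slave is hypothetical.
def add_slave(template, index, flavor, master="Master0000"):
    """Append a slave instance and its wait condition resources."""
    n = "Slave%04d" % index
    res = template["Resources"]
    res[n + "ReadyWaitCondHandle"] = {
        "Type": "AWS::CloudFormation::WaitConditionHandle"}
    res[n + "ReadyWaitCond"] = {
        "Type": "AWS::CloudFormation::WaitCondition",
        "DependsOn": n,
        "Properties": {"Handle": {"Ref": n + "ReadyWaitCondHandle"},
                       "Timeout": 120}}
    res[n] = {
        "Type": "AWS::EC2::Instance",
        # Don't start replication setup until the master has signaled.
        "DependsOn": master + "ReadyWaitCond",
        "Properties": {"InstanceType": flavor, "ImageId": "trove-mysql"},
        "Metadata": {
            "Master": {"Address": {"Fn::GetAtt": [master, "PrivateIp"]}},
            "ReadyWaitCondition": {"Ref": n + "ReadyWaitCondHandle"}}}
    return template

template = {"Resources": {}}
add_slave(template, 0, "kindofawesome")
```

Removing a slave is the inverse: delete the three resources and push
the updated template.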

