[openstack-dev] trove and heat integration status

Clint Byrum clint at fewbar.com
Wed Jul 3 03:17:30 UTC 2013


Excerpts from Michael Basnight's message of 2013-07-02 19:04:01 -0700:
> On Jul 2, 2013, at 3:52 PM, Clint Byrum wrote:
> 
> > Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
> >> Howdy,
> >> 
> >> one of the TC requests for integration of Trove was to integrate Heat. While this is a small task for single-instance installations, when we get into clustering it seems a bit more painful. I'd like to submit the following as a place to start the discussion for why we would/wouldn't integrate Heat (now). This is, in NO WAY, to say we will not integrate Heat. It's just a matter of timing and requirements for our 'soon to be' cluster API. I am, however, targeting getting Trove to work in an RPM environment, as it is tied to apt currently.
> > 
> > Hi Michael. I do think that it is very cool that Trove will be making
> > use of Heat for cluster configuration.
> 
> I know it really fits the bill!
> 
> > 
> >> 
> >> 1) Companies that are looking at Trove are not yet looking at Heat, and a hard dependency might stifle growth of the product initially
> >>    • CERN
> > 
> > I'm sure these users don't explicitly want "MySQL" (or whatever DB
> > you use) and "RabbitMQ" (or whatever RPC you use) either, but they
> > are plumbing, and thus things that need to be deployed in the larger
> > architecture.
> 
> Well sure, but I also don't want to block Trove adoption because a company has not investigated Heat. Rabbit and the DB are shared resources between all OpenStack services. Heat and Trove are not.
> 

I do understand that. Heat has some growing up to do before it is in the
same category as those other pieces. Please keep us in the loop about
where you need features and/or bug fixes in Heat.

> > 
> >> 2) homogeneous LaunchConfiguration
> >>    • a database cluster is heterogeneous
> >>    • Our cluster configuration will need to specify different-sized slaves, and allow a customer to upgrade a single slave's configuration
> >>    • the Heat team said that if this is something with a good use case, they could potentially make it happen (not sure of the timeframe)
> > 
> > There's no requirement that you use AWS::EC2::AutoScalingGroup or
> > OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
> > limited. Since all Heat templates are just data structures (expressed
> > as YAML or JSON), you can just maintain an array of instances of the
> > size that you want.
> 
> Oh good!
> 
> > 
> >> 3) have to modify template to scale out
> >>    • This is doable, but it will require hacking a template in code and pushing that template
> >>    • I assume removing a slave will require the same finagling of the template
> >>    • I understand that a better version of this is coming (not sure of timeframe)
> >> 
> > 
> > The word template makes it sound like it is a text-only thing. It is
> > a data structure, and as such, it is quite easy to modify and maintain
> > in code.
> > ...
> > I hope all of that makes some sense. Eventually yes, resizable arrays
> > of servers will be in the new format, HOT, but for now, the CFN method
> > is still useful as you get signals and dependency graph management.
> 
> It does, with one caveat. Can I say slave00001 has a flavor of 512m and slave00002 has a flavor of 2048m? I didn't see that in the example. It's really useful for a reporting slave to be smaller than a master, and for a particular slave to be larger due to any sort of requirement that I can't necessarily dictate!

Of course flavor can differ per server. That is kind of my point: the CFN
template format is fairly low level, making Heat into sort of a really
smart client library for all of OpenStack. So you can really maintain
the list of slaves however you want. You could have ReportingSlave0001
and QuerySlave0002, or just use UUIDs for them and give them names
in Metadata, as in the sketch below.
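
As a rough sketch of what that can look like in practice (the resource
names, image id, flavors, and the trove_role metadata key below are all
invented for illustration, not anything Trove or Heat prescribes), the
template can be built as a plain Python dict, with one resource per
slave and a different flavor on each:

    import json

    # Invented example data: two slaves with different flavors.
    slaves = [
        {"name": "ReportingSlave0001", "flavor": "m1.small"},
        {"name": "QuerySlave0002", "flavor": "m1.large"},
    ]

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Hypothetical Trove replication group",
        "Resources": {},
    }

    # One resource per slave, each with its own flavor (InstanceType).
    for slave in slaves:
        template["Resources"][slave["name"]] = {
            "Type": "AWS::EC2::Instance",
            "Metadata": {"trove_role": "slave"},
            "Properties": {
                "ImageId": "trove-guest-image",   # made-up image name
                "InstanceType": slave["flavor"],
            },
        }

    print(json.dumps(template, indent=2))

The resulting dict gets serialized and handed to Heat like any other
template, so adding or removing a slave is just a dict update followed
by a stack update, rather than hand-editing template text.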


