[openstack-dev] [Heat] HOT Software configuration proposal

Thomas Spatzier thomas.spatzier at de.ibm.com
Thu Oct 24 12:56:37 UTC 2013


Hi all,

maybe a bit off track with respect to the latest concrete discussions, but I
noticed the announcement of project "Solum" on openstack-dev.
Maybe this is playing on a different level, but I still see some relation
to all the software orchestration we are having. What are your opinions on
this?

BTW, I just posted a similar short question in reply to the Solum
announcement mail, but some of us have mail filters and might read [Heat]
mail with higher prio, and I was interested in the Heat view.

Cheers,
Thomas

Patrick Petit <patrick.petit at bull.net> wrote on 24.10.2013 12:15:13:
> From: Patrick Petit <patrick.petit at bull.net>
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>,
> Date: 24.10.2013 12:18
> Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal
>
> Sorry, I clicked the 'send' button too quickly.
>
> On 10/24/13 11:54 AM, Patrick Petit wrote:
> > Hi Clint,
> > Thank you! I have few replies/questions in-line.
> > Cheers,
> > Patrick
> > On 10/23/13 8:36 PM, Clint Byrum wrote:
> >> Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
> >>> Dear Steve and All,
> >>>
> >>> If I may add to this already busy thread to share our experience with
> >>> using Heat in large and complex software deployments.
> >>>
> >> Thanks for sharing Patrick, I have a few replies in-line.
> >>
> >>> I work on a project which precisely provides additional value at the
> >>> articulation point between resource orchestration automation and
> >>> configuration management. We rely on Heat and chef-solo respectively
> >>> for
> >>> these base management functions. On top of this, we have developed an
> >>> event-driven workflow to manage the life-cycles of complex software
> >>> stacks whose primary purpose is to support middleware components as
> >>> opposed to end-user apps. Our use cases are peculiar in the sense that
> >>> software setup (install, config, contextualization) is not a one-time
> >>> operation but a continuous thing that can happen any time in the
> >>> life-span of a stack. Users can deploy (and undeploy) apps long
> >>> after the stack is created. Auto-scaling may also result in
> >>> asynchronous app deployment. More about this later. The framework we
> >>> have designed works well for us. It clearly refers to a PaaS-like
> >>> environment which I understand is not the topic of the HOT software
> >>> configuration proposal(s) and that's absolutely fine with us. However,
> >>> the question for us is whether the separation of software config from
> >>> resources would make our life easier or not. I think the answer is
> >>> definitely yes but at the condition that the DSL extension preserves
> >>> almost everything from the expressiveness of the resource element. In
> >>> practice, I think that a strict separation between resource and
> >>> component will be hard to achieve because we'll always need a little
> >>> bit of application-specific detail in the resources. Take for example
> >>> the case of the SecurityGroups. The ports open in a SecurityGroup are
> >>> application specific.
> >>>
> >> Components can only be made up of the things that are common to all
> >> users
> >> of said component. Also components would, if I understand the concept
> >> correctly, just be for things that are at the sub-resource level.
> >> Security groups and open ports would be across multiple resources, and
> >> thus would be separately specified from your app's component (though it
> >> might be useful to allow components to export static values so that the
> >> port list can be referred to along with the app component).
> Okay got it. If that's the case then that would work....
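> To make that concrete, here is roughly what I picture. The 'components'
> section and its 'outputs' export are purely hypothetical at this point
> (they are only a proposal); the resource part uses today's syntax:

```yaml
# Hypothetical sketch: the component exports the ports it needs, and the
# application-specific security group refers to them. The 'components'
# section and the 'outputs' export are invented to illustrate the idea.
components:
  my_app:
    type: chef-solo
    outputs:
      app_ports: [8080, 8443]

resources:
  my_app_secgroup:
    type: AWS::EC2::SecurityGroup
    properties:
      GroupDescription: Ports required by my_app
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 8080
          ToPort: 8080
          CidrIp: 0.0.0.0/0
```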
> >>
> >>> Then, designing a Chef or Puppet component type may be harder than it
> >>> looks at first glance. Speaking of our use cases we still need a little
> >>> bit of scripting in the instance's user-data block to setup a working
> >>> chef-solo environment. For example, we run librarian-chef prior to
> >>> starting chef-solo to resolve the cookbook dependencies. A cookbook can
> >>> present itself as a downloadable tarball but it's not always the
> >>> case. A
> >>> chef component type would have to support getting a cookbook from a
> >>> public or private git repo (maybe subversion), handle situations where
> >>> there is one cookbook per repo or multiple cookbooks per repo, let the
> >>> user choose a particular branch or label, provide ssh keys if it's a
> >>> private repo, and so forth. We support all of these scenarios and so we
> >>> can provide more detailed requirements if needed.
> >>>
> >> Correct me if I'm wrong though, all of those scenarios are just
> >> variations
> >> on standard inputs into chef. So the chef component really just has to
> >> allow a way to feed data to chef.
> >
> That's correct. It boils down to specifying correctly all the constraints
> that apply to deploying a cookbook in an instance from its component
> description.
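> For illustration, those constraints could surface as component properties
> along these lines (all property names here are invented, not a proposed
> syntax; they just enumerate the cookbook-source scenarios we support):

```yaml
# Invented property names, for illustration only: one way a 'chef'
# component type could cover the cookbook-source scenarios listed above.
components:
  install_middleware:
    type: chef-solo
    cookbook_source:
      kind: git                    # alternatives: tarball, svn
      url: git://example.com/cookbooks.git
      revision: stable/1.2         # branch, tag or label
      path: cookbooks/middleware   # for repos hosting multiple cookbooks
      ssh_key: { get_param: deploy_key }   # for private repos
    run_list:
      - recipe[middleware::default]
```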
> >>
> >>> I am not sure adding component relations like the 'depends-on' would
> >>> really help us since it is the job of config management to handle
> >>> software dependencies. Also, it doesn't address the issue of circular
> >>> dependencies. Circular dependencies occur in complex software stack
> >>> deployments. Example. When we set up a Slurm virtual cluster, both the
> >>> head node and compute nodes depend on one another to complete their
> >>> configuration and so they would wait for each other indefinitely if we
> >>> were to rely on the 'depends-on'. In addition, I think it's critical to
> >>> distinguish between configuration parameters which are known ahead of
> >>> time, like a db name or user name and password, versus
> >>> contextualization
> >>> parameters which are known after the fact generally when the
> >>> instance is
> >>> created. Typically those contextualization parameters are IP addresses
> >>> but not only. The fact that packages x,y,z have been properly installed
> >>> and services a,b,c successfully started is contextualization information
> >>> (a.k.a. facts) which may be indicative that other components can move on
> >>> to the next setup stage.
> >>>
> >> The form of contextualization you mention above can be handled by a
> >> slightly more capable wait condition mechanism than we have now. I've
> >> been suggesting that this is the interface that workflow systems should
> >> use.
> Okay. I am looking forward to seeing what a more capable wait condition
> framework would look like.
> The risk, though, is that by wanting to do too much in the boot sequence
> we end up with a spaghetti plate of wait condition relationships wired
> into the template, which would make it hard to read and the workflow hard
> to debug.
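> For reference, a single such relationship already takes a handle/condition
> pair like the following (real resource types, simplified; the 'head_node'
> instance resource is assumed). Multiply this by every pairwise dependency
> and the template quickly becomes hard to follow:

```yaml
# Simplified wait condition pair: the stack blocks on 'head_done' until
# something on the head node signals the handle's pre-signed URL.
resources:
  head_done_handle:
    type: AWS::CloudFormation::WaitConditionHandle

  head_done:
    type: AWS::CloudFormation::WaitCondition
    depends_on: head_node          # assumes a 'head_node' instance resource
    properties:
      Handle: { get_resource: head_done_handle }
      Timeout: 600
      Count: 1
```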
> >>
> >>> The case of complex deployments with or without circular
> >>> dependencies is
> >>> typically resolved by making the system converge toward the desirable
> >>> end-state through running idempotent recipes. This is our approach. The
> >>> first configuration phase handles parametrization which in general
> >>> brings an instance to CREATE_COMPLETE state. A second phase follows to
> >>> handle contextualization at the stack level. As a matter of fact, a new
> >>> contextualization should be triggered every time an instance enters or
> >>> leaves the CREATE_COMPLETE state, which may happen any time with
> >>> auto-scaling. In that phase, circular dependencies can be resolved
> >>> because all contextualization data can be compiled globally. Notice
> >>> that
> >>> Heat doesn't provide a purpose built resource or service like Chef's
> >>> data-bag for the storage and retrieval of metadata. This is a gap
> >>> which IMO should be addressed in the proposal. Currently, we use a
> >>> kludge that is to create a fake AWS::AutoScaling::LaunchConfiguration
> >>> resource to store contextualization data in the metadata section of
> >>> that resource.
> >>>
> >> That is what we use in TripleO as well:
> >>
> >> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-source.yaml#n143
> >>
> >>
> >> We are not doing any updating of that from within our servers though.
> >> That is an interesting further use of the capability.
> > Right. The problem with that is... that currently it's a kludge ;-)
> > It obscures the readability of the code because the resource is used
> > for an unintended purpose.
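> > For the record, the kludge looks roughly like this (simplified): a
> > launch configuration that never launches anything, kept purely as a
> > metadata store for in-instance agents to poll.

```yaml
# Simplified version of the kludge: the resource exists only so that its
# metadata section can hold contextualization data for agents to poll.
resources:
  contextualization_data:
    type: AWS::AutoScaling::LaunchConfiguration
    metadata:
      head_node_ip: { get_attr: [head_node, PrivateIp] }  # assumes head_node
      deployed_services: []        # updated as facts come in
    properties:
      ImageId: unused              # never actually launched
      InstanceType: unused
```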
> >>> Aside from the HOT software configuration proposal(s). There are two
> >>> critical enhancements in Heat that would make software life-cycles
> >>> management much easier. In fact, they are actual blockers for us.
> >>>
> >>> The first one would be to support asynchronous notifications when an
> >>> instance is created or deleted as a result of an auto-scaling decision.
> >>> As stated earlier, contextualization needs to apply in a stack every
> >>> time an instance enters or leaves the CREATE_COMPLETE state. I am not
> >>> referring to a Ceilometer notification but a Heat notification that can
> >>> be consumed by a Heat client.
> >>>
> >> I think this fits into something that I want for optimizing
> >> os-collect-config as well (our in-instance Heat-aware agent). That is
> >> a way for us to wait for notification of changes to Metadata without
> >> polling.
> > Interesting... If I understand correctly that's kind of a replacement
> > for cfn-hup... Do you have a blueprint pointer or something more specific?
> > While I see the benefits of it, in-instance notification is not
> > really what we are looking for. We are looking for a notification
> > service that exposes an API whereby listeners can register for Heat
> > notifications. AWS Alarming / CloudFormation has that. Why not
> > Ceilometer / Heat? That would be extremely valuable for those who
> > build PaaS-like solutions above Heat. To say it bluntly, I'd like to
> > suggest we explore ways to integrate Heat with Marconi.
> >>
> >>> The second one would be to support a new type of AWS::IAM::User
> >>> (perhaps
> >>> OS::IAM::User) resource whereby one could pass Keystone credentials to
> >>> be able to specify Ceilometer alarms based on application-specific
> >>> metrics (a.k.a. KPIs).
> >>>
> >> It would likely be OS::Keystone::User, and AFAIK this is on the list of
> >> de-AWS-ification things.
> > Great! As I said, it's a blocker for us and we really would like to see
> > it accepted for icehouse.
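> > Concretely, what we need is the native equivalent of this (the AWS-compat
> > resources shown below exist today; an OS::Keystone::User would replace
> > the IAM pair with native Keystone credentials):

```yaml
# Stack-scoped credentials an in-instance agent can use to push
# application metrics and manage alarms; a native Keystone user resource
# would serve the same purpose without the AWS compatibility layer.
resources:
  metrics_user:
    type: AWS::IAM::User

  metrics_key:
    type: AWS::IAM::AccessKey
    properties:
      UserName: { get_resource: metrics_user }

outputs:
  metrics_access_key:
    value: { get_resource: metrics_key }
  metrics_secret_key:
    value: { get_attr: [metrics_key, SecretAccessKey] }
```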
> >>
> >>> I hope this is making sense to you and can serve as a basis for
further
> >>> discussions and refinements.
> >>>
> >> Really great feedback Patrick, thanks again for sharing!
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
>
> --
> Patrick Petit
> Cloud Computing Principal Architect, Innovative Products
> Bull, Architect of an Open World TM
> Tél : +33 (0)4 76 29 70 31
> Mobile : +33 (0)6 85 22 06 39
> http://www.bull.com
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



