[openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

Thomas Spatzier thomas.spatzier at de.ibm.com
Mon Apr 28 13:41:41 UTC 2014


Excerpts from Steve Baker's message on 28/04/2014 01:25:29:

<snip>
> #1 Enable software components for full lifecycle:
<snip>
> So in a short, stripped-down version, a SoftwareConfig could look like:
>
> my_sw_config:
>   type: OS::Heat::SoftwareConfig
>   properties:
>     create_config: # the hook for software install
>     suspend_config: # hook for suspend action
>     resume_config: # hook for resume action
>     delete_config: # hook for delete action
>
<snip>
>
> OS::Heat::SoftwareConfig itself needs to remain ignorant of heat
> lifecycle phases, since it is just a store of config.

Sure, I agree on that. SoftwareConfig is just a store of config that gets
used by another resource which then deals with Heat's lifecycle.
What I was proposing would not actually make it lifecycle aware; it would
just let the user store the respective config pieces so that a software
deployment can later execute them at the respective lifecycle steps.

>
> Currently there are 2 ways to build configs which are lifecycle aware:
> 1. have a config/deployment pair, each with different deployment actions
> 2. have a single config/deployment, and have the config script do
> conditional logic
>    on the derived input value deploy_action
>
> Option 2 seems reasonable for most cases, but having an option which
> maps better to TOSCA would be nice.

So option 2 sounds like the right thing to me. The only thing is that I
would not want to put all the logic into one large script with conditional
handling, but rather break the script into parts and let the framework do
the conditional handling. My snippet above would then just tell the deploy
logic which script to call when.
Most of the real work would probably be done in the in-instance tool, so
the Heat resource would really "just" allow for storing data in a
well-defined structure.
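
Just for illustration, a minimal sketch of what option 2 could look like
with today's resources (names are made up, and I am assuming the script
hook exposes derived inputs like deploy_action as environment variables):

my_sw_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/sh
      # dispatch on the deploy_action derived input
      case $deploy_action in
        CREATE)  /opt/my_app/install.sh ;;
        SUSPEND) /opt/my_app/quiesce.sh ;;
        RESUME)  /opt/my_app/start.sh ;;
        DELETE)  /opt/my_app/cleanup.sh ;;
      esac

my_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_sw_config }
    server: { get_resource: my_server }
    actions: [CREATE, SUSPEND, RESUME, DELETE]

With the lifecycle hooks from my snippet above, the case statement would
go away and the framework would pick the right script per action.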

>
> Clint's StructuredConfig example would get us most of the way there,
> but a dedicated config resource might be easier to use.

Right, and that's the core of my proposal: having a dedicated config
resource that is intuitive to use for template authors.

> The deployment resource could remain agnostic to the contents of this
> resource though. The right place to handle this on the deployment
> side would be in the orc script 55-heat-config, which could infer
> whether the config was a lifecycle config, then invoke the required
> config based on the value of deploy_action.

Fully agree on that. This should be the place to handle most of the work.
I think we are saying the same thing on this topic, so I am optimistic we
can agree on a solution :-)
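
To make this a bit more concrete: I could imagine 55-heat-config receiving
a derived config roughly like the following and then dispatching on
deploy_action (this structure is purely hypothetical on my side, not what
is implemented today):

name: my_sw_config
group: lifecycle              # hypothetical marker for a lifecycle config
configs:
  CREATE: <content of create_config>
  SUSPEND: <content of suspend_config>
  RESUME: <content of resume_config>
  DELETE: <content of delete_config>
inputs:
- name: deploy_action
  value: SUSPEND              # only the SUSPEND entry above gets invoked

That way the condition handling stays in the generic hook logic, as you
say.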

>
>
> #2 Enable ad-hoc actions on software components:
<snip>
>
> Let's park this for now. Maybe one day heat templates will be used to
> represent workflow tasks, but this isn't directly related to software
> config.

I think if we get to a good conclusion on #1, this may not be a big deal
after all.
So yeah, maybe park it (but keep it in the back of our heads) and look at
it again depending on what the result for #1 looks like.

>
<snip>
> #3.1 software deployment should run just once:
> A bug has been raised because with today's implementation it can happen
> that SoftwareDeployments get executed multiple times. There has been some
> discussion around this issue but no final conclusion. An average user
> will, however, assume that his automation gets run exactly once. When
> using existing scripts, it would be an additional burden to require
> rewrites to cope with multiple invocations. Therefore, we should have a
> generic solution to the problem so that users do not have to deal with
> this complex problem.

> I'm with Clint on this one. Heat-engine cannot know the true state
> of a server just by monitoring what has been polled and signaled.
> Since it can't know, it would be dangerous for it to guess. Instead
> it should just offer all known configuration data to the server and
> allow the server to make the decision whether to execute a config
> again. I still think one more derived input value would be useful to
> help the server to make that decision. This could either be a
> datestamp for when the derived config was created, or a hash of all
> of the derived config data.

So as I said in another note, I agree that this seems best handled in the
in-instance tool; the Heat engine or the resource should probably not get
any new magic. If there is some additional state property that the
resource maintains and the in-instance tool handles, that should be fine.
What is important, I think, is that users who want to use existing
automation scripts do not have to implement much logic for interpreting
that additional "flag"; we should handle it in the generic hook invocation
logic.

Can you elaborate more on what you have in mind with the additional derived
input value?
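
E.g., is it something like this that you have in mind (the input name and
value below are purely made up on my side)?

inputs:
- name: deploy_config_hash    # hypothetical: hash over all derived config data
  value: d41d8cd98f00b204e9800998ecf8427e

The in-instance tool could then compare that value to the hash of the last
config it executed and simply skip re-execution on a match.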


>
> #3.2 dependency on heat-cfn-api:
> Some parts of current signaling still depend on the heat-cfn-api. While
> work seems underway to completely move to Heat native signaling, some
> cleanup is needed to make sure this is used throughout the code.

> This is possible for signaling now, by setting signal_transport:
> HEAT_SIGNAL on the deployment resource.
>
> Polling will be possible once this os-collect-config change lands
> and is in a release:
> https://review.openstack.org/#/c/84269/
> Native polling is enabled by setting the server resource property
> software_config_transport: POLL_SERVER_HEAT
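
Good to know. For reference, a stripped-down snippet with both settings
would then look roughly like this (resource names made up; I am assuming
the server uses user_data_format: SOFTWARE_CONFIG, as with deployments
today):

my_server:
  type: OS::Nova::Server
  properties:
    user_data_format: SOFTWARE_CONFIG
    software_config_transport: POLL_SERVER_HEAT  # native polling once 84269 lands

my_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_sw_config }
    server: { get_resource: my_server }
    signal_transport: HEAT_SIGNAL                # native signaling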

>
> #3.3 connectivity of instances to heat engine API:
> The current metadata and signaling framework has certain dependencies on
> connectivity from VMs to the Heat engine API. With some network setups,
> and in some customer environments, we hit limitations of access from VMs
> to the management server. What can be done to enable additional network
> setups?

> Some users want to run their servers in isolated neutron networks,
> which means no polling or signaling to heat. To kick off the process
> of finding a solution to this I proposed the following nova blueprint:
> https://review.openstack.org/#/c/88703/
> The nova design session for this didn't make the cut, so I'm keen to
> organize an ad-hoc session with anybody who is interested in this.
> This deserves its own session since there are other non-heat
> stakeholders who might like this too.

Interesting. I will look into it. This could be one candidate for some
discussion at the summit.

>
<snip>
> #3.6 handling of stack updates for software config:
> Stack updates are not cleanly supported with the initial software
> orchestration implementation. #1 above could address this issue, but do
> we have to do something in addition?

> Updates should work fine currently, but authors may prefer to
> represent update workloads with a lifecycle config as described in #1.
>
> However, we still have an issue when a server does a reboot or
> rebuild on a stack update, since a nova reboot or rebuild does not
> map to a heat lifecycle phase. This means we can't attach a
> deployment resource to the shutdown action, so we can't trigger
> quiescing config during reboots or rebuilds (note that quiescing
> during a server DELETE should work). Clint had a look at this a
> while back; we'll need to pick it up at some point.

Yes, those are exactly the details we have to figure out. So another good
candidate for this design session IMO.

>
> #3.7 stack-abandon and stack-adopt for software-config:
> Issues have been found for stack-abandon and stack-adopt with software
> configs that need to be addressed. Can this be handled by additional
> hooks, as outlined under #1?
>
>

> There is a problem with the way abandon and adopt are currently
> implemented. Servers will continue to poll for metadata from the
> abandoning heat, using abandoned credentials. There needs to be an
> added phase in the abandon/adopt process where the metadata served
> by the abandoning heat returns the endpoints and credentials of the
> adopting heat, so that the server can start polling for valid
> metadata again.
>
> This is more of an abandon/adopt issue than a software-config one ;)
> Maybe we can figure out the solution on the beer-track.

Agree, not really a software-config-specific item, but something that
surfaces with software config. So we can either briefly touch on it during
the session ... or run it on the beer-track ;-)

>
>
> For this design session I have my own list of items to discuss:
> #4.1 Maturing the puppet hook so it can invoke more existing puppet
> scripts
> #4.2 Make progress on the chef hook, and defining the mapping from
> chef concepts to heat config/inputs/outputs
> #4.3 Finding volunteers to write hooks for Salt, Ansible
> #5.1 Now that heatclient can include binary files, discuss enhancing
> get_file to zip the directory contents if it is pointed at a directory
> #5.2 Now that heatclient can include binary files, discuss making
> stack create/update API calls multipart/form-data so that proper
> mime data can be captured for attached files
> #6.1 Discuss options for where else metadata could be polled from (ie,
> swift)
> #6.2 Discuss whether #6.1 can lead to software-config that can work
> on an OpenStack which doesn't allow admin users or keystone domains
> (ie, rackspace)

#4.1 through #4.3 are important and seem straightforward; they are more
about finding people to do the work. If there are design issues to be
figured out, maybe we can do that offline via the ML.

#5.1 and #5.2 are really interesting and map to use cases we have also seen
internally. Is there a size limit for the binaries? Would this also cover,
e.g., sending small binaries like a wordpress install tgz instead of doing
a yum-based install? Or would the latter be something to address via #6
below?

#6 looks very interesting as well. We also thought about using swift not
only for metadata but also for sharing installables with instances in
cases where direct download from the internet, for example, is not
possible.

Regards,
Thomas
