[openstack-dev] [Heat] Where to keep data about stack breakpoints?

Tomas Sedovic tsedovic at redhat.com
Tue Jan 13 17:08:58 UTC 2015


On 01/12/2015 07:05 PM, Steven Hardy wrote:
> On Mon, Jan 12, 2015 at 04:29:15PM +0100, Tomas Sedovic wrote:
>> Hey folks,
>>
>> I did a quick proof of concept for a part of the Stack Breakpoint spec[1]
>> and I put the "does this resource have a breakpoint" flag into the metadata
>> of the resource:
>>
>> https://review.openstack.org/#/c/146123/
>>
>> I'm not sure where this info really belongs, though. It does sound like
>> metadata to me (plus we don't have to change the database schema that way),
>> but can we use it for breakpoints etc., too? Or is metadata strictly for
>> Heat users and not for engine-specific stuff?
>
> Metadata is supposed to be for template defined metadata (with the notable
> exception of server resources where we merge SoftwareDeployment metadata in
> to that defined in the template).
>
> So if we're going to use the metadata template interface as a way to define
> the breakpoint, this is OK, but do we want to mix the definition of the
> stack with this flow control data? (I personally think probably not).
>
> I can think of a couple of alternatives:
>
> 1. Use resource_data, which is intended for per-resource internal data, and
> set it based on API data passed on create/update (see Resource.data_set)
>
> 2. Store the breakpoint metadata in the environment

Ooh, I forgot about resource_data! That sounds perfect, actually.
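
Just to make this concrete, I'm imagining something like the following
in the engine (a rough sketch; data_set()/data() are the existing
per-resource data accessors, while the "has_breakpoint" key and the
wait call are made up):

     # (sketch) when the engine determines a resource has a breakpoint:
     res.data_set('has_breakpoint', 'True')

     # ...and before executing the resource's action; resource data
     # values come back as strings, hence the string comparison:
     if res.data().get('has_breakpoint') == 'True':
         wait_for_continue_signal()  # hypothetical, name made up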

>
> I think the environment may be the best option, but we'll have to work out
> how to best represent a tree of nested stacks (something the spec interface
> description doesn't consider AFAICS).

I think we have two orthogonal questions here:

1. How do end users set up and clear breakpoints
2. How does the engine store breakpoint-related data

As per the spec (and it makes perfect sense to me), users will declare 
breakpoints via the environment or through the CLI (which, as you say, 
can be translated to the environment).

But we can then read that and just store "has_breakpoint" in each 
resource's data.
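
Roughly like this, assuming the environment grows a top-level
"breakpoints" list (the key name is just a placeholder, nothing is
settled yet):

     # (sketch) read the breakpoints once at create/update time...
     breakpoints = set(
         stack.env.user_env_as_dict().get('breakpoints', []))

     # ...and record the flag in each matching resource's data
     for res in stack.resources.values():
         if res.name in breakpoints:
             res.data_set('has_breakpoint', 'True')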

The spec does mention breakpoints on nested stacks briefly:

     > For nested stack, the breakpoint would be prefixed with
     > the name of the nested template.

I'm assuming we'll need some sort of separator, but the general idea 
sounds okay to me. Something like this, perhaps:

     nested_stack/nested_template.yaml/SomeResource
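
Matching a breakpoint against a concrete resource would then just be a
path comparison, e.g.:

     def breakpoint_matches(breakpoint, path_components):
         # breakpoint:      'nested_stack/nested_template.yaml/SomeResource'
         # path_components: ['nested_stack', 'nested_template.yaml',
         #                   'SomeResource'], or just ['SomeResource']
         #                   for a top-level resource
         return breakpoint == '/'.join(path_components)

(with the caveat that we'd then have to disallow or escape '/' in the
names themselves).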


>
> If we use the environment, then no additional API interfaces are needed,
> just supporting a new key in the existing data, and python-heatclient can
> take care of translating any CLI --breakpoint argument into environment
> data.
>
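
Agreed, that's the nicest part of going through the environment. In
python-heatclient this could be as small as (a sketch; the --breakpoint
option and the "breakpoints" key are placeholders, not a settled
interface):

     # in the stack-create/stack-update shell handler, once the
     # environment files have been merged into the `env` dict:
     if args.breakpoint:                  # repeatable CLI option
         env.setdefault('breakpoints', [])
         env['breakpoints'].extend(args.breakpoint)
     # `env` then goes out in the request body as before, so the
     # REST API itself doesn't change
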
>> I also had a chat with Steve Hardy and he suggested adding a STOPPED state
>> to the stack (this isn't in the spec). While not strictly necessary to
>> implement the spec, this would help people figure out that the stack has
>> reached a breakpoint instead of just waiting on a resource that takes a long
>> time to finish (the heat-engine log and event-list still show that a
>> breakpoint was reached but I'd like to have it in stack-list and
>> resource-list, too).
>>
>> It makes more sense to me to call it PAUSED (we're not completely stopping
>> the stack creation after all, just pausing it for a bit), I'll let Steve
>> explain why that's not the right choice :-).
>
> So, I've not got strong opinions on the name, it's more the workflow:
>
> 1. User triggers a stack create/update
> 2. Heat walks the graph, hits a breakpoint and stops.
> 3. Heat explicitly triggers continuation of the create/update
>
> My argument is that (3) is always a stack update, either a PUT or PATCH
> update, e.g. we _are_ completely stopping stack creation, then a user can
> choose to re-start it (either with the same or a different definition).
>
> So, it _is_ really an end state, as a user might never choose to update
> from the stopped state, in which case *_STOPPED makes more sense.
>
> Paused implies the same action as the PATCH update, only we trigger
> continuation of the operation from the point we reached via some sort of
> user signal.
>
> If we actually pause an in-progress action via the scheduler, we'd have to
> start worrying about stuff like token expiry, hitting timeouts, resilience
> to engine restarts, etc, etc.  So forcing an explicit update seems simpler
> to me.
>
> Steve
>
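
That workflow makes sense to me. And since stack-update already does a
PATCH update via -x/--existing (re-using the stack's current template
and environment), continuing from a *_STOPPED state shouldn't need any
new client-side plumbing either, if I'm reading the client right:

     heat stack-update -x my_stack    # PATCH; re-uses template/environment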