[heat][tacker] After Stack is Created, will it change nested stack Id?

Zane Bitter zbitter at redhat.com
Thu Feb 6 20:40:59 UTC 2020


Hi Tushar,
Great question.

On 4/02/20 2:53 am, Patil, Tushar wrote:
> Hi All,
> 
> In tacker project, we are using heat API to create stack.
> Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2.

OK, so the scaled unit is a stack containing two servers and two ports.
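A minimal HOT sketch of that setup (the nested template filename `vdu.yaml` and the min/max size bounds are assumptions for illustration):

```yaml
heat_template_version: 2016-10-14

resources:
  scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1          # assumed bound
      max_size: 3          # assumed bound
      desired_capacity: 2
      resource:
        # vdu.yaml would define VDU1/CP1/VDU2/CP2 as listed below
        type: vdu.yaml
```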

> So internally heat will create two nested stacks and add the following resources to them:
> 
> child stack1
> VDU1 - OS::Nova::Server
> CP1 - OS::Neutron::Port
> VDU2 - OS::Nova::Server
> CP2- OS::Neutron::Port
> 
> child stack2
> VDU1 - OS::Nova::Server
> CP1 - OS::Neutron::Port
> VDU2 - OS::Nova::Server
> CP2- OS::Neutron::Port

In fact, Heat will create 3 nested stacks - one child stack for the 
AutoScalingGroup that contains two Template resources, which each have a 
grandchild stack (the ones you list above). I'm sure you know this, but 
I mention it because it makes what I'm about to say below clearer.

> Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack.
> 
> Since the VDU1 resource is present in both nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull the physical resource id of the resources directly from the nested child stack.

That's entirely reasonable.
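One way to sketch that bookkeeping (the sample entries below are shaped like the dicts Heat's stack-resource listing returns; the IDs are made up):

```python
def physical_ids(resources):
    """Map each resource name to its physical resource ID.

    `resources` is a list of dicts shaped like the entries returned
    by Heat's stack-resource listing (each has at least
    'resource_name' and 'physical_resource_id').
    """
    return {r["resource_name"]: r["physical_resource_id"]
            for r in resources}


# Made-up sample data, mimicking a grandchild stack's resource list.
sample = [
    {"resource_name": "VDU1", "resource_type": "OS::Nova::Server",
     "physical_resource_id": "6a2b0000-0000-0000-0000-000000000001"},
    {"resource_name": "CP1", "resource_type": "OS::Neutron::Port",
     "physical_resource_id": "9c410000-0000-0000-0000-000000000002"},
]

# Tacker can cache the grandchild stack ID once, then resolve
# physical IDs from it directly after each update.
print(physical_ids(sample)["VDU1"])
```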

> My question is: after the stack is created for the first time, will the nested child stack id ever change?

Short answer: no.
Long answer: yes ;)

In general normal updates and such will never result in the (grand)child 
stack ID changing. Even if a resource inside the stack fails (so the 
stack gets left in UPDATE_FAILED state), the next update will just try 
to update it again in-place.

Obviously if you scale down the AutoScalingGroup and then scale it back 
up again, you'll end up with a different grandchild stack there.

The only other time it gets replaced is if you use the "mark unhealthy" 
command on the template resource in the child stack (i.e. the 
autoscaling group stack), or on the AutoScalingGroup resource itself in 
the parent stack. If you do this, the (grand)child stack will be 
replaced with a whole new one. Marking only the resources within the 
grandchild stack (e.g. VDU1) will *not* cause the stack to be replaced, 
so you should be OK.
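With the CLI, that heal flow might look something like this (placeholder stack IDs; a sketch, not a tested recipe):

```shell
# Mark only VDU1 inside the grandchild stack unhealthy...
openstack stack resource mark unhealthy <grandchild-stack-id> VDU1 "healing VDU1"

# ...then re-converge with an update that reuses the existing
# template and parameters. Only the server is replaced; the
# grandchild stack ID stays the same.
openstack stack update --existing <grandchild-stack-id>
```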

In code:
https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/stack_resource.py#L106-L135

Hope that helps. Feel free to ask if you need more clarification.

cheers,
Zane.



