[openstack-dev] [heat] convergence cancel messages

Anant Patil anant.patil at hpe.com
Fri Apr 15 14:58:36 UTC 2016


On 14-Apr-16 23:09, Zane Bitter wrote:
> On 11/04/16 04:51, Anant Patil wrote:
>> On 14-Mar-16 14:40, Anant Patil wrote:
>>> On 24-Feb-16 22:48, Clint Byrum wrote:
>>>> Excerpts from Anant Patil's message of 2016-02-23 23:08:31 -0800:
>>>>> Hi,
>>>>>
>>>>> I would like to discuss various approaches towards fixing bug
>>>>> https://launchpad.net/bugs/1533176
>>>>>
>>>>> When convergence is on and a stack is stuck, there is no way to
>>>>> cancel the existing request. This feature was not implemented in
>>>>> convergence, as the user can simply issue another update on an
>>>>> in-progress stack. But if a resource worker is stuck, the new update
>>>>> will wait forever on it and the update will not be effective.
>>>>>
>>>>> The solution is to implement a cancel request. Since the work for a
>>>>> stack is distributed among heat engines, the cancel request cannot
>>>>> work the way it does in the legacy engine: many or all of the heat
>>>>> engines might be running worker threads to provision the stack.
>>>>>
>>>>> I could think of two options which I would like to discuss:
>>>>>
>>>>> (a) When a user-triggered cancel request is received, set the stack's
>>>>> current traversal to None or to something other than the current
>>>>> traversal. With this, no new check-resource workers will be triggered.
>>>>> This is fine as long as no worker is stuck: the existing workers will
>>>>> finish running, no new check-resource workers will be triggered, and
>>>>> it will be a graceful cancel. But workers that are stuck will remain
>>>>> stuck until the stack times out. To take care of such cases, we would
>>>>> have to implement logic to "poll" the DB at regular intervals (perhaps
>>>>> at each step() of the scheduler task) and bail out if the current
>>>>> traversal has been updated. Basically, each worker would "poll" the DB
>>>>> to see if the current traversal is still valid and, if not, stop
>>>>> itself. The drawback of this approach is that all the workers will be
>>>>> hitting the DB, incurring significant overhead. Moreover, all the
>>>>> stack's workers keep hitting the DB irrespective of whether they will
>>>>> ever be cancelled. The advantage is that it is probably easier to
>>>>> implement. Also, if a worker is stuck within a particular "step", this
>>>>> approach will not work.
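
Roughly, the per-step check in (a) might look like the following sketch,
where the DB lookup is stubbed out as an in-memory dict and a plain
generator stands in for the scheduler task; the names are illustrative,
not Heat's actual API:

    import eventlet

    # Stand-in for the stacks table; in Heat this would be a DB query
    # for the stack's current traversal id.
    current_traversal = {'stack-1': 'traversal-A'}

    class TraversalCancelled(Exception):
        pass

    def check_resource(stack_id, my_traversal, resource_task):
        """Drive one resource's task, polling the DB between steps.

        resource_task is any generator that yields between units of
        work, analogous to each step() of the scheduler task.
        """
        for _ in resource_task:
            # Hit the DB on every step: if the stack's current traversal
            # has moved on (cancel or a newer update), stop this worker.
            if current_traversal.get(stack_id) != my_traversal:
                raise TraversalCancelled(
                    'traversal %s was superseded' % my_traversal)
            eventlet.sleep(0)  # cooperatively yield to other workers

As noted above, this only helps if the worker actually returns from its
current step; a worker blocked inside a step never reaches the check.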
>>>>>
>>>>> (b) Another approach is to send a cancel message to all the heat
>>>>> engines when one of them receives a stack cancel request. The idea is
>>>>> to use the thread group manager in each engine to keep track of the
>>>>> threads running for a stack, and to stop the thread group when a
>>>>> cancel message is received. The advantage is that messages to cancel
>>>>> stack workers are sent only when required, and there is no other
>>>>> overhead. The drawback is that the cancel message is 'broadcast' to
>>>>> all heat engines, even those not running any workers for the given
>>>>> stack, though in such cases it is just a no-op for the heat engine
>>>>> (the message is gracefully discarded).
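
A minimal sketch of the bookkeeping that (b) needs in each engine, using
plain eventlet greenthreads; this is a simplified stand-in for the thread
group manager mentioned above, not Heat's actual implementation:

    import eventlet

    class ThreadGroupManager(object):
        """Tracks the worker greenthreads this engine runs per stack."""

        def __init__(self):
            self.groups = {}  # stack_id -> list of GreenThread

        def start(self, stack_id, func, *args, **kwargs):
            gt = eventlet.spawn(func, *args, **kwargs)
            self.groups.setdefault(stack_id, []).append(gt)
            return gt

        def stop(self, stack_id):
            # Called from the cancel message handler. If this engine has
            # no workers for the stack, this is just a no-op.
            for gt in self.groups.pop(stack_id, []):
                gt.kill()

The handler for the broadcast cancel message would simply call
stop(stack_id); on engines with no workers for that stack it falls
through as the no-op described above.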
>>>> Oh hah, I just sent (b) as an option to avoid (a) without really
>>>> thinking about (b) again.
>>>>
>>>> I don't think the cancel broadcasts are all that much of a drawback. I
>>>> do think you need to rate limit cancels though, or you give users the
>>>> chance to DDoS the system.
>>> There is no easy way to restrict the cancels, so I am choosing the
>>> option of having a "monitoring task" which runs in a separate thread.
>>> This task periodically polls the DB to check whether the current
>>> traversal has been updated. When a cancel message is received, the
>>> current traversal is updated to a new id, and the monitoring task will
>>> stop the thread group running worker threads for the previous
>>> traversal (the traversal uniquely identifies a stack operation).
>>>
>>> This will also help with checking the timeout. Currently each worker
>>> checks for timeout; I can move this to the monitoring thread, which
>>> will stop the thread group when the stack times out.
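
A sketch of what such a monitoring task could look like, with the DB
lookup and the thread group manager passed in as placeholders rather
than Heat's real interfaces:

    import time
    import eventlet

    def monitor_stack(stack_id, traversal, timeout_secs, thread_group_mgr,
                      get_current_traversal, poll_interval=5):
        """Stop the stack's worker threads on cancel or timeout.

        get_current_traversal stands in for the periodic DB check; a
        cancel request updates the stack's traversal id, so a mismatch
        means the running traversal should be stopped.
        """
        deadline = time.time() + timeout_secs
        while True:
            eventlet.sleep(poll_interval)
            if get_current_traversal(stack_id) != traversal:
                thread_group_mgr.stop(stack_id)
                return 'cancelled'
            if time.time() >= deadline:
                thread_group_mgr.stop(stack_id)
                return 'timed out'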
>>>
>>> It is better to keep these actions within the heat engine than to put
>>> more load on AMQP, which can lead to potentially complicated issues.
>>>
>>> -- Anant
>> I almost forgot to update this thread.
>>
>> After a lot of ping-pong in my head, I have taken a different approach
>> to implementing stack-update-cancel when convergence is on. Polling for
>> a traversal update in each heat engine worker is not an efficient
>> method, and neither is the broadcasting method.
>>
>> In the new implementation, when a stack-cancel-update request is
>> received, the heat engine worker will immediately cancel the eventlets
>> running locally for the stack. It then sends cancel messages only to
>> those heat engines that are working on the stack, one request per
>> engine.
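
A sketch of that flow; engines_working_on() and send_cancel() are
placeholders for the lookup of which engines hold in-progress workers for
the stack and for the targeted RPC call, respectively:

    def cancel_stack_update(stack_id, local_engine_id, thread_group_mgr,
                            engines_working_on, send_cancel):
        """Handle stack-cancel-update in the engine that received it."""
        # First, stop whatever this engine is running for the stack.
        thread_group_mgr.stop(stack_id)

        # Then send exactly one cancel message to each other engine known
        # to be working on this stack -- no broadcast to every engine.
        for engine_id in engines_working_on(stack_id):
            if engine_id != local_engine_id:
                send_cancel(engine_id, stack_id)

The point is that only engines actually working on the stack receive a
message, so there is neither a broadcast nor periodic DB polling.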
> 
> I'm concerned that this is forgetting the reason we didn't implement 
> this in convergence in the first place. The purpose of 
> stack-cancel-update is to roll the stack back to its pre-update state, 
> not to unwedge blocked resources.
> 

Yes, we thought this was never needed because we consciously decided
that the concurrent-update feature would be sufficient for users. That
is exactly why I am implementing this so late. But there were questions
about API compatibility, and what if the user really does want to cancel
the update, knowing the consequences of doing so?

> The problem with just killing a thread is that the resource gets left in 
> an unknown state. (It's slightly less dangerous if you do it only during 
> sleeps, but still the state is indeterminate.) As a result, we mark all 
> such resources UPDATE_FAILED, and anything (apart from nested stacks) in 
> a FAILED state is liable to be replaced on the next update (straight 
> away in the case of a rollback). That's why in convergence we just let 
> resources run their course rather than cancelling them, and of course we 
> are able to do so because they don't block other operations on the stack 
> until they reach the point of needing to operate on that particular 
> resource.
> 

The eventlet returns after each "step", so it's not that bad, but I do
agree that the resource might not be in a state from which it can
"resume", hence the update-replace. I acknowledge your concern here, but
I see this as the case where the user knows the stack is stuck because
of some unexpected failure that heat is not aware of, and wants to
cancel it.

> That leaves the problem of what to do when you _know_ a resource is 
> going to fail, you _want_ to replace it, and you don't want to wait for 
> the stack timeout. (In theory this problem will go away when Phase 2 of 
> convergence is fully implemented, but I agree we need a solution for 
> Phase 1.) Now that we have the mark-unhealthy API,[1] that seems to me 
> like a better candidate for the functionality to stop threads than 
> stack-cancel-update is, since its entire purpose in life is to set a 
> resource into a FAILED state so that it will get replaced on the next 
> stack update.
> 
> So from a user's perspective, they would issue stack-cancel-update to 
> start the rollback, and iff that gets stuck waiting on a resource that 
> is doomed to fail eventually and which they just want to replace, they 
> can issue resource-mark-unhealthy to just stop that resource.
> 

I was thinking of making the rollback optional when cancelling the
update. The user may want to cancel the update and issue a new one, but
not roll back.

> What do you think?
>

I think it is a good idea, but I see that a resource can be marked
unhealthy only after its operation is done. The cancel update would take
care of in-progress resources that have gone bad. I really thought
mark-unhealthy and stack-cancel-update were complementary features
rather than contradictory ones.

> cheers,
> Zane.
> 
> [1] 
> http://specs.openstack.org/openstack/heat-specs/specs/mitaka/mark-unhealthy.html
> 