[openstack-dev] [heat] convergence cancel messages
Anant Patil
anant.patil at hpe.com
Mon Apr 11 08:51:35 UTC 2016
On 14-Mar-16 14:40, Anant Patil wrote:
> On 24-Feb-16 22:48, Clint Byrum wrote:
>> Excerpts from Anant Patil's message of 2016-02-23 23:08:31 -0800:
>>> Hi,
>>>
>>> I would like to discuss various approaches to fixing bug
>>> https://launchpad.net/bugs/1533176
>>>
>>> When convergence is on, and the stack is stuck, there is no way to
>>> cancel the existing request. This feature was not implemented in
>>> convergence, since the user can simply issue another update on an
>>> in-progress stack. But if a resource worker is stuck, the new update
>>> will wait forever on it and will never take effect.
>>>
>>> The solution is to implement a cancel request. Since the work for a
>>> stack is distributed among heat engines, a cancel request cannot work
>>> the way it does in the legacy path: many or all of the heat engines
>>> might be running worker threads to provision the stack.
>>>
>>> I could think of two options which I would like to discuss:
>>>
>>> (a) When a user-triggered cancel request is received, set the stack's
>>> current traversal to None or to something other than the current
>>> traversal. With this, no new check-resource workers will be
>>> triggered. This is okay as long as no worker is stuck: the existing
>>> workers will finish running, no new check-resource workers will be
>>> triggered, and it will be a graceful cancel. But workers that are
>>> stuck will remain stuck forever, until the stack times out. To handle
>>> such cases, we would have to implement logic to "poll" the DB at
>>> regular intervals (perhaps at each step() of the scheduler task) and
>>> bail out if the current traversal has been updated. Basically, each
>>> worker would poll the DB to see if the current traversal is still
>>> valid and, if not, stop itself. The drawback of this approach is that
>>> all the workers hit the DB and incur significant overhead; moreover,
>>> all of a stack's workers keep hitting the DB whether or not they will
>>> ever be cancelled. The advantage is that it is probably easier to
>>> implement. Also, if a worker is stuck within a particular "step",
>>> this approach will not work.
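
A minimal sketch of the polling check described in option (a), in
Python; the names load_current_traversal, StackOperationCancelled and
run_steps are illustrative placeholders for this discussion, not Heat's
actual internals:

    class StackOperationCancelled(Exception):
        """Raised when the stack's current traversal no longer matches ours."""

    def load_current_traversal(stack_id):
        # Placeholder for a DB lookup of the stack's current traversal ID.
        raise NotImplementedError

    def run_steps(stack_id, my_traversal, steps):
        # Re-check the stored traversal before every step and bail out if
        # a cancel (or a newer update) has replaced it.
        for step in steps:
            if load_current_traversal(stack_id) != my_traversal:
                raise StackOperationCancelled(stack_id)
            step()

As noted above, this only helps between steps; a worker blocked inside
a single step never reaches the check.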
>>>
>>> (b) Another approach is to send a cancel message to all the heat
>>> engines when one of them receives a stack cancel request. The idea is
>>> to use the thread group manager in each engine to keep track of the
>>> threads running for a stack, and to stop the thread group when a
>>> cancel message is received. The advantage is that the messages to
>>> cancel stack workers are sent only when required and there is no
>>> other overhead. The drawback is that the cancel message is
>>> "broadcast" to all heat engines, even those not running any workers
>>> for the given stack; in that case it is simply a no-op for the
>>> heat-engine (the message is gracefully discarded).
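
An illustrative sketch of option (b), assuming a per-engine manager
that tracks worker threads by stack ID (heat-engine keeps a similar
ThreadGroupManager); the class shape, the RPC handler and thread
objects exposing kill() are assumptions for illustration, not Heat's
real interfaces:

    class ThreadGroupManager(object):
        """Tracks the worker threads each engine runs, keyed by stack ID."""

        def __init__(self):
            self._groups = {}  # stack_id -> list of green threads

        def add(self, stack_id, thread):
            self._groups.setdefault(stack_id, []).append(thread)

        def stop(self, stack_id):
            # If this engine runs nothing for the stack, this is a no-op
            # and the broadcast is simply discarded.
            for thread in self._groups.pop(stack_id, []):
                thread.kill()  # e.g. eventlet's GreenThread.kill()

    def on_cancel_broadcast(tgm, stack_id):
        # Handler each heat-engine would run when the cancel is broadcast.
        tgm.stop(stack_id)
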
>> Oh hah, I just sent (b) as an option to avoid (a) without really
>> thinking about (b) again.
>>
>> I don't think the cancel broadcasts are all that much of a drawback. I
>> do think you need to rate limit cancels though, or you give users the
>> chance to DDoS the system.
> There is no easy way to restrict the cancels, so I am choosing the
> option of having a "monitoring task" which runs in a separate thread.
> This task periodically polls the DB to check whether the current
> traversal has been updated. When a cancel message is received, the
> current traversal is updated to a new ID, and the monitoring task
> stops the thread group running the worker threads for the previous
> traversal (a traversal uniquely identifies a stack operation).
>
> Also, this will help with checking the timeout. Currently each worker
> checks for timeout; I can move this to the monitoring thread, which
> will stop the thread group when the stack times out.
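
A rough sketch of what such a monitoring task could look like, under
the same assumptions as above; load_current_traversal, tgm and the poll
interval are illustrative placeholders, not the real implementation:

    import time

    def monitor_stack(stack_id, my_traversal, deadline, tgm,
                      load_current_traversal, poll_interval=5):
        # One extra thread per stack: stop the stack's thread group when
        # the traversal changes (cancel or newer update) or when the
        # stack times out, instead of every worker polling the DB itself.
        while True:
            if load_current_traversal(stack_id) != my_traversal:
                tgm.stop(stack_id)   # cancel/update replaced the traversal
                return
            if time.time() > deadline:
                tgm.stop(stack_id)   # stack timed out
                return
            time.sleep(poll_interval)  # eventlet.sleep() in a real engine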
>
> It is better to keep these actions within the heat engine than to put
> additional load on AMQP, which can lead to potentially complicated
> issues.
>
> -- Anant
I almost forgot to update this thread.
After a lot of ping-pong in my head, I have taken a different approach
to implementing stack-update-cancel when convergence is on. Polling for
a traversal update in each heat engine worker is not an efficient
method, and neither is the broadcast method.
In the new implementation, when a stack-cancel-update request is
received, the heat engine worker immediately cancels the eventlets
running locally for the stack. It then sends cancel messages only to
those heat engines that are working on the stack, one request per
engine.
Please review the patch: https://review.openstack.org/#/c/301483/
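
A rough sketch of that flow; engines_working_on() and
rpc_client.cancel_workers() are hypothetical placeholders for however
an engine discovers its peers and messages them, not the actual
interfaces used in the patch:

    def cancel_stack_update(stack_id, tgm, rpc_client, engines_working_on):
        # 1. Immediately cancel the eventlets running locally for the stack.
        tgm.stop(stack_id)

        # 2. One targeted cancel request per engine actually working on
        #    the stack, instead of a broadcast to every heat-engine.
        for engine_id in engines_working_on(stack_id):
            rpc_client.cancel_workers(engine_id, stack_id)
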
-- Anant