[openstack-dev] [Mistral][TaskFlow] Long running actions

Adam Young ayoung at redhat.com
Tue Mar 25 16:12:07 UTC 2014


On 03/21/2014 12:33 AM, W Chan wrote:
> Can the long running task be handled by putting the target task in the 
> workflow in a persisted state until either an event triggers it or 
> timeout occurs?  An event (human approval or trigger from an external 
> system) sent to the transport will rejuvenate the task.  The timeout 
> is configurable by the end user up to a certain time limit set by the 
> mistral admin.
>
> Based on the TaskFlow examples, it seems like the engine instance 
> managing the workflow will be in memory until the flow is completed. 
>  Unless there are other options to schedule tasks in TaskFlow, if we 
> have too many of these workflows with long running tasks, seems like 
> it'll become a memory issue for mistral...
>
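The scheme sketched above can be illustrated with a small stand-alone Python simulation (all names here, e.g. `PausedTask`, `deliver_event`, `sweep_timeouts`, are hypothetical, not Mistral or TaskFlow APIs): a waiting task is persisted with an absolute deadline, so no engine instance has to stay in memory, and it is woken either by an external event or by a periodic timeout sweep.

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Simulated persistence layer: in a real engine this would be a database
# table, so nothing about a waiting task has to live in process memory.
STORE: dict[str, dict] = {}

@dataclass
class PausedTask:
    task_id: str
    deadline: float              # absolute time after which the wait times out
    state: str = "WAITING"
    result: Optional[str] = None

def pause(task_id: str, timeout_s: float, now: float) -> None:
    """Persist the waiting task and drop it from memory."""
    STORE[task_id] = asdict(PausedTask(task_id, deadline=now + timeout_s))

def deliver_event(task_id: str, payload: str) -> bool:
    """An external event (human approval, callback) rejuvenates the task."""
    row = STORE.get(task_id)
    if row is None or row["state"] != "WAITING":
        return False
    row["state"], row["result"] = "DONE", payload
    return True

def sweep_timeouts(now: float) -> list[str]:
    """Periodic sweep: expire any waiting task whose deadline has passed."""
    expired = [tid for tid, row in STORE.items()
               if row["state"] == "WAITING" and now >= row["deadline"]]
    for tid in expired:
        STORE[tid]["state"] = "TIMED_OUT"
    return expired
```

Because the only durable state is the persisted row, an engine process can be restarted freely; redelivering the event or re-running the sweep picks up exactly where things left off.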
Look into the "Trusts" capability of Keystone for Authorization support 
on long running tasks.
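The idea behind Keystone trusts for this case is that the workflow persists only a trust ID and exchanges it for a fresh scoped token when the task resumes, instead of storing a long-lived token for days. A toy sketch of that pattern (the functions below are a stand-in simulation, not the python-keystoneclient API):

```python
import secrets

# Toy stand-in for Keystone's trust store (hypothetical names): a trust
# delegates the trustor's roles to a trustee service user until it expires.
TRUSTS: dict[str, dict] = {}

def create_trust(trustor: str, trustee: str, roles: list[str],
                 expires_at: float) -> str:
    """Created when the workflow starts; only the returned ID is persisted."""
    trust_id = secrets.token_hex(8)
    TRUSTS[trust_id] = {"trustor": trustor, "trustee": trustee,
                        "roles": roles, "expires_at": expires_at}
    return trust_id

def redeem_trust(trust_id: str, trustee: str, now: float) -> dict:
    """Exchange the stored trust ID for a fresh scoped token on resume."""
    trust = TRUSTS.get(trust_id)
    if trust is None or trust["trustee"] != trustee:
        raise PermissionError("unknown trust or wrong trustee")
    if now >= trust["expires_at"]:
        raise PermissionError("trust expired")
    return {"token": secrets.token_hex(16), "roles": trust["roles"],
            "acting_for": trust["trustor"]}
```

The long-running task's persisted row then carries only `trust_id`, and the resuming worker re-authenticates as the service user when the wait ends.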


>
> On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine <dz at stackstorm.com 
> <mailto:dz at stackstorm.com>> wrote:
>
>
>>     For the 'asynchronous manner' discussion see
>>     http://tinyurl.com/n3v9lt8; I'm still not sure why you would want
>>     to make is_sync/is_async a primitive concept in a workflow
>>     system; shouldn't this be up to the entity running the
>>     workflow to decide? Why should a task be allowed to be sync/async?
>>     That has major side-effects for state persistence, resumption (and
>>     to me is an incorrect abstraction to provide) and general workflow
>>     execution control; I'd be very careful with this (which is why I
>>     am hesitant to add it without much, much more discussion).
>
>     Let's remove the confusion caused by "async". All tasks [may] run
>     async from the engine standpoint, agreed.
>
>     "Long running tasks" - that's it.
>
>     Examples: wait_5_days, run_hadoop_job, take_human_input.
>     The task doesn't do the job itself: it delegates to an external system.
>     The flow execution needs to wait (5 days passed, hadoop job
>     finished with data x, user inputs y), and then continue with the
>     received results.
>
>     The requirement is to survive a restart of any WF component
>     without losing the state of the long-running operation.
>
>     Does TaskFlow already have a way to do this, or ongoing ideas and
>     considerations? If yes, let's review them; if not, let's brainstorm together.
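One way to meet the survive-a-restart requirement is plain checkpointing: persist the set of completed tasks after each step, and on restart skip whatever is already done. A minimal sketch (hypothetical helper names, not an existing TaskFlow interface):

```python
import json
import os

def checkpoint(path: str, state: dict) -> None:
    """Atomically persist flow state so a crash mid-write can't corrupt it."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename over the old checkpoint

def restore(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

def run_flow(tasks, path: str) -> list[str]:
    """Run (name, fn) tasks in order, checkpointing after each one;
    after a restart, tasks already recorded as done are skipped."""
    state = restore(path) if os.path.exists(path) else {"done": []}
    for name, fn in tasks:
        if name in state["done"]:
            continue
        fn()
        state["done"].append(name)
        checkpoint(path, state)
    return state["done"]
```

With the checkpoint in durable storage, any workflow component can die mid-flow and a fresh process resumes from the last completed task rather than re-running the long operation.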
>
>     I agree,
>>     that has major side-effects for state persistence, resumption
>>     (and to me is an incorrect abstraction to provide) and general
>>     workflow execution control, I'd be very careful with this
>     But these requirements come from customers' use cases:
>     wait_5_days - lifecycle-management workflows; long-running external
>     system - Murano requirements; user input - workflows for operations
>     automation with control-gate checks, provisioning that requires
>     'approval' steps, etc.
>
>     DZ>
>
>
>     _______________________________________________
>     OpenStack-dev mailing list
>     OpenStack-dev at lists.openstack.org
>     <mailto:OpenStack-dev at lists.openstack.org>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
