[openstack-dev] Nova workflow management update

Joshua Harlow harlowja at yahoo-inc.com
Fri Apr 26 00:08:14 UTC 2013

I wanted to make sure everyone is aware of this, since some of you might have missed the summit session, and I'd like to get discussions going so we can land code in Havana.

For those who missed the session, here is the associated material:

- https://etherpad.openstack.org/the-future-of-orch (session details + discussion …)

The summary of what I am trying to do: move Nova away from having ad-hoc tasks and toward having a central entity (not a single entity, but a central one, one that can be horizontally scalable) which can execute these tasks on behalf of nova-compute. This central entity (a new orchestrator or conductor…) would centrally manage the workflow that Nova goes through when completing an API request, and would do so in an organized, controlled and resumable manner (it would also support rollbacks and more…). The reasons why what exists currently may not be optimal/good are listed in that etherpad, so I won't repeat them here.
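To make the idea concrete, here is a minimal sketch (hypothetical names, not the NovaOrc or Convection API) of the core pattern: a workflow is an ordered list of tasks, each with an apply step and a rollback step, run by a central engine that reverts completed tasks in reverse order when a later one fails.

```python
class Task:
    """One step in a workflow; subclasses define apply and rollback."""
    def apply(self, context):
        raise NotImplementedError

    def rollback(self, context):
        raise NotImplementedError


class ReserveQuota(Task):
    # Hypothetical stand-in for a real nova task (e.g. quota reservation).
    def apply(self, context):
        context["quota_reserved"] = True

    def rollback(self, context):
        context["quota_reserved"] = False


class SpawnInstance(Task):
    # Hypothetical stand-in for the actual instance spawn step.
    def apply(self, context):
        if context.get("fail_spawn"):
            raise RuntimeError("hypervisor error")
        context["spawned"] = True

    def rollback(self, context):
        context["spawned"] = False


def run_workflow(tasks, context):
    """Apply tasks in order; on failure, roll back completed ones in
    reverse order (like unwinding a transaction) and re-raise."""
    completed = []
    try:
        for task in tasks:
            task.apply(context)
            completed.append(task)
    except Exception:
        for task in reversed(completed):
            task.rollback(context)
        raise
    return context
```

Because the engine (not the individual services) owns the task list and the shared context, persisting that state between steps is what would make a workflow resumable after a crash.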

For example, this is a possible diagram for the run_instance 'workflow' under this new scheme: http://imgur.com/sYOVz5X

NTT Data and Yahoo! have been pursuing how to refactor this with a well-thought-out design, and even have prototype code at https://github.com/Yahoo/NovaOrc which contains some of these changes (see the last 4-10 commits). The prototype was shown in the session, but feel free to check out the code; it is based on stable/grizzly and should run if you set it up (note that no external API changes occurred).

Some of the outcomes of that meeting that are relevant here:

- Heat may have a Convection library (WIP - https://wiki.openstack.org/wiki/Convection) that this workflow restructuring can use.
--- Note: If this code is created quickly (creating a solid core), then it seems we could use it in Nova itself and start restructuring Nova to use it. This of course would also let Heat use said library, as well as Nova (and likely creates future capabilities for something like http://aws.amazon.com/swf). The talks about this are, I think, just getting started, but it seems a solid core could be created in a week or two.
--- The documentation for my attempt at what I would like this central library to do was put at https://etherpad.openstack.org/task-system (thanks to the Heat team for starting that pad).
- There was a request to document the overall design, and how to accomplish it, in more detail. I have started this at https://wiki.openstack.org/wiki/StructuredStateManagement (input is welcome).
--- More details are at https://wiki.openstack.org/wiki/StructuredStateManagementDetails (WIP), since I didn't want to clutter up the main page…
--- Other thoughts of mine are at http://lists.openstack.org/pipermail/openstack-dev/2013-April/007881.html (with other associated code).
- There was a question about how conductor fits into this picture; this is still being worked out and discussed (thoughts welcome!).
- There was talk about how live migration/resizing can take advantage of such a workflow-like system to become more secure (details in another email).
--- This one involves planning; IMHO I would like the Nova/Heat groups to focus on this core library, and when adjusting the live migration/resize path, they should use said core library. If not a core library, then the prototype code I created above (along with NTT Data) can be altered to focus on those paths instead of the initial prototype path of 'run_instance'.
- More blueprints: I have started a few at https://wiki.openstack.org/wiki/StructuredStateManagement#Blueprints
- Make a plan for getting this into mainline; I have started this at https://wiki.openstack.org/wiki/StructuredStateManagement#Plan_of_record

Discussion is always welcome! I believe we can make this happen (and, in all honesty, must make it happen).

I know there are others interested in this idea/solution, so if they want to chime in that would be wonderful :-)

