<div dir="ltr">Thanks Senhua.<div><br></div><div>For those of you with more interest in our work, we have a technical report <a href="http://www.netdb.cis.upenn.edu/papers/tropic_tr.pdf">http://www.netdb.cis.upenn.edu/papers/tropic_tr.pdf</a> with more details on HA, concurrency control, and evaluation.</div>
</div><div class="gmail_extra"><br clear="all"><div><div><br></div><div>Thanks</div><div><br></div><div>Changbin<br></div></div>
<br><br><div class="gmail_quote">On Wed, May 1, 2013 at 7:46 PM, Senhua Huang (senhuang) <span dir="ltr"><<a href="mailto:senhuang@cisco.com" target="_blank">senhuang@cisco.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">
Hi Changbin,
<div><br>
</div>

+1 on the work reported in your paper, especially the performance evaluation!
It is great to see efforts on transactional resource orchestration coming together. I can see many common characteristics shared between your work and the Y! + NTT Data work.

Thanks,
Senhua

On May 1, 2013, at 8:51 AM, Changbin Liu <changbin.liu@gmail.com> wrote:

Hi Joshua,

First, +1 on your documents and code!

My name is Changbin Liu, from AT&T Labs Research. Yun Mao (also from AT&T) and I worked on a research prototype cloud controller that supports "transactional" cloud resource orchestration (e.g., state rollback, concurrency, and consistency), and we used ZooKeeper to provide HA and to serve as the state store. I believe your work shares many interesting similarities with ours, and we would be very happy to join your efforts here to improve Nova state management.
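
To make the "ZooKeeper as state store" idea concrete, here is a minimal sketch (not our actual code; it assumes the kazoo client, a reachable ZooKeeper ensemble, and hypothetical znode paths and step names) of checkpointing orchestration progress so that any controller replica can see which steps of a request have completed:

    # Minimal sketch only -- paths and step names below are hypothetical.
    from kazoo.client import KazooClient

    STATE_ROOT = "/orchestration/requests"

    def checkpoint(zk, request_id, step):
        """Record that `step` of `request_id` has completed."""
        parent = "%s/%s" % (STATE_ROOT, request_id)
        zk.ensure_path(parent)
        path = "%s/%s" % (parent, step)
        if not zk.exists(path):
            zk.create(path, b"done")

    def completed_steps(zk, request_id):
        """Any controller replica can call this to decide where to resume."""
        parent = "%s/%s" % (STATE_ROOT, request_id)
        if not zk.exists(parent):
            return []
        return sorted(zk.get_children(parent))

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    checkpoint(zk, "req-42", "01-claim-resources")
    print(completed_steps(zk, "req-42"))
    zk.stop()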

FYI: our work is documented in a paper: https://www.usenix.org/system/files/conference/atc12/atc12-final41_0.pdf

Please let us know when we can set up a meeting to discuss further.

Thanks,

Changbin
<div class="gmail_quote">On Fri, Apr 26, 2013 at 11:10 PM, Joshua Harlow <span dir="ltr">
<<a href="mailto:harlowja@yahoo-inc.com" target="_blank">harlowja@yahoo-inc.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

Great to hear all the encouraging feedback!

Since people likely want to see code, I thought I'd throw out some pointers to what the prototype is doing:
- <a href="https://github.com/yahoo/NovaOrc/blob/master/nova/orc/manager.py#L187" target="_blank">
https://github.com/yahoo/NovaOrc/blob/master/nova/orc/manager.py#L187</a><br>
(new manager, could be part of conductor, TBD)<br>
- Workflow to fulfill the create request @<br>
<a href="https://github.com/yahoo/NovaOrc/blob/master/nova/orc/manager.py#L216" target="_blank">https://github.com/yahoo/NovaOrc/blob/master/nova/orc/manager.py#L216</a><br>
- Refactored run_instance states/plugins @<br>
<a href="https://github.com/yahoo/NovaOrc/tree/master/nova/orc/states" target="_blank">https://github.com/yahoo/NovaOrc/tree/master/nova/orc/states</a><br>
- Potential pieces of new workflow library @<br>
<a href="https://github.com/yahoo/NovaOrc/blob/master/nova/orc/states/__init__.py" target="_blank">https://github.com/yahoo/NovaOrc/blob/master/nova/orc/states/__init__.py</a><br>
<br>
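
To give a rough feel for the states/workflow idea without making anyone read the repo, here is a hedged sketch (the class and method names are illustrative, not the actual NovaOrc API): each state knows how to apply itself and how to revert, and a tiny engine runs states in order, rolling back the completed ones if a later state fails.

    # Illustrative sketch only; not the actual NovaOrc classes.
    class State(object):
        """One resumable unit of work in a workflow."""
        def apply(self, context):
            raise NotImplementedError
        def revert(self, context):
            raise NotImplementedError

    class ClaimResources(State):
        def apply(self, context):
            context["claimed"] = True
        def revert(self, context):
            context["claimed"] = False

    def run_workflow(states, context):
        """Apply states in order; on failure, revert those already applied."""
        completed = []
        try:
            for state in states:
                state.apply(context)
                completed.append(state)
        except Exception:
            for state in reversed(completed):
                state.revert(context)
            raise
        return context

    run_workflow([ClaimResources()], {})

Resumption then becomes a matter of persisting which states have completed, which is where ZooKeeper comes in below.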

Work is in progress to add the ZooKeeper piece, which will greatly improve HA and concurrency and add things like distributed resumption.
--- Related to discussion @ http://lists.openstack.org/pipermail/openstack-dev/2013-April/007881.html
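
On the concurrency side, one plausible building block (a sketch assuming the kazoo library; the lock path is hypothetical and not what the prototype uses) is a ZooKeeper distributed lock, so that only one orchestrator mutates a given instance's workflow at a time:

    # Sketch only; assumes kazoo and a hypothetical per-instance lock path.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    lock = zk.Lock("/locks/instances/inst-0001", identifier="orchestrator-1")

    with lock:  # blocks until this orchestrator holds the lock
        # Run or resume the instance's workflow here; competing
        # orchestrators wait (or can time out and move on).
        pass

    zk.stop()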

For those who are interested, we are hoping to figure out the coordination and the inter/intra team and project direction shortly (the harder problem, IMHO).

More details and code to be landed soon!

On 4/26/13 3:14 PM, "Patil, Tushar" <Tushar.Patil@nttdata.com> wrote:

>+1 on all the work done by Joshua and Rohit so far.
>I think that in a highly scalable solution like Nova, we need a framework to resume/retry/rollback the various stages of a VM in a structured way.
>I can see this is driving in that direction.
>
>- Tushar
>
>>-----Original Message-----
>>From: Mike Wilson [mailto:geekinutah@gmail.com]
>>Sent: Friday, April 26, 2013 8:30 AM
>>To: OpenStack Development Mailing List
>>Subject: Re: [openstack-dev] Nova workflow management update
>>
>>Very exciting to see this happening. Structured state management is sorely needed. I also like the plan of attack; tackling creation of an instance first is a good way to feel this one out.
>>
>>-Mike
>>
>>
>>On Fri, Apr 26, 2013 at 9:21 AM, Senhua Huang (senhuang) <senhuang@cisco.com> wrote:
>>
>> +1 on the great work on initiating the design and implementation of a more structured and "manageable" provisioning manager.
>> +1 on the documentation.
>>
>> Having state management separated from resource selection is very helpful for group scheduling, cross compute/storage/network scheduling, and an improved migration solution. It makes Nova more modular, easier to track and reason about errors, and easier to scale.
>>
>> Thanks,
>> Senhua
>>
>>
>> From: Karajgi, Rohit <Rohit.Karajgi@nttdata.com>
>> Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
>> Date: Thursday, April 25, 2013 11:07 PM
>> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] Nova workflow management update
>>
>> +1 on the really well-written plan and wikis.
>>
>> It just goes to show how important structured state management is to Nova for making it highly reliable and resilient across all APIs.
>>
>> We would eventually want to see Nova become SuperNova!! :)
>>
>> Regards,
>> Rohit
>>
>>
>> From: Adrian Otto [mailto:adrian.otto@rackspace.com]
>> Sent: Friday, April 26, 2013 9:28 AM
>> To: OpenStack Development Mailing List
>> Cc: OpenStack Development Mailing List
>> Subject: Re: [openstack-dev] Nova workflow management update
>>
>> Joshua,
>>
>> I'm one of the Rackers helping to add development resources to Convection. I also work on the OASIS CAMP TC as an editor. I want to help create a reusable task system that helps Nova, Heat, and many other OpenStack projects. I am happy to see this progressing. The recent collaboration among the Nova and Heat/Convection teams is very encouraging. Thanks for all your efforts to form a sensible written plan.
>>
>> I'd like to take an editorial pass through the StructuredStateManagement wiki and help tighten up the definitions a bit. Some wording changes may avoid some of the more overloaded technical terms (but there are practically no pure ones left). I'm planning to make a few edits to the wiki page (so there will be diffs), but if another approach is preferred, I'm open to that too. Please advise.
>>
>> Thanks,
>> Adrian
>>
>> On Apr 25, 2013, at 5:16 PM, "Joshua Harlow" <harlowja@yahoo-inc.com> wrote:
>>
>> I wanted to make sure everyone was aware of this, since some of you might have missed the summit session, and I'd like to get discussions going so we can land code in Havana.
>>
>> For those who missed the session and the associated material:
>>
>> - https://etherpad.openstack.org/the-future-of-orch (session details + discussion ...)
>>
>> The summary of what I am trying to do is to move Nova away from having ad-hoc tasks and toward a central entity (not a single entity, but a central one, one that can be horizontally scalable) which can execute these tasks on behalf of nova-compute. This central entity (a new orchestrator or conductor...) would centrally manage the workflow that Nova goes through when completing an API request, and would do so in an organized, controlled and resumable manner (it would also support rollbacks and more...). The reasons why what exists currently may not be optimal are listed in that etherpad, so I won't repeat them here.
>>
>> For example, this is a possible diagram for the run_instance "workflow" under this new scheme: http://imgur.com/sYOVz5X
>>
>> NTT Data and Y! have been pursuing how to refactor this with a well-thought-out design, and even have prototype code @ https://github.com/Yahoo/NovaOrc which has some of these changes (see the last 4-10 commits). The prototype was shown in the session, but feel free to check out the code; if you set it up (it is based on stable/grizzly), it should run (note that no external API changes occurred).
>>
>> Some of the outcomes of that meeting that are relevant here:
>>
>> - Heat may have a Convection library (WIP: https://wiki.openstack.org/wiki/Convection) that this workflow restructuring can use.
>> --- Note: if this code is created quickly (creating a solid core), then it seems like we can use it in Nova itself and start restructuring Nova to use it. This of course also allows Heat to use said library (and likely creates future capabilities for something like http://aws.amazon.com/swf). The talk about this I think is just being started, but it seems like a solid core can be created in a week or two.
>> --- The documentation for my attempt at what I would like this central library to do was put @ https://etherpad.openstack.org/task-system (thanks to the Heat team for starting that pad).
>> - There was an ask to document the overall design and how to accomplish it in more detail. I have started this @ https://wiki.openstack.org/wiki/StructuredStateManagement (input is welcome).
>> --- More details are at https://wiki.openstack.org/wiki/StructuredStateManagementDetails (WIP), since I didn't want to clutter up the main page...
>> --- Other thoughts of mine are at http://lists.openstack.org/pipermail/openstack-dev/2013-April/007881.html (with other code associated with it).
>> - There was an ask on how conductor fits into this picture; this is still being worked out and discussed (thoughts welcome!).
>> - There was talk about how live migration/resizing can take advantage of such a workflow-like system to become more secure (details in another email).
>> --- This one involves planning: IMHO I would like the Nova/Heat groups to focus on this core library, and when adjusting the live migration/resize path they should use said core library. If not a core library, then the prototype code I have created above (along with NTT Data) can be altered to focus on those paths instead of the initial prototype path of run_instance.
>> - More blueprints: I have started a few @ https://wiki.openstack.org/wiki/StructuredStateManagement#Blueprints
>> - Make a plan for how to get this into mainline; started this @ https://wiki.openstack.org/wiki/StructuredStateManagement#Plan_of_record
>>
>> Discussion is always welcome! I believe we can make this happen (and in all honesty must make it happen).
>>
>> I know there are others interested in this idea/solution, so if they want to chime in that would be wonderful :-)
>>
>> -Josh
>>

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev