<div dir="ltr"><div><div><div><div>Hi Yathiraj and Tim,<br><br></div>Really appreciate your comments here ;-)<br><br></div>I will prepare some detailed slides or documents before summit and we can have a review then. It would be great if OpenStack can provide "DRS" features.<br>
</div><div><br></div>Thanks,<br><br></div>Jay<br><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-03-01 6:00 GMT+08:00 Tim Hinrichs <span dir="ltr"><<a href="mailto:thinrichs@vmware.com" target="_blank">thinrichs@vmware.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Jay,<br>
<br>
I think the Solver Scheduler is a better fit for your needs than Congress because you know what kinds of constraints and enforcement you want.  I'm not sure this topic deserves an entire design session--maybe just talking a bit at the summit would suffice (I *think* I'll be attending).<br>

<div class=""><br>
Tim<br>
<br>
----- Original Message -----<br>
| From: "Jay Lau" <<a href="mailto:jay.lau.513@gmail.com">jay.lau.513@gmail.com</a>><br>
| To: "OpenStack Development Mailing List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a>><br>
</div><div><div class="h5">| Sent: Wednesday, February 26, 2014 6:30:54 PM<br>
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage<br>
| compute/storage resource<br>
|<br>
| Hi Tim,<br>
|<br>
| I'm not sure if we can put resource monitoring and adjustment into the<br>
| solver-scheduler (Gantt), but I have proposed this for the Gantt design<br>
| [1]; you can refer to [1] and search for "jay-lau-513".<br>
|<br>
| IMHO, Congress does monitoring and also takes actions, but the actions<br>
| seem to be mainly about adjusting a single VM's network or storage. It does not<br>
| consider migrating VMs according to hypervisor load.<br>
|<br>
| I'm not sure whether this topic deserves a design session at the coming<br>
| summit, but I will try to propose one.<br>
|<br>
|<br>
| [1] <a href="https://etherpad.openstack.org/p/icehouse-external-scheduler" target="_blank">https://etherpad.openstack.org/p/icehouse-external-scheduler</a><br>
|<br>
| Thanks,<br>
|<br>
| Jay<br>
|<br>
| 2014-02-27 1:48 GMT+08:00 Tim Hinrichs < <a href="mailto:thinrichs@vmware.com">thinrichs@vmware.com</a> > :<br>
|<br>
|<br>
| Hi Jay and Sylvain,<br>
|<br>
| The solver-scheduler sounds like a good fit to me as well. It clearly<br>
| provisions resources in accordance with policy. Does it monitor<br>
| those resources and adjust them if the system falls out of<br>
| compliance with the policy?<br>
|<br>
| I mentioned Congress for two reasons. (i) It does monitoring. (ii)<br>
| There was mention of compute, networking, and storage, and I<br>
| couldn't tell if the idea was for policy that spans OS components or<br>
| not. Congress was designed for policies spanning OS components.<br>
|<br>
|<br>
| Tim<br>
|<br>
| ----- Original Message -----<br>
|<br>
| | From: "Jay Lau" < <a href="mailto:jay.lau.513@gmail.com">jay.lau.513@gmail.com</a> ><br>
| | To: "OpenStack Development Mailing List (not for usage questions)"<br>
| | < <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a> ><br>
|<br>
|<br>
| | Sent: Tuesday, February 25, 2014 10:13:14 PM<br>
| | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal<br>
| | for OpenStack run time policy to manage<br>
| | compute/storage resource<br>
| |<br>
| | Thanks Sylvain and Tim for the great input.<br>
| |<br>
| | @Tim, I have also gone through Congress and have the same feeling as<br>
| | Sylvain: it seems that Congress is doing something similar to Gantt,<br>
| | providing a holistic way of deploying. What I want to do is provide<br>
| | some functions, very similar to VMware DRS, that can do adaptive<br>
| | scheduling automatically.<br>
| |<br>
| | @Sylvain, could you please explain in more detail what the "Pets vs.<br>
| | Cattle" analogy means?<br>
| |<br>
| | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza < <a href="mailto:sylvain.bauza@gmail.com">sylvain.bauza@gmail.com</a> ><br>
| | :<br>
| |<br>
| |<br>
| |<br>
| | Hi Tim,<br>
| |<br>
| |<br>
| | As I read your design document, it sounds more closely related to<br>
| | what the Solver Scheduler subteam is trying to focus on, i.e.<br>
| | intelligent, agnostic resource placement in a holistic way [1].<br>
| | IIRC, Jay is talking more about adaptive scheduling decisions<br>
| | based on feedback, with potential counter-measures that can be taken<br>
| | to decrease load and preserve the QoS of nodes.<br>
| |<br>
| |<br>
| | That said, maybe I'm wrong?<br>
| |<br>
| |<br>
| | [1] <a href="https://blueprints.launchpad.net/nova/+spec/solver-scheduler" target="_blank">https://blueprints.launchpad.net/nova/+spec/solver-scheduler</a><br>
| |<br>
| |<br>
| |<br>
| | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs < <a href="mailto:thinrichs@vmware.com">thinrichs@vmware.com</a> > :<br>
| |<br>
| | Hi Jay,<br>
| |<br>
| | The Congress project aims to handle something similar to your use<br>
| | cases. I just sent a note to the ML with a Congress status update<br>
| | with the tag [Congress]. It includes links to our design docs. Let<br>
| | me know if you have trouble finding it or want to follow up.<br>
| |<br>
| | Tim<br>
| |<br>
| |<br>
| |<br>
| | ----- Original Message -----<br>
| | | From: "Sylvain Bauza" < <a href="mailto:sylvain.bauza@gmail.com">sylvain.bauza@gmail.com</a> ><br>
| | | To: "OpenStack Development Mailing List (not for usage<br>
| | | questions)"<br>
| | | < <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a> ><br>
| | | Sent: Tuesday, February 25, 2014 3:58:07 PM<br>
| | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A<br>
| | | proposal<br>
| | | for OpenStack run time policy to manage<br>
| | | compute/storage resource<br>
| | |<br>
| | |<br>
| | |<br>
| | | Hi Jay,<br>
| | |<br>
| | |<br>
| | | Currently, the Nova scheduler only acts upon user request (either a<br>
| | | live migration or booting an instance). IMHO, that's something Gantt<br>
| | | should scope later on (or at least there could be some room within<br>
| | | the Scheduler for it) so that the Scheduler would be responsible for<br>
| | | managing resources in a dynamic way.<br>
| | |<br>
| | |<br>
| | | I'm thinking of the Pets vs. Cattle analogy, and I definitely think<br>
| | | that Compute resources could be treated like Pets, provided the<br>
| | | Scheduler does a move.<br>
| | |<br>
| | |<br>
| | | -Sylvain<br>
| | |<br>
| | |<br>
| | |<br>
| | | 2014-02-26 0:40 GMT+01:00 Jay Lau < <a href="mailto:jay.lau.513@gmail.com">jay.lau.513@gmail.com</a> > :<br>
| | |<br>
| | | Greetings,<br>
| | |<br>
| | |<br>
| | | I want to bring up an old topic here and get some input<br>
| | | from you experts.<br>
| | |<br>
| | |<br>
| | | Currently in Nova and Cinder, we only have some initial placement<br>
| | | policies to help customers deploy VM instances or create volumes<br>
| | | on a specified host, but after the VM or the volume has been created,<br>
| | | there is no policy to monitor the hypervisors or the storage<br>
| | | servers and take action in the following cases:<br>
| | |<br>
| | |<br>
| | | 1) Load Balance Policy: If the load of one server is too heavy,<br>
| | | we probably need to automatically migrate some VMs from highly<br>
| | | loaded servers to idle servers to keep system resource usage<br>
| | | balanced (a rough sketch of such a loop follows after the VMware<br>
| | | DRS note below).<br>
| | |<br>
| | | 2) HA Policy: If one server goes down because of a hardware failure<br>
| | | or for whatever reason, there is no policy to make sure its VMs are<br>
| | | evacuated or live migrated (ideally migrating the VMs before the<br>
| | | server goes down) to other available servers, so that customer<br>
| | | applications are not affected too much.<br>
| | |<br>
| | | 3) Energy Saving Policy: If a single host's load is lower than a<br>
| | | configured threshold, lower the CPU frequency to save energy;<br>
| | | otherwise, increase the CPU frequency (see the CPU-frequency sketch<br>
| | | right after this list). If the average load across hosts is lower<br>
| | | than a configured threshold, shut down some hypervisors to save<br>
| | | energy; otherwise, power on some hypervisors to spread the load.<br>
| | | Before powering off a hypervisor host, the energy policy needs to<br>
| | | live migrate all VMs on that hypervisor to other available<br>
| | | hypervisors; after powering on a hypervisor host, the Load Balance<br>
| | | Policy will help live migrate some VMs onto the newly powered-on<br>
| | | hypervisor.<br>
| | |<br>
| | | 4) Customized Policy: Customers can also define customized<br>
| | | policies based on their specific requirements.<br>
| | |<br>
| | | 5) Some run-time policies for block storage or even networking.<br>
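| | |<br>
| | | As a rough illustration of the CPU-frequency part of the Energy Saving Policy, a small per-host<br>
| | | agent could switch the Linux cpufreq governor through sysfs. This is only a sketch, not an<br>
| | | existing OpenStack service; the threshold value and the governor choices below are illustrative<br>
| | | assumptions, and the agent would need root on the hypervisor:<br>
| | |<br>
<pre>
# Rough sketch: a per-host agent that switches the Linux cpufreq governor
# based on the 1-minute load average. The LOW_LOAD threshold and the
# governor names ("powersave"/"ondemand") are illustrative assumptions.
import glob

LOW_LOAD = 0.2   # assumed per-CPU 1-minute load below which the host counts as idle

def set_governor(governor):
    """Write the requested cpufreq governor for every CPU via sysfs."""
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

def adjust_cpu_frequency():
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])              # 1-minute load average
    ncpus = len(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"))
    if load1 / max(ncpus, 1) >= LOW_LOAD:
        set_governor("ondemand")    # enough demand: let the kernel scale the frequency
    else:
        set_governor("powersave")   # host is nearly idle: pin the lowest frequency

if __name__ == "__main__":":
    adjust_cpu_frequency()
</pre>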
| | |<br>
| | |<br>
| | |<br>
| | | I borrowed the idea from VMware DRS (thanks, VMware DRS), and there<br>
| | | are indeed many customers who want such features.<br>
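| | |<br>
| | | To make the load-balance idea concrete, here is a rough sketch of what such a DRS-like loop<br>
| | | could look like on top of python-novaclient. It is not an existing service or API; the credential<br>
| | | handling, the vCPU-based load metric, the thresholds and the VM selection are all illustrative<br>
| | | assumptions:<br>
| | |<br>
<pre>
# Rough sketch of a DRS-like load-balance loop built on python-novaclient.
# Credentials come from the usual OS_* environment variables; the vCPU-based
# load metric, thresholds and VM selection are illustrative assumptions.
import os
import time

from novaclient import client

nova = client.Client("2",
                     os.environ["OS_USERNAME"], os.environ["OS_PASSWORD"],
                     os.environ["OS_TENANT_NAME"], os.environ["OS_AUTH_URL"])

OVERLOADED = 0.85   # assumed vCPU allocation ratio that counts as "too heavy"
IDLE = 0.30         # assumed ratio under which a host is a migration target

def vcpu_ratio(hyp):
    return float(hyp.vcpus_used) / max(hyp.vcpus, 1)

def rebalance_once():
    hypervisors = nova.hypervisors.list()   # detailed per-host stats
    busy = [h for h in hypervisors if vcpu_ratio(h) >= OVERLOADED]
    idle = sorted((h for h in hypervisors if IDLE >= vcpu_ratio(h)), key=vcpu_ratio)
    for src in busy:
        if not idle:
            break
        target = idle[0]                    # least-loaded candidate host
        # Pick one VM on the busy host and live migrate it to the idle host.
        servers = nova.servers.list(search_opts={"host": src.hypervisor_hostname,
                                                 "all_tenants": 1})
        if servers:
            servers[0].live_migrate(host=target.hypervisor_hostname)

if __name__ == "__main__":
    while True:                             # naive control loop
        rebalance_once()
        time.sleep(60)
</pre>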
| | |<br>
| | |<br>
| | |<br>
| | | I filed a blueprint for this [1] long ago, but after some discussion<br>
| | | with Russell, we think that it should not belong in Nova but in some<br>
| | | other project. So far I have not found a good place to put it;<br>
| | | can any of you share some comments?<br>
| | |<br>
| | |<br>
| | |<br>
| | | [1]<br>
| | | <a href="https://blueprints.launchpad.net/nova/+spec/resource-optimization-service" target="_blank">https://blueprints.launchpad.net/nova/+spec/resource-optimization-service</a><br>
| | |<br>
| | | --<br>
| | |<br>
| | |<br>
| | | Thanks,<br>
| | |<br>
| | | Jay<br>
| | |<br>

| |<br>
| |<br>
| | |<br>
| |<br>
| |<br>
| |<br>
| |<br>
| |<br>
| | --<br>
| |<br>
| |<br>
| | Thanks,<br>
| |<br>
| | Jay<br>
| |<br>

|<br>
|<br>
| |<br>
|<br>
|<br>
|<br>
|<br>
| --<br>
|<br>
|<br>
| Thanks,<br>
|<br>
| Jay<br>
|<br>
</div></div>|<br>

<div class="HOEnZb"><div class="h5">|<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div>Thanks,<br><br></div>Jay<br></div>
</div>