<div dir="ltr"><div><div><div>Hi Tim,<br><br></div> I'm not sure whether we can add resource monitoring and adjustment to the solver-scheduler (Gantt), but I have proposed this in the Gantt design discussion [1]; you can refer to [1] and search for "jay-lau-513".<br>
<br></div>IMHO, Congress does monitoring and also takes actions, but those actions seem to be mainly for adjusting a single VM's network or storage; it does not consider migrating VMs according to hypervisor load.<br><br></div>I'm not sure whether this topic deserves a design session at the coming summit, but I will try to propose one.<br>
<div><div><div><br>[1] <a href="https://etherpad.openstack.org/p/icehouse-external-scheduler">https://etherpad.openstack.org/p/icehouse-external-scheduler</a><br><div class="gmail_extra"><br></div><div class="gmail_extra">
Thanks,<br><br></div><div class="gmail_extra">Jay<br></div><div class="gmail_extra"><br><div class="gmail_quote">2014-02-27 1:48 GMT+08:00 Tim Hinrichs <span dir="ltr"><<a href="mailto:thinrichs@vmware.com" target="_blank">thinrichs@vmware.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Jay and Sylvain,<br>
<br>
The solver-scheduler sounds like a good fit to me as well.  It clearly provisions resources in accordance with policy.  Does it monitor those resources and adjust them if the system falls out of compliance with the policy?<br>

<br>
I mentioned Congress for two reasons. (i) It does monitoring.  (ii) There was mention of compute, networking, and storage, and I couldn't tell if the idea was for policy that spans OS components or not.  Congress was designed for policies spanning OS components.<br>

<div class=""><br>
Tim<br>
<br>
----- Original Message -----<br>
</div><div class="">| From: "Jay Lau" <<a href="mailto:jay.lau.513@gmail.com">jay.lau.513@gmail.com</a>><br>
| To: "OpenStack Development Mailing List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a>><br>
</div><div><div class="h5">| Sent: Tuesday, February 25, 2014 10:13:14 PM<br>
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage<br>
| compute/storage resource<br>
|<br>
|<br>
|<br>
|<br>
|<br>
| Thanks Sylvain and Tim for the great sharing.<br>
|<br>
| @Tim, I also went through Congress and have the same feeling as<br>
| Sylvain: it seems that Congress is doing something similar to<br>
| Gantt, providing a holistic way of deploying. What I want to do is<br>
| provide some functions very similar to VMware DRS that can do<br>
| adaptive scheduling automatically.<br>
|<br>
| @Sylvain, can you please explain in more detail what the "Pets vs.<br>
| Cattle" analogy means?<br>
|<br>
|<br>
|<br>
|<br>
| 2014-02-26 9:11 GMT+08:00 Sylvain Bauza < <a href="mailto:sylvain.bauza@gmail.com">sylvain.bauza@gmail.com</a> > :<br>
|<br>
|<br>
|<br>
| Hi Tim,<br>
|<br>
|<br>
| As I read your design document, it sounds more closely related to<br>
| what the Solver Scheduler subteam is trying to focus on, i.e.<br>
| intelligent, agnostic resource placement in a holistic way [1].<br>
| IIRC, Jay is talking more about adaptive scheduling decisions based<br>
| on feedback, with potential counter-measures that can be taken to<br>
| decrease load and preserve the QoS of nodes.<br>
|<br>
|<br>
| That said, maybe I'm wrong?<br>
|<br>
|<br>
| [1] <a href="https://blueprints.launchpad.net/nova/+spec/solver-scheduler" target="_blank">https://blueprints.launchpad.net/nova/+spec/solver-scheduler</a><br>
|<br>
|<br>
|<br>
| 2014-02-26 1:09 GMT+01:00 Tim Hinrichs < <a href="mailto:thinrichs@vmware.com">thinrichs@vmware.com</a> > :<br>
|<br>
|<br>
|<br>
|<br>
| Hi Jay,<br>
|<br>
| The Congress project aims to handle something similar to your use<br>
| cases. I just sent a note to the ML with a Congress status update<br>
| with the tag [Congress]. It includes links to our design docs. Let<br>
| me know if you have trouble finding it or want to follow up.<br>
|<br>
| Tim<br>
|<br>
|<br>
|<br>
| ----- Original Message -----<br>
| | From: "Sylvain Bauza" < <a href="mailto:sylvain.bauza@gmail.com">sylvain.bauza@gmail.com</a> ><br>
| | To: "OpenStack Development Mailing List (not for usage questions)"<br>
| | < <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a> ><br>
| | Sent: Tuesday, February 25, 2014 3:58:07 PM<br>
| | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal<br>
| | for OpenStack run time policy to manage<br>
| | compute/storage resource<br>
| |<br>
| |<br>
| |<br>
| | Hi Jay,<br>
| |<br>
| |<br>
| | Currently, the Nova scheduler only acts upon user request (either<br>
| | a live migration or booting an instance). IMHO, that's something<br>
| | Gantt should scope later on (or at least there could be some space<br>
| | within the Scheduler) so that the Scheduler would be responsible<br>
| | for managing resources in a dynamic way.<br>
| |<br>
| |<br>
| | I'm thinking of the Pets vs. Cattle analogy, and I definitely<br>
| | think that Compute resources could be treated like Pets, provided<br>
| | the Scheduler handles the move.<br>
| |<br>
| |<br>
| | -Sylvain<br>
| |<br>
| |<br>
| |<br>
| | 2014-02-26 0:40 GMT+01:00 Jay Lau < <a href="mailto:jay.lau.513@gmail.com">jay.lau.513@gmail.com</a> > :<br>
| |<br>
| |<br>
| |<br>
| |<br>
| | Greetings,<br>
| |<br>
| |<br>
| | Here I want to bring up an old topic and get some input from you<br>
| | experts.<br>
| |<br>
| |<br>
| | Currently, in Nova and Cinder, we only have some initial placement<br>
| | policies to help customers deploy a VM instance or create a storage<br>
| | volume on a specified host, but after the VM or volume is created,<br>
| | there is no policy to monitor the hypervisors or storage servers<br>
| | and take action in the following cases:<br>
| |<br>
| |<br>
| | 1) Load Balance Policy: If the load of one server is too heavy, we<br>
| | may need to automatically migrate some VMs from heavily loaded<br>
| | servers to idle servers to keep system resource usage balanced.<br>
| |<br>
| | 2) HA Policy: If one server goes down due to hardware failure or<br>
| | any other reason, there is no policy to ensure the VMs are<br>
| | evacuated or live migrated (ideally migrating the VMs before the<br>
| | server goes down) to other available servers, so that customer<br>
| | applications are not affected too much.<br>
| |<br>
| | 3) Energy Saving Policy: If a single host's load is lower than a<br>
| | configured threshold, lower the CPU frequency to save energy;<br>
| | otherwise, increase the CPU frequency. If the average load is lower<br>
| | than a configured threshold, shut down some hypervisors to save<br>
| | energy; otherwise, power on some hypervisors to balance the load.<br>
| | Before powering off a hypervisor host, the energy policy needs to<br>
| | live migrate all VMs on that hypervisor to other available<br>
| | hypervisors; after powering on a hypervisor host, the Load Balance<br>
| | Policy will help live migrate some VMs to the newly powered-on<br>
| | hypervisor.<br>
| |<br>
| | 4) Customized Policy: Customers can also define customized<br>
| | policies based on their specific requirements.<br>
| |<br>
| | 5) Some run-time policies for block storage or even network.<br>
| |<br>
| |<br>
| |<br>
| | I borrowed this idea from VMware DRS (thanks, VMware DRS), and<br>
| | indeed many customers want such features.<br>
| |<br>
| |<br>
| |<br>
| | I filed a blueprint here [1] long ago, but after some discussion<br>
| | with Russell, we think that this should not belong to Nova but to<br>
| | another project. So far, I have not found a good place to put this;<br>
| | can any of you offer some comments?<br>
| |<br>
| |<br>
| |<br>
| | [1]<br>
| | <a href="https://blueprints.launchpad.net/nova/+spec/resource-optimization-service" target="_blank">https://blueprints.launchpad.net/nova/+spec/resource-optimization-service</a><br>
| |<br>
| | --<br>
| |<br>
| |<br>
| | Thanks,<br>
| |<br>
| | Jay<br>
| |<br>
| | _______________________________________________<br>
| | OpenStack-dev mailing list<br>
| | <a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
| | <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
| |<br>
| |<br>
| |<br>

|<br>
|<br>
| |<br>
|<br>
|<br>
|<br>
|<br>
|<br>
|<br>
|<br>
| --<br>
|<br>
|<br>
| Thanks,<br>
|<br>
| Jay<br>
|<br>
</div></div>

<div class=""><div class="h5">|<br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div>Thanks,<br><br></div>Jay<br></div>
</div></div></div></div></div>