[openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

Tim Hinrichs thinrichs at vmware.com
Fri Feb 28 22:00:26 UTC 2014


Hi Jay,

I think the Solver Scheduler is a better fit for your needs than Congress because you know what kinds of constraints and enforcement you want.  I'm not sure this topic deserves an entire design session--maybe just talking a bit at the summit would suffice (I *think* I'll be attending).

Tim

----- Original Message -----
| From: "Jay Lau" <jay.lau.513 at gmail.com>
| To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
| Sent: Wednesday, February 26, 2014 6:30:54 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage
| compute/storage resource
| 
| 
| Hi Tim,
| 
| I'm not sure whether resource monitoring and adjustment can be put into
| the solver scheduler (Gantt), but I have proposed this for the Gantt
| design [1]; you can refer to [1] and search for "jay-lau-513".
| 
| IMHO, Congress does monitoring and also takes actions, but the actions
| seem mainly aimed at adjusting a single VM's network or storage; it does
| not consider migrating VMs according to hypervisor load.
| 
| I'm not sure whether this topic deserves a design session at the coming
| summit, but I will try to propose one.
| 
| 
| 
| 
| [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
| 
| 
| 
| Thanks,
| 
| 
| Jay
| 
| 
| 
| 2014-02-27 1:48 GMT+08:00 Tim Hinrichs < thinrichs at vmware.com > :
| 
| 
| Hi Jay and Sylvain,
| 
| The solver-scheduler sounds like a good fit to me as well. It clearly
| provisions resources in accordance with policy. Does it monitor
| those resources and adjust them if the system falls out of
| compliance with the policy?
| 
| I mentioned Congress for two reasons. (i) It does monitoring. (ii)
| There was mention of compute, networking, and storage, and I
| couldn't tell if the idea was for policy that spans OS components or
| not. Congress was designed for policies spanning OS components.
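| 
| As an aside, to make "policies spanning OS components" concrete, here is
| a purely illustrative Python sketch of the kind of cross-component check
| such a policy implies. Congress itself expresses policies declaratively
| (in Datalog); the data shapes and names below are hypothetical:
| 
|     # Flag VMs attached to a 'production'-tagged network that are not
|     # running on a host belonging to a (hypothetical) production set.
|     def violations(vms, networks, prod_hosts):
|         prod_nets = {n['id'] for n in networks if n.get('tag') == 'production'}
|         return [vm['id'] for vm in vms
|                 if vm['network_id'] in prod_nets and vm['host'] not in prod_hosts]
| 
| A monitoring loop could evaluate such a check periodically and report or
| remediate any violations it finds.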
| 
| 
| Tim
| 
| ----- Original Message -----
| 
| | From: "Jay Lau" < jay.lau.513 at gmail.com >
| | To: "OpenStack Development Mailing List (not for usage questions)"
| | < openstack-dev at lists.openstack.org >
| 
| 
| | Sent: Tuesday, February 25, 2014 10:13:14 PM
| | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
| | for OpenStack run time policy to manage
| | compute/storage resource
| | 
| | Thanks, Sylvain and Tim, for sharing all of this.
| | 
| | @Tim, I have also gone through Congress and have the same feeling as
| | Sylvain: it looks like Congress is doing something similar to Gantt,
| | providing a holistic way of deploying. What I want to do is provide
| | functionality very similar to VMware DRS that can do adaptive
| | scheduling automatically.
| | 
| | @Sylvain, can you please explain in more detail what the "pets vs.
| | cattle" analogy means?
| | 
| | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza < sylvain.bauza at gmail.com >
| | :
| | 
| | 
| | 
| | Hi Tim,
| | 
| | 
| | Reading your design document, it sounds more closely related to what
| | the Solver Scheduler subteam is trying to focus on, i.e. intelligent,
| | agnostic resource placement done in a holistic way [1]. IIRC, Jay is
| | talking more about adaptive scheduling decisions based on feedback,
| | with potential counter-measures that can be taken to decrease load and
| | preserve the QoS of the nodes.
| | 
| | That said, maybe I'm wrong?
| | 
| | 
| | [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
| | 
| | 
| | 
| | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs < thinrichs at vmware.com > :
| | 
| | Hi Jay,
| | 
| | The Congress project aims to handle something similar to your use
| | cases. I just sent a note to the ML with a Congress status update
| | with the tag [Congress]. It includes links to our design docs. Let
| | me know if you have trouble finding it or want to follow up.
| | 
| | Tim
| | 
| | 
| | 
| | ----- Original Message -----
| | | From: "Sylvain Bauza" < sylvain.bauza at gmail.com >
| | | To: "OpenStack Development Mailing List (not for usage
| | | questions)"
| | | < openstack-dev at lists.openstack.org >
| | | Sent: Tuesday, February 25, 2014 3:58:07 PM
| | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A
| | | proposal
| | | for OpenStack run time policy to manage
| | | compute/storage resource
| | | 
| | | 
| | | 
| | | Hi Jay,
| | | 
| | | 
| | | Currently, the Nova scheduler only acts upon user requests (either a
| | | live migration or booting an instance). IMHO, that's something Gantt
| | | should scope later on (or at least there could be some space within
| | | the scheduler) so that the scheduler would be responsible for
| | | managing resources in a dynamic way.
| | | 
| | | I'm thinking of the pets vs. cattle analogy, and I definitely think
| | | that compute resources could be treated like pets, provided the
| | | scheduler can move them.
| | | 
| | | 
| | | -Sylvain
| | | 
| | | 
| | | 
| | | 2014-02-26 0:40 GMT+01:00 Jay Lau < jay.lau.513 at gmail.com > :
| | | 
| | | Greetings,
| | | 
| | | 
| | | I want to bring up an old topic here and get some input from you
| | | experts.
| | | 
| | | 
| | | Currently, in Nova and Cinder, we only have some initial placement
| | | policies to help a customer deploy a VM instance or create a volume
| | | on a specified host, but after the VM or volume is created, there is
| | | no policy to monitor the hypervisors or the storage servers and take
| | | action in cases such as the following:
| | | 
| | | 
| | | 1) Load Balance Policy: if the load on one server is too heavy, we
| | | probably need to automatically migrate some VMs from heavily loaded
| | | servers to idle servers so that system resource usage stays balanced
| | | (see the sketch after this list).
| | | 
| | | 2) HA Policy: if one server goes down because of a hardware failure
| | | or for any other reason, there is no policy to make sure the VMs are
| | | evacuated, or live migrated before the server goes down, to other
| | | available servers so that customer applications are not affected too
| | | much.
| | | 
| | | 3) Energy Saving Policy: if a single host's load is lower than a
| | | configured threshold, lower the CPU frequency to save energy;
| | | otherwise, increase the CPU frequency. If the average load is lower
| | | than a configured threshold, shut down some hypervisors to save
| | | energy; otherwise, power on some hypervisors to balance the load.
| | | Before powering off a hypervisor host, the energy policy needs to
| | | live migrate all VMs on that hypervisor to other available
| | | hypervisors; after powering on a hypervisor host, the Load Balance
| | | Policy will live migrate some VMs to the newly powered-on
| | | hypervisor.
| | | 
| | | 4) Customized Policy: customers can also define their own policies
| | | based on their specific requirements.
| | | 
| | | 5) Some run-time policies for block storage or even networking.
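| | | 
| | | To make the intent of 1) and 3) concrete, here is a minimal,
| | | hypothetical sketch of such a load-balance loop built on
| | | python-novaclient. The thresholds, the pick_target() helper, the
| | | credentials and the use of memory as the load metric are all
| | | assumptions for illustration; a real service would need filtering,
| | | rate limiting and error handling, and the client constructor
| | | arguments vary by novaclient release:
| | | 
| | |     import time
| | | 
| | |     from novaclient import client as nova_client
| | | 
| | |     BUSY_THRESHOLD = 0.85  # hypothetical: >85% memory used is overloaded
| | |     IDLE_THRESHOLD = 0.50  # hypothetical: migration targets stay below 50%
| | | 
| | |     def memory_ratio(hyp):
| | |         # Fraction of the hypervisor's memory currently in use.
| | |         return float(hyp.memory_mb_used) / hyp.memory_mb
| | | 
| | |     def pick_target(hypervisors):
| | |         # Choose the least-loaded host below the idle threshold, if any.
| | |         idle = [h for h in hypervisors if memory_ratio(h) < IDLE_THRESHOLD]
| | |         return min(idle, key=memory_ratio) if idle else None
| | | 
| | |     def rebalance_once(nova):
| | |         hypervisors = nova.hypervisors.list()
| | |         for hyp in hypervisors:
| | |             if memory_ratio(hyp) <= BUSY_THRESHOLD:
| | |                 continue
| | |             target = pick_target(hypervisors)
| | |             if target is None:
| | |                 # Nowhere to move load; an energy policy could power on a host here.
| | |                 continue
| | |             servers = nova.servers.list(
| | |                 search_opts={'host': hyp.hypervisor_hostname, 'all_tenants': 1})
| | |             if servers:
| | |                 # Move one VM per pass; live migration is asynchronous in Nova.
| | |                 servers[0].live_migrate(host=target.hypervisor_hostname)
| | | 
| | |     if __name__ == '__main__':
| | |         # Credentials and auth URL would come from configuration in a real tool.
| | |         nova = nova_client.Client('2', 'user', 'password', 'project',
| | |                                   'http://keystone:5000/v2.0')
| | |         while True:
| | |             rebalance_once(nova)
| | |             time.sleep(60)
| | | 
| | | The HA policy in 2) would be a similar loop keyed off host state (for
| | | example, evacuating VMs when a host is reported down), and the energy
| | | policy in 3) would add power management around the same live-migration
| | | primitive.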
| | | 
| | | 
| | | 
| | | I borrowed the idea from VMware DRS (thanks, VMware DRS), and there
| | | are indeed many customers who want such features.
| | | 
| | | 
| | | 
| | | I filed a blueprint [1] a long time ago, but after some discussion
| | | with Russell, we think this should not belong in Nova but in some
| | | other project. So far I have not found a good place to put it; could
| | | any of you share some comments?
| | | 
| | | 
| | | 
| | | [1]
| | | https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
| | | 
| | | --
| | | 
| | | 
| | | Thanks,
| | | 
| | | Jay
| | | 
| | | _______________________________________________
| | | OpenStack-dev mailing list
| | | OpenStack-dev at lists.openstack.org
| | | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| | | 
| | 
| | 
| | | 
| | 
| | _______________________________________________
| | OpenStack-dev mailing list
| | OpenStack-dev at lists.openstack.org
| | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| | 
| | 
| | 
| | 
| | 
| | --
| | 
| | 
| | Thanks,
| | 
| | Jay
| | 
| | _______________________________________________
| | OpenStack-dev mailing list
| | OpenStack-dev at lists.openstack.org
| | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 
| 
| | 
| 
| _______________________________________________
| OpenStack-dev mailing list
| OpenStack-dev at lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 
| 
| 
| --
| 
| 
| Thanks,
| 
| Jay
| 
| _______________________________________________
| OpenStack-dev mailing list
| OpenStack-dev at lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 



More information about the OpenStack-dev mailing list