[openstack-dev] Scheduler proposal

Clint Byrum clint at fewbar.com
Sun Oct 11 07:23:27 UTC 2015

Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:
> To everybody,
> Just curious why we need such complexity.
> Let's take a look from the other side:
> 1) Information about all hosts (even in the case of 100k hosts) will be
> less than 1 GB
> 2) Usually servers that run the scheduler service have at least 64 GB of
> RAM on board
> 3) math.log(100000, 2) < 17 (depth of one binary search, per rule)
> 4) We have fewer than 20 rules for scheduling
> 5) Information about hosts is updated every 60 seconds (a host that stops
> sending updates is considered dead)
> According to this information:
> 1) We can store everything in the RAM of a single server
> 2) We can use Python
> 3) Information about hosts is temporary data and shouldn't be stored in
> persistent storage
> The simplest architecture to cover this:
> 1) A single RPC service with two methods: find_host(rules) and
> update_host(host, data)
> 2) Store information about hosts as a dict (host_name -> data)
> 3) Create a binary tree for each rule and update it on each host update
> 4) Make an algorithm that uses the binary trees to find a host based on
> the rules
> 5) Each service (compute node, volume node, neutron, etc.) sends updates
> about the hosts that it manages (cross-service scheduling)
> 6) Make an algorithm that syncs host stats in memory between the
> different schedulers
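
For concreteness, steps 1-4 above might look something like the sketch
below. This is a minimal illustration, not a proposal: I'm substituting
sorted lists plus bisect for explicit binary trees (same O(log n) search),
and I'm inventing a rule semantic of "metric >= minimum" since the rules
themselves aren't specified. All names are made up.

    import bisect
    import random

    class InMemoryScheduler:
        """Steps 1-4: hosts in a dict, plus one sorted,
        binary-searchable index per metric."""

        def __init__(self, metrics):
            self.hosts = {}                          # host_name -> {metric: value}
            self.indexes = {m: [] for m in metrics}  # metric -> sorted [(value, host)]

        def update_host(self, host, data):
            old = self.hosts.get(host)
            for metric, index in self.indexes.items():
                if old is not None:
                    # Binary-search for the stale entry and drop it.
                    del index[bisect.bisect_left(index, (old[metric], host))]
                bisect.insort(index, (data[metric], host))
            self.hosts[host] = data

        def find_host(self, rules):
            """rules: non-empty {metric: minimum}. Returns a host
            satisfying every rule, or None."""
            candidates = None
            for metric, minimum in rules.items():
                index = self.indexes[metric]
                # One binary search per rule; everything from the cut
                # point onward satisfies "value >= minimum".
                cut = bisect.bisect_left(index, (minimum, ''))
                matching = {host for _, host in index[cut:]}
                candidates = matching if candidates is None else candidates & matching
                if not candidates:
                    return None
            # Pick arbitrarily among the survivors.
            return random.choice(sorted(candidates))

    s = InMemoryScheduler(['free_ram_mb', 'free_disk_gb'])
    s.update_host('node1', {'free_ram_mb': 2048, 'free_disk_gb': 80})
    s.update_host('node2', {'free_ram_mb': 8192, 'free_disk_gb': 20})
    s.find_host({'free_ram_mb': 4096, 'free_disk_gb': 10})  # -> 'node2'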

I'm in, except I think this gets simpler with an intermediary service
like ZK/Consul to keep track of this 1 GB of data: it replaces the need
for (6) and changes the implementation of (5) to "update its record and
signal its presence".
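
With ZooKeeper, for example, the whole of (5) could be as small as the
sketch below (using the kazoo client; the /hosts path and the JSON payload
are just placeholders I picked). The ephemeral flag is the "signals its
presence" part: the znode disappears on its own when the session dies.

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    def update_host(host, data):
        # Compute/volume/network service side: (re)publish this host's
        # stats. The ephemeral znode vanishes when the session dies, so
        # a dead host drops out of /hosts without any timeout logic.
        path = '/hosts/' + host
        payload = json.dumps(data).encode('utf-8')
        if zk.exists(path):
            zk.set(path, payload)
        else:
            zk.create(path, payload, ephemeral=True, makepath=True)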

What you've described is where I'd like to experiment, but I don't want
to reinvent ZK or Consul or etcd when they already exist and do such a
splendid job of keeping observers informed of small changes in small
data sets. You still end up with the same in-memory performance, and
this is in line with published white papers from Google on their use of
Chubby, which is their equivalent of ZK/Consul.
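
The "observers informed" part is what keeps each scheduler's working set
hot. Continuing the kazoo sketch above (paths and shapes still made up),
a scheduler can mirror /hosts into a local dict with a watch, so
find_host() never touches the network:

    import json
    from kazoo.client import KazooClient

    hosts = {}  # the scheduler's in-memory working set: host_name -> data

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()
    zk.ensure_path('/hosts')

    @zk.ChildrenWatch('/hosts')
    def on_membership_change(children):
        # Fires when a host registers or its ephemeral znode expires.
        for gone in set(hosts) - set(children):
            del hosts[gone]
        for host in children:
            data, _stat = zk.get('/hosts/' + host)
            hosts[host] = json.loads(data)

A real version would also hang a DataWatch on each child to pick up stat
changes between membership events, but the shape is the same: ZK holds
the authoritative copy, every scheduler holds a hot replica, and (6)
falls out for free.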
