[openstack-dev] A simple way to improve nova scheduler

Joe Gordon joe.gordon0 at gmail.com
Tue Jul 23 18:35:35 UTC 2013


On Jul 22, 2013 7:13 PM, "Joshua Harlow" <harlowja at yahoo-inc.com> wrote:
>
> An interesting idea; I'm not sure how useful it is, but it could be.
>
> If you think of the compute node capability information as an 'event
stream', then you could imagine using something like Apache Flume (
http://flume.apache.org/) or Storm (http://storm-project.net/) to sit on
this stream and perform real-time analytics on it in order to update how
scheduling is performed. Maybe the MQ or ceilometer could be the 'stream'
source, but it doesn't seem necessary to tie the implementation to those
methods. If you consider compute nodes as producers of this data, and then
hook a real-time processing engine on top that can adjust some scheduling
database used by a scheduler, then you could vary how often compute nodes
produce the stream info, and where and how that info is stored and
analyzed, which in turn lets you adjust how 'real-time' you want the
compute scheduling capability information to be.
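
As a minimal sketch of that idea (not tied to Flume or Storm specifically;
all names below are hypothetical): compute nodes act as producers on a
stream, and a consumer folds the events into an in-memory scheduling view.

import collections
import queue
import threading
import time

HostCapability = collections.namedtuple(
    'HostCapability', ['host', 'free_ram_mb', 'free_disk_gb', 'timestamp'])

class SchedulingView(object):
    """In-memory view of host capabilities, fed by an event stream."""

    def __init__(self):
        self._lock = threading.Lock()
        self._hosts = {}

    def apply_event(self, event):
        # Keep only the newest event per host.
        with self._lock:
            current = self._hosts.get(event.host)
            if current is None or event.timestamp > current.timestamp:
                self._hosts[event.host] = event

    def snapshot(self):
        with self._lock:
            return dict(self._hosts)

def consume(stream, view, stop):
    """Drain capability events from the stream into the view."""
    while not stop.is_set():
        try:
            event = stream.get(timeout=1.0)
        except queue.Empty:
            continue
        view.apply_event(event)

# Producers (compute nodes) put events on the stream at whatever rate is
# chosen; the consumer keeps the view as "real-time" as that rate allows.
stream = queue.Queue()
view = SchedulingView()
stop = threading.Event()
threading.Thread(target=consume, args=(stream, view, stop), daemon=True).start()
stream.put(HostCapability('node-1', 4096, 80, time.time()))
time.sleep(0.2)
print(view.snapshot())
stop.set()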

Interesting idea, but I'm not sure it's the right solution.  There are two
known issues today:
* Periodic updates can overwhelm things.  Solution: remove unneeded
updates; most scheduling data only changes when an instance goes through a
state change (see the sketch below).
* According to Boris, doing a "get all hosts" from the DB doesn't scale.
Solution: there are several possibilities.

Neither of today's scaling issues is helped by Flume, but the concept may
be useful in the future.
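
A rough sketch of the first point above (only report when the resource
view actually changed); the reporter class and its update_host_state RPC
call are hypothetical, not existing nova code:

import hashlib
import json

class ResourceReporter(object):
    def __init__(self, rpc_client):
        self._rpc = rpc_client
        self._last_digest = None

    def maybe_report(self, host, resources):
        """Send a scheduler update only if the resource view changed."""
        digest = hashlib.sha1(
            json.dumps(resources, sort_keys=True).encode()).hexdigest()
        if digest == self._last_digest:
            return False  # nothing changed, skip this periodic update
        self._rpc.update_host_state(host, resources)  # hypothetical RPC call
        self._last_digest = digest
        return True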

>
> Just seems that real-time processing is a similar model to what is
needed here.
>
> Maybe something like that is where this should end up?
>
> -Josh
>
> From: Joe Gordon <joe.gordon0 at gmail.com>
> Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Date: Monday, July 22, 2013 3:47 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>
> Subject: Re: [openstack-dev] A simple way to improve nova scheduler
>
>
>
>
> On Mon, Jul 22, 2013 at 5:16 AM, Boris Pavlovic <boris at pavlovic.me> wrote:
>>
>> Joe,
>>
>> >> Speaking of Chris Behrens: "Relying on anything but the DB for
current memory free, etc, is just too laggy… so we need to stick with it,
IMO."
http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html
>>
>> It doesn't scale, uses tons of resources, works slowly, and is hard to
extend.
>> Also, getting free and used memory is done by the virt layer.
>> And the only thing that could be laggy is RPC (but that is also used by
the compute node update).
>
>
> You say it doesn't scale and uses tons of resources; can you show how to
reproduce your findings?  Also, just because the current implementation of
the scheduler is non-optimal doesn't mean that no-DB is the only solution. I
am interested in seeing other possible solutions before going down such a
drastically different road (no-db), such as pushing more of the logic into
the DB instead of searching through all compute nodes in Python space, or
removing the periodic updates altogether, or something else.
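
As a rough illustration of the "push more of the logic into the DB" option
(a sketch only: SQLAlchemy 1.4+ style, column names loosely mirroring
nova's compute_nodes table, not the actual nova DB API):

import sqlalchemy as sa

metadata = sa.MetaData()
compute_nodes = sa.Table(
    'compute_nodes', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('hypervisor_hostname', sa.String(255)),
    sa.Column('free_ram_mb', sa.Integer),
    sa.Column('free_disk_gb', sa.Integer),
    sa.Column('vcpus', sa.Integer),
    sa.Column('vcpus_used', sa.Integer),
)

def candidate_hosts(conn, ram_mb, disk_gb, vcpus, limit=50):
    """Let the database do the coarse filtering and ordering."""
    query = (
        sa.select(compute_nodes.c.hypervisor_hostname)
        .where(compute_nodes.c.free_ram_mb >= ram_mb)
        .where(compute_nodes.c.free_disk_gb >= disk_gb)
        .where(compute_nodes.c.vcpus - compute_nodes.c.vcpus_used >= vcpus)
        .order_by(compute_nodes.c.free_ram_mb.desc())
        .limit(limit)
    )
    return [row[0] for row in conn.execute(query)]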
>
>>
>>
>>
>> >> * How do you bring a new scheduler up in an existing deployment and
make it get the full state of the system?
>>
>> You should wait for one periodic task interval, and then you will have
full information about all compute nodes.
>
>
> Sure, that may work; we would need to add logic to handle this.
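
One possible shape for that logic, as a sketch only (the warm-up gate and
its behavior are hypothetical): a freshly started scheduler holds off until
a full periodic interval has elapsed, so its host view can be assumed
complete.

import time

PERIODIC_INTERVAL = 60.0   # matches the default compute periodic task interval

class WarmupGate(object):
    def __init__(self, interval=PERIODIC_INTERVAL):
        self._started_at = time.time()
        self._interval = interval

    def ready(self):
        """True once a full periodic interval has elapsed since startup."""
        return time.time() - self._started_at >= self._interval

    def schedule(self, request, pick_host):
        if not self.ready():
            # Host view may still be incomplete; delay or retry the request.
            raise RuntimeError('scheduler still warming up, retry later')
        return pick_host(request)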
>
>>
>> >> * Broadcasting RPC updates from compute nodes to the scheduler means
every scheduler has to process the same RPC message.  And if a deployment
hits the point where the number of compute updates is consuming 99 percent
of the scheduler's time, just adding another scheduler won't fix anything,
as it will get bombarded too.
>>
>>
>> If we are speaking about numbers, you can see our doc, where they are
counted.
>> If we have 10k nodes it will make only ~150 RPC calls/sec (which is
nothing for the CPU). By the way, we will also remove those 150 calls/s
from the conductor. One more thing: in a 10k-node deployment I think we
currently spend almost all of our time waiting on the DB
(compute_node_get_all()). And each time we call this method, the scheduler
has to process the data for all compute nodes collected over the 60-second
interval, so in numbers the current approach does roughly
60 * requests_per_sec times the scheduler-side work of ours, which means
that with more than 1 request per second it produces more CPU load.
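
The arithmetic behind those claims, spelled out (assuming the 60-second
default periodic interval; the 2 requests/sec figure is only an example):

nodes = 10000
periodic_interval = 60.0              # seconds, the default

updates_per_sec = nodes / periodic_interval
print(updates_per_sec)                # ~167 RPC updates/sec (Boris's ~150)

# Current approach: every boot request pulls and processes all compute-node
# rows, so scheduler-side work scales with requests/sec * nodes.
requests_per_sec = 2
db_rows_processed_per_sec = requests_per_sec * nodes    # 20,000 rows/sec
rpc_updates_processed_per_sec = updates_per_sec         # ~167 updates/sec
print(db_rows_processed_per_sec / rpc_updates_processed_per_sec)
# -> 120.0, i.e. 60 * requests_per_sec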
>
>
> There are deployments in production (Bluehost) that are already bigger
than 10k nodes; AFAIK the last numbers I heard were 16k nodes, and they
didn't use our scheduler at all. So a better upper limit would be something
like 30k nodes.  At that scale we get 500 RPC broadcasts per second from
periodic updates (assuming a 60-second periodic update), plus updates from
state changes.  If we assume only 1% of compute nodes have instances that
are changing state, that is an additional 300 RPC broadcasts to the
schedulers per second.  So now we have 800 per second.  How many RPC
updates (from compute node to scheduler) per second can a single Python
thread handle without DB access? With DB access?
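
Spelling that arithmetic out (assumptions: 60-second periodic interval, 1%
of nodes changing instance state per second, one broadcast per change):

nodes = 30000
periodic_interval = 60.0

periodic_broadcasts = nodes / periodic_interval         # 500 per second
state_change_broadcasts = nodes * 0.01                  # 300 per second
total_broadcasts = periodic_broadcasts + state_change_broadcasts   # 800/sec

# Every broadcast is processed by *every* scheduler, so adding schedulers
# does not reduce the per-scheduler load.
print(periodic_broadcasts, state_change_broadcasts, total_broadcasts)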
>
> As for your second point, I don't follow; can you elaborate?
>
>
>
>
>>
>>
>>
>> >> Also OpenStack is already deeply invested in using the central DB
model for the state of the 'world' and while I am not against changing
that, I think we should evaluate that switch in a larger context.
>>
>> Step by step. As a first step we could just remove the
compute_node_get_all method, which would make OpenStack much more scalable
and fast.
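
A sketch of what could stand in for compute_node_get_all() in that model:
an in-memory host-state cache on each scheduler, fed by fanout RPC updates
(class and method names below are illustrative only):

import threading

class HostStateCache(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._states = {}   # host name -> dict of resources

    def update(self, host, resources):
        """Called from the RPC layer for every compute-node update."""
        with self._lock:
            self._states[host] = resources

    def get_all(self):
        """Stand-in for the old fetch-everything-from-the-DB call."""
        with self._lock:
            return dict(self._states)

def filter_hosts(cache, ram_mb, disk_gb):
    return [host for host, res in cache.get_all().items()
            if res.get('free_ram_mb', 0) >= ram_mb
            and res.get('free_disk_gb', 0) >= disk_gb]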
>
>
> Yes, step by step is how to fix something.  But before going in this
direction it is worth a larger discussion of how we *want* things to look
and what direction we should be moving in.  If we want to use this
model, we should consider where else it can help, other repercussions, etc.
>
>>
>>
>> By the way, please see the answers to your comments in the doc once more.
>>
>> Best regards,
>> Boris Pavlovic
>>
>> Mirantis Inc.
>>
>>
>>
>>
>>
>> On Sat, Jul 20, 2013 at 3:14 AM, Joe Gordon <joe.gordon0 at gmail.com>
wrote:
>>>
>>>
>>>
>>>
>>> On Fri, Jul 19, 2013 at 3:13 PM, Sandy Walsh <sandy.walsh at rackspace.com>
wrote:
>>>>
>>>>
>>>>
>>>> On 07/19/2013 05:36 PM, Boris Pavlovic wrote:
>>>> > Sandy,
>>>> >
>>>> > I don't think that we have such problems here,
>>>> > because the scheduler doesn't poll compute_nodes.
>>>> > The situation is different: compute_nodes notify the scheduler about
>>>> > their state (instead of updating their state in the DB).
>>>> >
>>>> > So, for example, if the scheduler sends a request to a compute_node,
>>>> > the compute_node is able to make an RPC call to the schedulers
>>>> > immediately (not after 60 sec).
>>>> >
>>>> > So there are almost no races.
>>>>
>>>> There are races that occur between the eventlet request threads. This
>>>> is why the scheduler has been switched to single threaded and we can
>>>> only run one scheduler.
>>>>
>>>> This problem may have been eliminated with the work that Chris Behrens
>>>> and Brian Elliott were doing, but I'm not sure.
>>>
>>>
>>>
>>> Speaking of Chris Behrens: "Relying on anything but the DB for current
memory free, etc, is just too laggy… so we need to stick with it, IMO."
http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html
>>>
>>> Although there is some elegance to the proposal here, I have some
concerns.
>>>
>>> If just using RPC broadcasts from compute to schedulers to keep track
of things, we get two issues:
>>>
>>> * How do you bring a new scheduler up in an existing deployment and
make it get the full state of the system?
>>> * Broadcasting RPC updates from compute nodes to the scheduler means
every scheduler has to process the same RPC message.  And if a deployment
hits the point where the number of compute updates is consuming 99 percent
of the scheduler's time, just adding another scheduler won't fix anything,
as it will get bombarded too.
>>>
>>> Also OpenStack is already deeply invested in using the central DB model
for the state of the 'world' and while I am not against changing that, I
think we should evaluate that switch in a larger context.
>>>
>>>
>>>>
>>>>
>>>> But certainly, the old approach of having the compute node broadcast
>>>> status every N seconds is not suitable and was eliminated a long time
ago.
>>>>
>>>> >
>>>> >
>>>> > Best regards,
>>>> > Boris Pavlovic
>>>> >
>>>> > Mirantis Inc.
>>>> >
>>>> >
>>>> >
>>>> > On Sat, Jul 20, 2013 at 12:23 AM, Sandy Walsh
>>>> > <sandy.walsh at rackspace.com> wrote:
>>>> >
>>>> >
>>>> >
>>>> >     On 07/19/2013 05:01 PM, Boris Pavlovic wrote:
>>>> >     > Sandy,
>>>> >     >
>>>> >     > Hm, I don't know that algorithm. But our approach doesn't have
>>>> >     > an exponential number of exchanges.
>>>> >     > I don't think that in a 10k-node cloud we will have a problem
>>>> >     > with 150 RPC calls/sec. Even at 100k we will have only 1.5k RPC
>>>> >     > calls/sec. Moreover, compute nodes already update their state
>>>> >     > in the DB through the conductor, which produces the same number
>>>> >     > of RPC calls.
>>>> >     >
>>>> >     > So I don't see any explosion here.
>>>> >
>>>> >     Sorry, I was commenting on Soren's suggestion from way back
>>>> >     (essentially listening on a separate exchange for each unique
>>>> >     flavor ... so no scheduler was needed at all). It was a great
>>>> >     idea, but fell apart rather quickly.
>>>> >
>>>> >     The existing approach the scheduler takes is expensive (asking
>>>> >     the db for state of all hosts) and polling the compute nodes
>>>> >     might be do-able, but you're still going to have latency problems
>>>> >     waiting for the responses (the states are invalid nearly
>>>> >     immediately, especially if a fill-first scheduling algorithm is
>>>> >     used). We ran into this problem before in an earlier scheduler
>>>> >     implementation. The round-tripping kills.
>>>> >
>>>> >     We have a lot of really great information on Host state in the
>>>> >     form of notifications right now. I think having a service (or
>>>> >     notification driver) listening for these and keeping the
>>>> >     HostState incrementally updated (and reported back to all of the
>>>> >     schedulers via the fanout queue) would be a better approach.
>>>> >
>>>> >     -S
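
A rough sketch of that notification-driven approach (the event type names
follow nova's notification conventions, but the handler, the payload
fields, and the fanout publishing step are simplified assumptions):

class HostStateTracker(object):
    def __init__(self):
        self.hosts = {}   # host name -> {'free_ram_mb': ..., 'free_disk_gb': ...}

    def handle(self, event_type, payload):
        """Apply one compute notification as an increment to host state."""
        host = payload['host']
        state = self.hosts.setdefault(
            host, {'free_ram_mb': 0, 'free_disk_gb': 0})
        if event_type == 'compute.instance.create.end':
            state['free_ram_mb'] -= payload['memory_mb']
            state['free_disk_gb'] -= payload['disk_gb']
        elif event_type == 'compute.instance.delete.end':
            state['free_ram_mb'] += payload['memory_mb']
            state['free_disk_gb'] += payload['disk_gb']
        # The updated (host, state) pair is what would be pushed to all
        # schedulers over the fanout queue.
        return host, state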
>>>> >
>>>> >
>>>> >     >
>>>> >     > Best regards,
>>>> >     > Boris Pavlovic
>>>> >     >
>>>> >     > Mirantis Inc.
>>>> >     >
>>>> >     >
>>>> >     > On Fri, Jul 19, 2013 at 11:47 PM, Sandy Walsh
>>>> >     > <sandy.walsh at rackspace.com> wrote:
>>>> >     >
>>>> >     >
>>>> >     >
>>>> >     >     On 07/19/2013 04:25 PM, Brian Schott wrote:
>>>> >     >     > I think Soren suggested this way back in Cactus to use
>>>> >     >     > MQ for compute node state rather than database and it
>>>> >     >     > was a good idea then.
>>>> >     >
>>>> >     >     The problem with that approach was the number of queues
>>>> >     >     went exponential as soon as you went beyond simple
>>>> >     >     flavors. Add Capabilities or other criteria and you get
>>>> >     >     an explosion of exchanges to listen to.
>>>> >     >
>>>> >     >
>>>> >     >
>>>> >     >     > On Jul 19, 2013, at 10:52 AM, Boris Pavlovic
>>>> >     >     > <boris at pavlovic.me> wrote:
>>>> >     >     >
>>>> >     >     >> Hi all,
>>>> >     >     >>
>>>> >     >     >>
>>>> >     >     >> At Mirantis, Alexey Ovtchinnikov and I are working on
>>>> >     >     >> nova scheduler improvements.
>>>> >     >     >>
>>>> >     >     >> As far as we can see, the scheduler currently has two
>>>> >     >     >> major issues:
>>>> >     >     >>
>>>> >     >     >> 1) Scalability. Factors that contribute to bad
>>>> >     >     >> scalability are these:
>>>> >     >     >> *) Every periodic task interval (60 sec by default),
>>>> >     >     >> each compute node updates its resource state in the DB.
>>>> >     >     >> *) On every boot request, the scheduler has to fetch
>>>> >     >     >> information about all compute nodes from the DB.
>>>> >     >     >>
>>>> >     >     >> 2) Flexibility. Flexibility suffers due to problems
>>>> >     >     >> with:
>>>> >     >     >> *) Adding new complex resources (such as big lists of
>>>> >     >     >> complex objects, e.g. required by PCI Passthrough
>>>> >     >     >> https://review.openstack.org/#/c/34644/5/nova/db/sqlalchemy/models.py)
>>>> >     >     >> *) Using different sources of data in the scheduler,
>>>> >     >     >> for example from cinder or ceilometer
>>>> >     >     >> (as required by the Volume Affinity Filter,
>>>> >     >     >> https://review.openstack.org/#/c/29343/)
>>>> >     >     >>
>>>> >     >     >>
>>>> >     >     >> We found a simple way to mitigate these issues by
>>>> >     >     >> avoiding DB usage for host state storage.
>>>> >     >     >>
>>>> >     >     >> A more detailed discussion of the problem and one
>>>> >     >     >> possible solution can be found here:
>>>> >     >     >>
>>>> >     >     >> https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit#
>>>> >     >     >>
>>>> >     >     >>
>>>> >     >     >> Best regards,
>>>> >     >     >> Boris Pavlovic
>>>> >     >     >>
>>>> >     >     >> Mirantis Inc.
>>>> >     >     >>
>>>> >     >     >
>>>> >     >     >
>>>> >     >     >
>>>> >     >
>>>> >     >
>>>> >     >
>>>> >     >
>>>> >     >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>