[openstack-dev] A simple way to improve nova scheduler

Boris Pavlovic boris at pavlovic.me
Wed Jul 31 11:00:15 UTC 2013


Hi Shane,

Thanks for implementing this new approach.
Yes, I agree that it solves the problem with the "JOIN".

But now I am worried about a new problem: db.compute_node_update() now
rewrites a "TEXT"-typed field on every call, which is likely to be really
slow.

So I have a question about the timing tests: did you measure just the joins,
or the joins running in parallel with N/60 compute_node_update() calls per
second?
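To make the concern concrete, here is a rough, hypothetical sketch (not the
nova code; sqlite3 stands in for MySQL, and the schemas are simplified) of the
two write paths: with a normalized compute_node_stats table, one stat change
is a single small row UPDATE, while with a JSON "stats" TEXT column every
update must re-serialize and rewrite the whole blob.

```python
import json
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compute_node_stats (node_id INT, key TEXT, value TEXT)")
conn.execute("CREATE TABLE compute_nodes (id INT PRIMARY KEY, stats TEXT)")

# ~20 stats per node, as in Shane's test setup
stats = {"stat_%d" % i: "value_%d" % i for i in range(20)}
for k, v in stats.items():
    conn.execute("INSERT INTO compute_node_stats VALUES (1, ?, ?)", (k, v))
conn.execute("INSERT INTO compute_nodes VALUES (1, ?)", (json.dumps(stats),))

N = 10000

# Normalized path: update one small row per stat change.
t0 = time.time()
for i in range(N):
    conn.execute(
        "UPDATE compute_node_stats SET value = ? "
        "WHERE node_id = 1 AND key = 'stat_0'", ("v%d" % i,))
normalized = time.time() - t0

# JSON-blob path: re-serialize and rewrite the whole TEXT column every time.
t0 = time.time()
for i in range(N):
    stats["stat_0"] = "v%d" % i
    conn.execute("UPDATE compute_nodes SET stats = ? WHERE id = 1",
                 (json.dumps(stats),))
blob = time.time() - t0

print("normalized: %.3fs, json blob: %.3fs" % (normalized, blob))
```

The actual cost difference would of course depend on MySQL, row sizes, and
the real update rate, which is why the parallel-update measurement matters.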

Also, we will need Russell's confirmation to merge such a big change right
before the release.
Russell, what do you think?

From what I know, since we don't have a clear solution for this issue, the
community agreed that it would be discussed at the coming summit.


Best regards,
Boris Pavlovic
--
Mirantis Inc.




On Wed, Jul 31, 2013 at 9:36 AM, Wang, Shane <shane.wang at intel.com> wrote:

>   Hi,
>
> I have a patchset ready for your review:
> https://review.openstack.org/#/c/38802/
>
> This patchset removes the table compute_node_stats and adds one more
> column "stats" to the table compute_nodes as a JSON dict. With that,
> compute_node_get_all() doesn't need to join another table when the nova
> schedulers call it.
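The read path this enables can be sketched roughly as follows (a hypothetical
illustration with sqlite3 standing in for the real nova DB API, and a
simplified schema): compute_node_get_all() becomes a single-table scan plus
json.loads(), with no join against compute_node_stats.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE compute_nodes (id INT PRIMARY KEY, hypervisor TEXT, stats TEXT)")
conn.execute("INSERT INTO compute_nodes VALUES (1, 'node1', ?)",
             (json.dumps({"num_instances": 3, "io_workload": 0}),))
conn.execute("INSERT INTO compute_nodes VALUES (2, 'node2', ?)",
             (json.dumps({"num_instances": 7, "io_workload": 2}),))

def compute_node_get_all(conn):
    """Single-table read; each row carries its stats as a JSON dict."""
    rows = conn.execute(
        "SELECT id, hypervisor, stats FROM compute_nodes ORDER BY id")
    return [{"id": i, "hypervisor": h, "stats": json.loads(s)}
            for i, h, s in rows]

nodes = compute_node_get_all(conn)
print(nodes[0]["stats"]["num_instances"])  # prints 3
```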
>
>
> My team has done some preliminary tests. The query time could be reduced
> to ~1.32 seconds from ~16.89 seconds, where we suppose there are 10K
> compute nodes and each node has 20 stats records in compute_node_stats.
>
> Thank you for your review. What do you think?
>
> Thanks.
>
> --
>
> Shane
>
> *From:* Joshua Harlow [mailto:harlowja at yahoo-inc.com]
> *Sent:* Thursday, July 25, 2013 5:36 AM
> *To:* OpenStack Development Mailing List; Boris Pavlovic
>
> *Subject:* Re: [openstack-dev] A simple way to improve nova scheduler
>
> As far as "send only when you have to" goes: that reminds me of this piece
> of work that could be resurrected, which slowed down the periodic updates
> when nothing was changing.
>
> https://review.openstack.org/#/c/26291/
>
> Could be brought back; the concept still feels useful imho. But maybe not
> to others :-P
>
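The concept behind that review can be sketched as follows (an illustrative
guess at the idea, not the actual patch: class name and intervals are made
up): back off the periodic report interval while the node's state is
unchanged, and snap back to the base interval as soon as something changes.

```python
class AdaptiveReporter(object):
    """Back off periodic reporting while nothing changes (illustrative only)."""

    def __init__(self, base=10, max_interval=120):
        self.base = base                  # seconds between reports when busy
        self.max_interval = max_interval  # cap on the backed-off interval
        self.interval = base
        self._last_state = None

    def next_interval(self, state):
        if state == self._last_state:
            # Nothing changed: double the interval, capped at max_interval.
            self.interval = min(self.interval * 2, self.max_interval)
        else:
            # State changed: report promptly again.
            self.interval = self.base
        self._last_state = state
        return self.interval

r = AdaptiveReporter()
print(r.next_interval({"vms": 1}))  # first report -> 10
print(r.next_interval({"vms": 1}))  # unchanged -> 20
print(r.next_interval({"vms": 1}))  # unchanged -> 40
print(r.next_interval({"vms": 2}))  # changed -> back to 10
```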
>
> *From: *Boris Pavlovic <boris at pavlovic.me>
> *Reply-To: *OpenStack Development Mailing List <
> openstack-dev at lists.openstack.org>
> *Date: *Wednesday, July 24, 2013 12:12 PM
> *To: *OpenStack Development Mailing List <
> openstack-dev at lists.openstack.org>
> *Subject: *Re: [openstack-dev] A simple way to improve nova scheduler
>
> Hi Mike,
>
> On Wed, Jul 24, 2013 at 1:01 AM, Mike Wilson <geekinutah at gmail.com> wrote:
>
>
> Again I can only speak for qpid, but it's not really a big load on the
> qpidd server itself. I think the issue is that the updates come in serially
> to each scheduler that you have running. We don't process them quickly
> enough for them to do any good, which is why the scheduler falls back to
> the db lookup. You can see this for yourself using the fake hypervisor:
> launch a bunch of simulated nova-compute services and a nova-scheduler on
> the same host, and even with 1k or so nodes you will notice the latency
> between the update being sent and the update actually meaning anything to
> the scheduler.
>
>
> I think a few points that have been brought up could mitigate this quite a
> bit. My personal view is the following:
>
> - Only update when you have to (i.e. 10k nodes all sending an update every
> periodic interval is heavy; only send when something has changed)
>
> - Don't fan out to all the schedulers; update a single scheduler which in
> turn updates a shared store that is fast, such as memcache
>
> I guess that effectively is what you are proposing, with the added twist of
> the shared store.
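A minimal sketch of that shared-store shape (a plain dict stands in for
memcached here; real code would use python-memcached or redis, plus TTLs and
atomicity concerns this toy ignores, and all class names are made up):
compute nodes send updates to a single aggregator, which writes the shared
store, and every scheduler reads host state from the store instead of relying
on a fanout to each scheduler.

```python
class SharedStore(object):
    """Stand-in for memcached: set/get by key."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class Aggregator(object):
    """The single consumer of compute-node updates; writes the shared store."""

    def __init__(self, store):
        self.store = store

    def on_update(self, host, stats):
        self.store.set("host:%s" % host, stats)

class Scheduler(object):
    """Schedulers read host state from the shared store; no fanout needed."""

    def __init__(self, store):
        self.store = store

    def host_state(self, host):
        return self.store.get("host:%s" % host)

store = SharedStore()
agg = Aggregator(store)
agg.on_update("node1", {"free_ram_mb": 2048})

s1, s2 = Scheduler(store), Scheduler(store)
print(s1.host_state("node1"))  # both schedulers see the same state
print(s2.host_state("node1"))
```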
>
>
> Absolutely agree with this, especially with using memcached (or redis) as
> common storage for all schedulers.
>
> Best regards,
>
> Boris Pavlovic
>
> ---
>
> Mirantis Inc.
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>