<div dir="ltr">Again I can only speak for qpid, but it's not really a big load on the qpidd server itself. I think the issue is that the updates come in serially into each scheduler that you have running. We don't process those quickly enough for it to do any good, which is why the lookup from db. You can see this for yourself using the fake hypervisor, launch yourself a bunch of simulated nova-compute, launch a nova-scheduler on the same host and even with 1k or so you will notice the latency between the update being sent and the update actually meaning anything for the scheduler.<div>
I think a few points that have been brought up could mitigate this quite a bit. My personal view is the following:

-Only update when you have to (i.e. 10k nodes all sending an update every periodic interval is heavy; only send when something has actually changed)
-Don't fan out to all the schedulers; update a single scheduler which in turn updates a fast shared store such as memcache (a rough sketch of that pattern is below)

I guess that is effectively what you are proposing, with the added twist of the shared store.
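To be concrete about the second point, something like this is what I have in mind. It's only a sketch: the key names, the shape of the update payload, and the two helper functions are invented for illustration, and it assumes the python-memcached client.

import json
import time

import memcache

# Shared store that every scheduler can reach.
mc = memcache.Client(['127.0.0.1:11211'])


def on_compute_update(host, capabilities):
    # Called by the single "writer" scheduler whenever a compute node
    # reports in; mirrors the node's state into memcache instead of
    # fanning the update out to every scheduler.
    record = dict(capabilities)
    record['updated_at'] = time.time()
    mc.set('host_state:%s' % host, json.dumps(record))


def get_host_state(host):
    # Any scheduler reads the latest view from the shared store rather
    # than doing the expensive lookup from the DB.
    raw = mc.get('host_state:%s' % host)
    return json.loads(raw) if raw else None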
-Mike

On Tue, Jul 23, 2013 at 2:25 PM, Boris Pavlovic <boris@pavlovic.me> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Joe,<div>Sure we will.<br></div><div><br></div><div>Mike, </div><div>Thanks for sharing information about scalability problems, presentation was great. <br>
</div><div>Also could you say what do you think is 150 req/sec is it big load for qpid or rabbit? I think it is just nothing..</div><div class="im">
<div><br></div><div><br></div><div>Best regards,</div><div>Boris Pavlovic</div><div>---</div><div>Mirantis Inc.</div><div><br></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">
On Wed, Jul 24, 2013 at 12:17 AM, Joe Gordon <span dir="ltr"><<a href="mailto:joe.gordon0@gmail.com" target="_blank">joe.gordon0@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote"><div>On Tue, Jul 23, 2013 at 1:09 PM, Boris Pavlovic <span dir="ltr"><<a href="mailto:boris@pavlovic.me" target="_blank">boris@pavlovic.me</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Ian,<div><br></div><div>There are serious scalability and performance problems with DB usage in current scheduler.</div>
<div>Rapid Updates + Joins makes current solution absolutely not scalable. </div><div><br>
</div><div>Bleuhost example just shows personally for me just a trivial thing. (It just won't work)</div><div><br></div><div>We will add tomorrow antother graphic: </div><div>Avg user req / sec in current and our approaches. <br>
</div><div></div></div></blockquote><div><br></div></div><div>Will you be releasing your code to generate the results? Without that the graphic isn't very useful</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div>I hope it will help you to better understand situation. </div><div><br></div><div><br></div><div>Joshua,</div><div><br></div><div>Our current discussion is about could we remove information about compute nodes from Nova saftly.</div>
<div>Both our and your approach will remove data from nova DB. </div><div><br></div><div>Also your approach had much more:</div><div>1) network load</div><div>2) latency</div><div>3) one more service (memcached)</div><div>
<br></div><div>So I am not sure that it is better then just send directly to scheduler information. </div><div><br></div><div><br></div><div>Best regards,</div><div>Boris Pavlovic</div><div>---</div><div>Mirantis Inc. </div>
<div><br></div><div><br></div><div><br></div><div><br></div></div><div><div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon <span dir="ltr"><<a href="mailto:joe.gordon0@gmail.com" target="_blank">joe.gordon0@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><p dir="ltr"><br>
On Jul 23, 2013 3:44 PM, "Ian Wells" <<a href="mailto:ijw.ubuntu@cack.org.uk" target="_blank">ijw.ubuntu@cack.org.uk</a>> wrote:<br>
><br>
> > * periodic updates can overwhelm things. Solution: remove unneeded updates,<br>
> > most scheduling data only changes when an instance does some state change.<br>
><br>
> It's not clear that periodic updates do overwhelm things, though.<br>
> Boris ran the tests. Apparently 10k nodes updating once a minute<br>
> extend the read query by ~10% (the main problem being the read query<br>
> is abysmal in the first place). I don't know how much of the rest of<br>
> the infrastructure was involved in his test, though (RabbitMQ,<br>
> Conductor).</p>
</div><p dir="ltr">A great openstack at scale talk, that covers the scheduler <a href="http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111" target="_blank">http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111</a><br>
</p><div><div>
<p dir="ltr">><br>
> There are reasonably solid reasons why we would want an alternative to<br>
> the DB backend, but I'm not sure the update rate is one of them. If<br>
> we were going for an alternative the obvious candidate to my mind<br>
> would be something like ZooKeeper (particularly since in some setups<br>
> it's already a channel between the compute hosts and the control<br>
> server).<br>
> --<br>
> Ian.<br>
><br>