[openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

Steve Baker sbaker at redhat.com
Sun Aug 7 22:11:29 UTC 2016


On 05/08/16 21:48, Ricardo Rocha wrote:
> Hi.
>
> Quick update: 1000 nodes and 7 million reqs/sec :) - the number of
> requests should be even higher, but we hit some internal issues. We
> have a submission for Barcelona that will provide a lot more details.
>
> But a couple of questions came up during the exercise:
>
> 1. Do we really need a volume in the VMs? On large clusters this is a 
> burden; shouldn't local storage be enough?
>
> 2. We observe a significant delay (~10 min, which is half the total 
> time to deploy the cluster) in Heat while it seems to be crunching the 
> kube_minions nested stacks. Once that's done, it still adds new stacks 
> gradually, so it doesn't look like it precomputed all the info in advance.
>
> Anyone tried to scale Heat to stacks this size? We end up with a stack 
> with:
> * 1000 nested stacks (depth 2)
> * 22000 resources
> * 47008 events
>
> And we already changed most of the timeout/retry values for RPC to get 
> this working.
>
> This delay is already visible in clusters of 512 nodes, but 40% of the 
> deployment time at 1000 nodes seems like something we could improve. Any 
> hints on Heat configuration optimizations for large stacks are very welcome.
>
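On (1): as far as I know those volumes only exist because the baymodel
sets a docker-volume-size, so they are a bay-definition choice rather
than something Magnum strictly requires. A minimal sketch with the
Mitaka-era magnum CLI, where all the names and sizes are illustrative
assumptions, not your actual setup:

  # Define the bay template; --docker-volume-size is what attaches a
  # Cinder volume to each node for docker storage.
  magnum baymodel-create --name k8smodel \
      --image-id fedora-atomic-latest \
      --keypair-id mykey \
      --external-network-id public \
      --flavor-id m1.small \
      --docker-volume-size 5 \
      --network-driver flannel \
      --coe kubernetes

  # Create the bay itself at scale.
  magnum bay-create --name k8sbay --baymodel k8smodel --node-count 1000

Dropping --docker-volume-size should leave the minions on local storage
only, though I have not tried that at your scale.
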
On (2): yes, we recommend you set the following in /etc/heat/heat.conf 
under [DEFAULT]:

max_resources_per_stack = -1

Enforcing this limit has a very high overhead on large stacks; we make 
the same change in the TripleO undercloud.
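
For reference, a minimal heat.conf sketch for this kind of scale. Only
max_resources_per_stack = -1 is the recommendation above; the other
options exist in Heat and oslo.messaging, but the values here are
illustrative guesses for a large deployment, not numbers we have
validated:

  [DEFAULT]
  # Skip the per-stack resource count check; computing the count walks
  # the whole tree of nested stacks, which is very expensive at this size.
  max_resources_per_stack = -1

  # Illustrative only: give slow nested-stack operations more time
  # before an RPC call is abandoned (seconds; oslo.messaging option).
  rpc_response_timeout = 600

  # Illustrative only: more engine workers to crunch the kube_minions
  # nested stacks in parallel.
  num_engine_workers = 8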

> Cheers,
>   Ricardo
>
> On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <btopol at us.ibm.com> wrote:
>
>     Thanks Ricardo! This is very exciting progress!
>
>     --Brad
>
>
>     Brad Topol, Ph.D.
>     IBM Distinguished Engineer
>     OpenStack
>     (919) 543-0646
>     Internet: btopol at us.ibm.com
>     Assistant: Kendra Witherspoon (919) 254-0680
>
>     From: Ton Ngo/Watson/IBM at IBMUS
>     To: "OpenStack Development Mailing List \(not for usage
>     questions\)" <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>
>     Date: 06/17/2016 12:10 PM
>     Subject: Re: [openstack-dev] [magnum] 2 million requests / sec,
>     100s of nodes
>
>     Thanks Ricardo for sharing the data, this is really encouraging!
>     Ton,
>
>     From: Ricardo Rocha <rocha.porto at gmail.com>
>     To: "OpenStack Development Mailing List (not for usage questions)"
>     <openstack-dev at lists.openstack.org>
>     Date: 06/17/2016 08:16 AM
>     Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s
>     of nodes
>
>     Hi.
>
>     Just thought the Magnum team would be happy to hear :)
>
>     We had access to some hardware over the last couple of days, and
>     tried some tests with Magnum and Kubernetes - following an original
>     blog post from the Kubernetes team.
>
>     Got a 200-node Kubernetes bay (800 cores) reaching 2 million
>     requests / sec.
>
>     Check here for some details:
>     https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html
>
>     We'll try bigger in a couple of weeks, also using the Rally work
>     from Winnie, Ton and Spyros to see where it breaks. We've already
>     identified a couple of issues and will file bugs or push patches
>     for them. If you have ideas or suggestions for the next tests, let
>     us know.
>
>     Magnum is looking pretty good!
>
>     Cheers,
>     Ricardo
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

