[openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

Roman Vasilets rvasilets at mirantis.com
Sun Aug 7 20:42:00 UTC 2016


Hi,
  Great to hear it, from the Rally team's point of view =)

Best regards, Roman Vasylets

On Sun, Aug 7, 2016 at 10:55 PM, Ricardo Rocha <rocha.porto at gmail.com>
wrote:

> Hi Ton.
>
> I think we should. Also in cases where multiple volume types are available
> (in our case with different iops) there would be additional parameters
> required to select the volume type. I'll add it this week.
>
> It's a detail though: spawning container clusters with Magnum is now super
> easy (and fast!).
>
> Cheers,
>   Ricardo
>
> On Fri, Aug 5, 2016 at 5:11 PM, Ton Ngo <ton at us.ibm.com> wrote:
>
>> Hi Ricardo,
>> For your question 1, you can modify the Heat template to not create the
>> Cinder volume and tweak the call to
>> configure-docker-storage.sh to use local storage. It should be fairly
>> straightforward. You just need to make
>> sure the local storage of the flavor is sufficient to host the containers
>> in the benchmark.
>> If you think this is a common scenario, we can open a blueprint for this
>> option.
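
A rough sketch of the change Ton describes, for anyone trying this: the resource and parameter names below are assumptions based on Magnum's kubeminion Heat template and may differ in your tree.

```yaml
# Hypothetical excerpt of a modified kubeminion.yaml: the Cinder volume
# resource is removed and configure-docker-storage.sh is pointed at the
# flavor's local (ephemeral) disk instead.
resources:

  # docker_volume (OS::Cinder::Volume) and its volume attachment are
  # dropped here; containers then live on the instance's local disk, so
  # the flavor must provide enough ephemeral storage for the benchmark.

  configure_docker_storage:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: {get_file: fragments/configure-docker-storage.sh}
          params:
            # point the script at the ephemeral device instead of the
            # attached Cinder volume (the device name is an assumption)
            "$DOCKER_DEV": /dev/vdb
```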
>> Ton,
>>
>>
>> From: Ricardo Rocha <rocha.porto at gmail.com>
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org>
>> Date: 08/05/2016 04:51 AM
>>
>> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
>> nodes
>> ------------------------------
>>
>>
>>
>> Hi.
>>
>> Quick update: 1000 nodes and 7 million reqs/sec :) - the number of
>> requests should be higher still, but we hit some internal issues. We have
>> a submission for Barcelona with a lot more details.
>>
>> A couple of questions came up during the exercise:
>>
>> 1. Do we really need a Cinder volume in the VMs? On large clusters this
>> is a burden; local storage alone should be enough.
>>
>> 2. We observe a significant delay (~10 min, about half the total time to
>> deploy the cluster) in Heat when it seems to be crunching the
>> kube_minions nested stacks. Once that's done, it still adds new stacks
>> gradually, so it doesn't look like it precomputed all the info in
>> advance.
>>
>> Has anyone tried scaling Heat to stacks this size? We end up with a
>> stack with:
>> * 1000 nested stacks (depth 2)
>> * 22000 resources
>> * 47008 events
>>
>> We already changed most of the RPC timeout/retry values to get this
>> working.
>>
>> This delay is already visible in clusters of 512 nodes, but 40% of the
>> total time at 1000 nodes seems like something we could improve. Any hints
>> on Heat configuration optimizations for large stacks are very welcome.
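
In case it helps others running Heat at this scale, here is an illustrative heat.conf fragment with the kind of options meant above; the values are examples only, not recommendations, and defaults vary by release:

```ini
# Illustrative heat.conf tuning for very large stacks (example values).
[DEFAULT]
# more engine workers to process nested kube_minion stacks in parallel
num_engine_workers = 16
# raise the oslo.messaging RPC timeout so calls on big stacks don't expire
rpc_response_timeout = 600
# default max_resources_per_stack is 1000, far below the ~22000 resources
# of a stack like the one above; -1 disables the limit
max_resources_per_stack = -1
# allow more stacks per tenant than the default of 100
max_stacks_per_tenant = 2000

[database]
# larger SQLAlchemy pool for the extra engine workers
max_pool_size = 32
max_overflow = 64
```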
>>
>> Cheers,
>>   Ricardo
>>
>> On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <btopol at us.ibm.com>
>> wrote:
>>
>>    Thanks Ricardo! This is very exciting progress!
>>
>>    --Brad
>>
>>
>>    Brad Topol, Ph.D.
>>    IBM Distinguished Engineer
>>    OpenStack
>>    (919) 543-0646
>>    Internet: btopol at us.ibm.com
>>    Assistant: Kendra Witherspoon (919) 254-0680
>>
>>
>>    From: Ton Ngo/Watson/IBM at IBMUS
>>    To: "OpenStack Development Mailing List (not for usage questions)" <
>>    openstack-dev at lists.openstack.org>
>>    Date: 06/17/2016 12:10 PM
>>    Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s
>>    of nodes
>>
>>
>>    ------------------------------
>>
>>
>>
>>    Thanks Ricardo for sharing the data, this is really encouraging!
>>    Ton,
>>
>>
>>    From: Ricardo Rocha <rocha.porto at gmail.com>
>>    To: "OpenStack Development Mailing List (not for usage questions)" <
>>    openstack-dev at lists.openstack.org>
>>    Date: 06/17/2016 08:16 AM
>>    Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
>>    nodes
>>    ------------------------------
>>
>>
>>
>>    Hi.
>>
>>    Just thought the Magnum team would be happy to hear :)
>>
>>    We had access to some hardware for the last couple of days, and tried
>>    some tests with Magnum and Kubernetes, following an earlier blog post
>>    from the Kubernetes team.
>>
>>    We got a 200-node Kubernetes bay (800 cores) reaching 2 million
>>    requests / sec.
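
As a back-of-envelope check of what those headline numbers mean per machine (simple arithmetic on the figures above):

```python
# Per-node and per-core throughput implied by the figures above:
# a 200-node (800-core) Kubernetes bay serving 2 million requests/sec.
nodes = 200
cores = 800
total_rps = 2_000_000

rps_per_node = total_rps / nodes  # 10000.0
rps_per_core = total_rps / cores  # 2500.0

print(f"{rps_per_node:.0f} req/s per node, {rps_per_core:.0f} req/s per core")
```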
>>
>>    Check here for some details:
>>
>>    https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html
>>
>>    We'll try bigger clusters in a couple of weeks, also using the Rally
>>    work from Winnie, Ton and Spyros to see where it breaks. We've already
>>    identified a couple of issues and will file bugs or push patches for
>>    those. If you have ideas or suggestions for the next tests, let us
>>    know.
>>
>>    Magnum is looking pretty good!
>>
>>    Cheers,
>>    Ricardo
>>
>>    __________________________________________________________________________
>>    OpenStack Development Mailing List (not for usage questions)
>>    Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>
>
>

