    <div class="moz-cite-prefix">On 05/08/16 21:48, Ricardo Rocha wrote:<br>
    </div>
> Hi.
>
> Quick update: 1000 nodes and 7 million reqs/sec :) - and the number of
> requests should be higher, but we had some internal issues. We have a
> submission for Barcelona to provide a lot more details.
>
> But a couple of questions came up during the exercise:
>
> 1. Do we really need a volume in the VMs? On large clusters this is a
> burden; wouldn't local storage alone be enough?
>
> 2. We observe a significant delay (~10 min, which is half the total time
> to deploy the cluster) in Heat when it seems to be crunching the
> kube_minions nested stacks. Once that's done, it still adds new stacks
> gradually, so it doesn't look like it precomputed all the info in advance.
>
> Has anyone tried to scale Heat to stacks this size? We end up with a
> stack with:
> * 1000 nested stacks (depth 2)
> * 22000 resources
> * 47008 events
>
> We have already changed most of the timeout/retry values for RPC to get
> this working.
>
> This delay is already visible in clusters of 512 nodes, but 40% of the
> deployment time at 1000 nodes seems like something we could improve. Any
> hints on Heat configuration optimizations for large stacks are very
> welcome.
>
Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:

max_resources_per_stack = -1

Enforcing this limit on large stacks has a very high overhead, so we make
the same change in the TripleO undercloud too.
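As a minimal sketch, the relevant section of heat.conf would look like the
snippet below. Only max_resources_per_stack is the actual recommendation
here; the RPC and worker options are illustrative placeholders for the kind
of timeout/retry tuning you mention, and the values would need to be
matched to your environment:

    # /etc/heat/heat.conf
    [DEFAULT]
    # Disable the per-stack resource limit; enforcing it requires counting
    # resources across every nested stack, which is what gets expensive on
    # trees this large.
    max_resources_per_stack = -1

    # Illustrative placeholders only (not part of the recommendation
    # above): two options commonly raised when heat-engine handles very
    # large stacks over RPC.
    rpc_response_timeout = 600
    num_engine_workers = 8
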
> Cheers,
>   Ricardo
>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Sun, Jun 19, 2016 at 10:59 PM, Brad
          Topol <span dir="ltr"><<a moz-do-not-send="true"
              href="mailto:btopol@us.ibm.com" target="_blank">btopol@us.ibm.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Thanks Ricardo! This is very exciting progress!
>>
>> --Brad
>>
>> Brad Topol, Ph.D.
>> IBM Distinguished Engineer
>> OpenStack
>> (919) 543-0646
>> Internet: btopol@us.ibm.com
>> Assistant: Kendra Witherspoon (919) 254-0680
>>
>> From: Ton Ngo/Watson/IBM@IBMUS
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Date: 06/17/2016 12:10 PM
>> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes
>>
>> Thanks Ricardo for sharing the data, this is really encouraging!
>> Ton,
>>
>> From: Ricardo Rocha <rocha.porto@gmail.com>
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Date: 06/17/2016 08:16 AM
>> Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes
>>
>> Hi.
>>
>> Just thought the Magnum team would be happy to hear :)
>>
>> We had access to some hardware for the last couple of days and tried
>> some tests with Magnum and Kubernetes, following an original blog post
>> from the Kubernetes team.
>>
>> We got a 200-node Kubernetes bay (800 cores) reaching 2 million
>> requests / sec.
>>
>> Check here for some details:
>> https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html
>>
>> We'll try bigger in a couple of weeks, also using the Rally work from
>> Winnie, Ton and Spyros to see where it breaks. We've already identified
>> a couple of issues and will file bugs or push patches for those. If you
>> have ideas or suggestions for the next tests, let us know.
>>
>> Magnum is looking pretty good!
>>
>> Cheers,
>> Ricardo

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev