[openstack-dev] [Fuel] Task-Based Deployment Is at Least Twice as Fast

Vladimir Kozhukalov vkozhukalov at mirantis.com
Mon Feb 8 16:58:50 UTC 2016


+1 to enable it ASAP.

It will also speed up our deployment tests (~1 hour vs. ~2.5 hours).

Vladimir Kozhukalov

On Mon, Feb 8, 2016 at 7:35 PM, Bulat Gaifullin <bgaifullin at mirantis.com>
wrote:

> +1.
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> > On 08 Feb 2016, at 19:05, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
> >
> > Hey Fuelers,
> >
> > When are we going to enable it? I think since HCF has passed for
> > stable/8.0, it's time to enable task-based deployment for the master
> > branch.
> >
> > Opinions?
> >
> > - Igor
> >
> > On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya <bdobrelia at mirantis.com>
> wrote:
> >> On 02.02.2016 17:35, Alexey Shtokolov wrote:
> >>> Hi Fuelers!
> >>>
> >>> As you may be aware, since [0] Fuel has implemented a new orchestration
> >>> engine [1]. We switched the deployment paradigm from role-based (aka
> >>> granular) to task-based, and Fuel can now deploy all nodes
> >>> simultaneously, using cross-node dependencies between deployment tasks.
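> >>>
> >>> To give a flavour of what this looks like (a hypothetical fragment only;
> >>> the exact schema and field names are defined in the spec [1] and may
> >>> differ from this sketch), a task can declare a dependency on a task that
> >>> runs on another node, and the engine schedules it as soon as that
> >>> dependency is satisfied:
> >>>
> >>> - id: openstack-haproxy          # hypothetical task id
> >>>   type: puppet
> >>>   cross-depends:
> >>>     - name: database             # wait for the 'database' task...
> >>>       role: [primary-controller] # ...running on the primary controller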
> >>
> >> That is great news! Please do not forget about docs updates as well.
> >> Those docs are always forgotten like poor orphans... I submitted a patch
> >> [0] to the MOS docs; please review it and, if possible, add more details
> >> about the impact on plugins as well.
> >>
> >> [0] https://review.fuel-infra.org/#/c/16509/
> >>
> >>>
> >>> This feature is experimental in Fuel 8.0 and will be enabled by default
> >>> in Fuel 9.0.
> >>>
> >>> Allow me to show you the results. We ran benchmarks on our bare-metal
> >>> lab [2]:
> >>>
> >>> Case #1. 3 controllers + 7 computes w/ ceph.
> >>> Task-based deployment takes *~38* minutes vs. *~1h15m* for granular
> >>> (*~2* times faster).
> >>> Here and below, the deployment time is the average over 10 runs.
> >>>
> >>> Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
> >>> Task-based deployment takes *~41* minutes vs. *~1h32m* for granular
> >>> (*~2.24* times faster).
> >>>
> >>>
> >>>
> >>> We also took measurements for Fuel CI test cases, using the standard BVT
> >>> layout (master node + 3 controllers + 3 computes w/ ceph, all in qemu
> >>> VMs on a single host):
> >>>
> >>> Fuel CI slaves with *4* cores: *~1.1* times faster
> >>> (with only 4 cores for 7 VMs, the VMs compete for CPU resources, which
> >>> erodes most of the gain from task-based deployment)
> >>>
> >>> Fuel CI slaves with *6* cores: *~1.6* times faster
> >>>
> >>> Fuel CI slaves with *12* cores: *~1.7* times faster
> >>
> >> These are really outstanding results!
> >> (tl;dr)
> >> I believe the next step may be to leverage the "external install & svc
> >> management" feature (example [1]) of the Liberty release (7.0.0) of the
> >> Puppet OpenStack (PO) modules. That way we could use separate,
> >> concurrent, cross-depends-based tasks *within a single node* as well, like:
> >> - task: install_all_packages - a singleton task for a node,
> >> - task: [configure_x, for each x] - concurrent for a node,
> >> - task: [manage_service_x, for each x] - some may be concurrent for a
> >> node, while others must be serialized (see the sketch below).
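> >>
> >> As a rough illustration (assuming a PO class that exposes the usual
> >> manage_service/enabled flags; parameter names vary between modules, and
> >> this is only a sketch rather than the exact change from [1]), a
> >> configuration-focused task could disable service management in the class
> >> and leave starting/restarting the service to a separate, serialized task:
> >>
> >> # apply configuration only; a separate task manages the service later
> >> class { '::neutron::server':
> >>   manage_service => false,  # assumed flag: do not manage the service here
> >>   enabled        => false,
> >> }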
> >>
> >> So, one might use "--tags" to split a manifest into concurrent puppet
> >> runs and make things go even faster, for example:
> >>
> >> # cat test.pp
> >> notify {"A": tag => "a" }
> >> notify {"B": tag => "b" }
> >>
> >> # puppet apply test.pp
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >> Notice: B
> >> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
> >>
> >> # puppet apply test.pp --tags a
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >>
> >> # puppet apply test.pp --tags a & puppet apply test.pp --tags b
> >> Notice: B
> >> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >>
> >> Running the tagged applies in parallel is supposed to be faster, although
> >> not for this trivial example.
> >>
> >> [1] https://review.openstack.org/#/c/216926/3/manifests/init.pp
> >>
> >>>
> >>> You can see additional information and charts in the presentation [3].
> >>>
> >>> [0] - http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
> >>> [1] - https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
> >>> [2] - 3 x HP ProLiant DL360p Gen8 (Xeon E5, 6 cores/64GB/SSD) + 7 x HP
> >>> ProLiant DL320p Gen8 (Xeon E3, 4 cores/8-16GB/HDD)
> >>> [3] - https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE
> >>>
> >>> ---
> >>> WBR, Alexey Shtokolov
> >>>
> >>
> >>
> >> --
> >> Best regards,
> >> Bogdan Dobrelya,
> >> Irc #bogdando
> >>
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>