[TripleO] Scaling node counts with only Ansible (N=1)

James Slagle james.slagle at gmail.com
Fri Jul 12 16:43:13 UTC 2019


On Fri, Jul 12, 2019 at 9:46 AM David Peacock <dpeacock at redhat.com> wrote:
>
> Hi James,
>
> On Wed, Jul 10, 2019 at 4:20 PM James Slagle <james.slagle at gmail.com> wrote:
>>
>> There's been a fair amount of recent work around simplifying our Heat
>> templates and migrating the software configuration part of our
>> deployment entirely to Ansible.
>>
>> As part of this effort, it became apparent that we could render much
>> of the data that we need out of Heat in a way that is generic per
>> node, and then have Ansible render the node specific data during
>> config-download runtime.
>
>
> I find this endeavour very exciting.  Do you have any early indications of performance gains that you can share?

No hard numbers yet, but I can say that I can get to the Ansible stage
of the deployment with any number of nodes on an undercloud that just
meets the minimum requirements. This is significant because previously
we could not get to this stage without first deploying a huge Heat
stack, which meant either a lot of physical resources, tuning, and
tweaking, or going the undercloud minion route.
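
To make that idea a bit more concrete, here's a minimal, purely
illustrative Python sketch (not the actual TripleO templates or
tripleo-ansible code; every name and value in it is made up). It just
shows the pattern: one generic config per role comes out of Heat, and
the node-specific values only get filled in at config-download runtime
from per-node variables, much like Ansible resolving hostvars:

from jinja2 import Template

# What Heat renders: one generic template per *role*, independent of
# how many nodes there are. (Illustrative only.)
role_generic_config = {
    "Controller": Template("fqdn: {{ fqdn }}\nctlplane_ip: {{ ctlplane_ip }}\n"),
    "Compute": Template("fqdn: {{ fqdn }}\nctlplane_ip: {{ ctlplane_ip }}\n"),
}

# Node-specific data supplied at deployment time (think of the Ansible
# inventory / hostvars), not baked into the Heat stack. Made-up values.
nodes = [
    {"name": "controller-0", "role": "Controller",
     "fqdn": "controller-0.example.com", "ctlplane_ip": "192.168.24.10"},
    {"name": "compute-0", "role": "Compute",
     "fqdn": "compute-0.example.com", "ctlplane_ip": "192.168.24.20"},
]

# "config-download runtime": render the node-specific config per node.
for node in nodes:
    print("--- %s ---" % node["name"])
    print(role_generic_config[node["role"]].render(**node))

Nothing on the Heat side needs to know how many nodes exist; only the
loop at the end does, and that runs at config-download time.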

Also, it's less about performance and more about scale.

Certainly the Heat stack operation will be much faster as the number
of nodes in the deployment increases. In fact, the stack operation time
will be constant with respect to the number of nodes. It will depend on
the number of *roles*, but those are typically 5 or fewer per
deployment, and the most I've seen is 12.
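
As a rough back-of-envelope illustration of why the stack operation
time stays flat (the per-role and per-node resource counts below are
entirely made up, not real TripleO figures):

RESOURCES_PER_ROLE = 50   # assumed for illustration only
RESOURCES_PER_NODE = 30   # assumed for illustration only

def stack_resources(num_nodes, num_roles, per_node_rendering):
    """Rough count of Heat resources in the overcloud stack."""
    if per_node_rendering:
        # Old model: the stack grows with the node count.
        return num_roles * RESOURCES_PER_ROLE + num_nodes * RESOURCES_PER_NODE
    # New model: the stack depends only on the number of roles.
    return num_roles * RESOURCES_PER_ROLE

for n in (10, 100, 300, 1000):
    old = stack_resources(n, num_roles=5, per_node_rendering=True)
    new = stack_resources(n, num_roles=5, per_node_rendering=False)
    print("%5d nodes: old ~%d resources, new ~%d resources" % (n, old, new))

With per-node rendering the resource count (and with it the stack
operation time) keeps climbing as nodes are added; with per-role
rendering it stays the same no matter how many nodes you deploy.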

The total work done by Ansible does increase as we move more logic
into roles and tasks. However, I expect the total Ansible run time to
remain roughly equivalent to what we have today, since the sum of the
configuration that Ansible applies is roughly the same.

In terms of scale, however, this approach lets us move beyond the ~300
node limit we're at today, and it keeps the Heat stack time constant
instead of growing with the node count.

-- 
-- James Slagle
--
