On Wed, Feb 10, 2021 at 1:05 PM Dan Smith <dms@danplanet.com> wrote:
> Here's the timing I see locally (in seconds):
> Vanilla devstack: 775
> Client service alone: 529
> Parallel execution: 527
> Parallel client service: 465
>
> Most of the difference between the last two is shorter async_wait
> times because the deployment steps are taking less time. So not quite
> as much as before, but still a decent increase in speed.

Yeah, cool, I think you're right that we'll just serialize the
calls. It may not be worth the complexity, but if we make the OaaS
server able to do a few things in parallel, then we'll regain a little
more perf because we'll go back to overlapping the *server* side of
things. Creating flavors, volume types, and networks, and uploading the
image to glance are all things that should be doable in parallel in the
server projects.
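
For concreteness, here's a rough sketch of what overlapping those
server-side calls could look like from the client, assuming
openstacksdk's usual proxy methods and a clouds.yaml entry named
'devstack-admin'. The resource names, sizes, and image file below are
made-up placeholders, not what devstack actually creates:

  # Hypothetical sketch: fire the independent setup calls concurrently
  # so the server-side work (nova, cinder, neutron, glance) overlaps.
  from concurrent.futures import ThreadPoolExecutor

  import openstack

  conn = openstack.connect(cloud='devstack-admin')

  def create_flavor():
      return conn.compute.create_flavor(
          name='m1.tiny', ram=512, vcpus=1, disk=1)

  def create_volume_type():
      return conn.block_storage.create_type(name='lvmdriver-1')

  def create_network():
      return conn.network.create_network(name='private')

  def upload_image():
      return conn.image.create_image(
          name='cirros', filename='cirros.img',
          disk_format='qcow2', container_format='bare')

  with ThreadPoolExecutor(max_workers=4) as pool:
      futures = [pool.submit(fn) for fn in
                 (create_flavor, create_volume_type,
                  create_network, upload_image)]
      # .result() re-raises any exception from the worker threads, so
      # a failed call still fails the run loudly.
      results = [f.result() for f in futures]

Whether sharing one Connection across threads is safe depends on the
underlying keystoneauth session, so per-thread connections might be the
more conservative choice.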

465s for a devstack is awesome. Think of all the developer time in
$local_fiat_currency we could have saved if we had done this four years
ago...  :)

--Dan


Hey folks,
Just wanted to check back in on the resource consumption topic.
Looking at my measurements, the TripleO group has made quite a bit of progress keeping our enqueued Zuul time lower than our historical average.
Do you think we can measure where things stand now and have some new numbers available at the PTG?

/me notes we had a blip on 3/25, but that was a one-off issue with nodepool in our gate.
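
If it helps with pulling fresh numbers together before the PTG,
something like this could grab recent job durations from the Zuul REST
API on zuul.opendev.org. The project/pipeline filters and the field
names here are my assumptions about that API, so treat it as a starting
point rather than a recipe:

  # Rough sketch: summarize recent TripleO job durations from the
  # public Zuul builds API. "duration" is in seconds per build record.
  import statistics

  import requests

  URL = 'https://zuul.opendev.org/api/tenant/openstack/builds'
  params = {'project': 'openstack/tripleo-heat-templates',  # example
            'pipeline': 'gate', 'limit': 200}

  builds = requests.get(URL, params=params, timeout=60).json()
  durations = [b['duration'] for b in builds
               if b.get('result') == 'SUCCESS' and b.get('duration')]

  print(f'builds: {len(durations)}')
  print(f'mean:   {statistics.mean(durations) / 60:.1f} min')
  print(f'median: {statistics.median(durations) / 60:.1f} min')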

Marios Andreou has put a lot of time into this, as have others.  Kudos, Marios!
Thanks all!