Here's the timing I see locally (in seconds):

  Vanilla devstack:         775
  Client service alone:     529
  Parallel execution:       527
  Parallel client service:  465
Most of the difference between the last two comes from shorter async_wait times, since the deployment steps themselves take less time. So the gain isn't quite as large as before, but it's still a decent increase in speed.
Yeah, cool, I think you're right that we should just serialize the calls. It may not be worth the complexity, but if we make the OaaS server able to do a few things in parallel, then we'll regain a little more perf because we'll go back to overlapping the *server* side of things. Creating flavors, volume types, and networks, and uploading the image to glance, are all things that should be doable in parallel in the server projects (rough sketch below).

465s for a devstack is awesome. Think of all the developer time in $local_fiat_currency we could have saved if we did this four years ago... :)

--Dan
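
For illustration only, a minimal sketch of what overlapping those independent setup calls could look like from the client side, assuming openstacksdk and a thread pool; the cloud name and resource parameters here are hypothetical, not necessarily what devstack actually creates:

  # Sketch: issue the independent setup requests concurrently so the
  # server projects (nova, cinder, neutron, glance) can work in parallel.
  # Resource names/parameters below are placeholders.
  from concurrent.futures import ThreadPoolExecutor

  import openstack


  def create_resources(conn):
      with ThreadPoolExecutor(max_workers=4) as pool:
          futures = [
              pool.submit(conn.compute.create_flavor,
                          name='m1.tiny', ram=512, vcpus=1, disk=1),
              pool.submit(conn.block_storage.create_type,
                          name='lvmdriver-1'),
              pool.submit(conn.network.create_network, name='private'),
              pool.submit(conn.image.create_image,
                          name='cirros', filename='cirros.img',
                          disk_format='qcow2', container_format='bare'),
          ]
          # Wait for everything and propagate any failure.
          return [f.result() for f in futures]


  if __name__ == '__main__':
      conn = openstack.connect(cloud='devstack')
      create_resources(conn)

Each submit() sends its request from its own thread, so the four services can all be doing work at the same time, which is the server-side overlap described above.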