[Openstack-operators] Openstack team size vs's deployment size

Fox, Kevin M Kevin.Fox at pnnl.gov
Mon Sep 12 21:45:25 UTC 2016

I'd also add that it depends on the feature set of the cloud. If you run extra services, or your users keep asking for more and more OpenStack features to be added (DNSaaS, DBaaS, Hadoop-aaS, COE-aaS), then the ratio is much higher than with, say, a basic cloud offering just VMaaS & NaaS.

From: Clint Byrum [clint at fewbar.com]
Sent: Monday, September 12, 2016 2:26 PM
To: openstack-operators
Subject: Re: [Openstack-operators] Openstack team size vs's deployment size

Excerpts from gustavo panizzo (gfa)'s message of 2016-09-09 22:07:49 +0800:
> On Thu, Sep 08, 2016 at 03:52:42PM +0000, Kris G. Lindgren wrote:
> > I completely agree about the general rule of thumb.  I am only looking at the team that specifically supports OpenStack.  For us, frontend support for public clouds is handled by another team/org altogether.
> In my previous job the ratio was 1 OpenStack engineer per ~300 production
> hypervisors, plus ~50 non-production hypervisors (non-prod clouds).
> We had 5 different clouds; the 2 biggest shared Keystone and
> Glance (same DC, different buildings, almost LAN latency). The biggest cloud
> had 2 regions (different connectivity in the same DC building).
> A different team took care of the underlying hardware, live migrations (when
> necessary, though these usually escalated to the OpenStack team), and
> installing the computes by running a single SaltStack command. Another team
> developed an in-house Horizon replacement.
> That job definitely burned me out. I'd say the sweet spot is about
> 1 person per 200 hypervisors, but if your regions are very similar and you have
> control of the whole stack (I didn't), I think 1 person per 300 hypervisors is
> doable.
> We only used Nova, Neutron (provider networks), Glance, and Keystone.

This ratio is entirely dependent on the point of the cloud, and where
one's priorities lie.

If you have a moderately sized private cloud (< 200 hypervisors) with no AZs,
one region, and an uptime SLA of 99.99%, I'd expect 1 FTE would be plenty.
But larger clouds that expect to grow continuously should try to drive
staffing per hypervisor as low as possible, so that value can be extracted
from the equipment in as small a batch size as possible. The higher the
hypervisor-to-operator ratio, the more value we're getting from the servers
themselves.

Basically, I'd expect operating cost to scale logarithmically: clouds in the
1 - 500 hypervisor range are similar in total cost, while everything above
that sees total cost growth level off even as the value proposition continues
on a more or less linear upward path. The only way we get there is to attack
complexity with a vengeance.
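As a rough illustration (not from the thread itself), the logarithmic scaling Clint describes could be sketched as a toy staffing model. The constants here — one FTE up to a 500-hypervisor "knee", then one additional FTE per doubling of the fleet — are assumptions chosen only to show the shape of the curve, not measured data:

```python
import math

def estimated_fte(hypervisors, base_fte=1.0, knee=500):
    """Toy staffing model: roughly constant operator cost up to the
    knee, then logarithmic growth beyond it. All constants are
    illustrative assumptions, not data from the thread."""
    if hypervisors <= knee:
        return base_fte
    # log2 growth past the knee: doubling the fleet adds one FTE
    return base_fte + math.log2(hypervisors / knee)

for hv in (100, 500, 1000, 4000):
    print(hv, "hypervisors ->", round(estimated_fte(hv), 1), "FTE")
```

Under this sketch, a 4000-hypervisor cloud needs only a few more operators than a 500-hypervisor one, which is the "value proposition keeps climbing while cost levels off" argument in miniature.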

OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
