[Openstack-operators] Question for deployers: how much variation is there in hardware in your deployments
Jonathan Proulx
jon at jonproulx.com
Sun Dec 1 20:52:10 UTC 2013
At MIT/CSAIL http://tig.csail.mit.edu/wiki/TIG/OpenStack
Similar to CERN, each purchase we make is different. We're expanding
faster than we are aging nodes out, and each generation is bigger and
faster since we look for the best price/value in current offerings. On
the configuration side this doesn't have much practical impact (more
cores and/or faster cores, more memory), though it may mean different
network drivers and the like. It almost always means larger drives,
which impacts partitioning.
We currently use http://fai-project.org/ to do the initial PXE-based
partitioning and install, using some min/max and percentage-based
partitioning, which then hooks into our Puppet world to do service
deployment and configuration work. This is used across all our systems,
both desktop and server room, so we're managing hundreds of different
hardware configs and hundreds of different system roles. On the
OpenStack side we only have a couple of hardware types; I'd expect it
to be about 6-10 over the lifetime of a system here, mostly through
organic difference.
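For a sense of what that looks like, here's a rough sketch of an FAI
setup-storage disk_config using min/max ranges and percentage sizes
(class name, sizes and layout are illustrative, not our actual config):

  # disk_config/OPENSTACK_COMPUTE -- illustrative only
  disk_config disk1 disklabel:msdos bootable:1
  primary  /boot   512        ext4   rw
  primary  swap    2G-8G      swap   sw
  primary  /       20G-50G    ext4   rw
  primary  /var    30%-       ext4   rw

The size fields can be fixed values, min-max ranges, or percentages of
the disk, which is what lets the same class absorb each generation's
larger drives without hand-editing a layout per hardware type.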
I'll say here that we're very pleased with our current tools. We
recently re-evaluated FAI (which we've been using for >10 years)
against more recent arrivals like Razor, Cobbler, MAAS, etc. I had this
done by a new hire who wasn't familiar with our old way, and the
verdict was soundly in favour of FAI. Since we use this daily for the
rest of our infrastructure, it's unlikely we would choose anything else
to manage OpenStack (or any other subset of our infrastructure).
In the short term we will likely have some difference by design,
deploying a small number of more highly redundant servers to hold
legacy applications. The goal would be to rearchitect these so they
run on multiple instances in a cloudy, fault-tolerant way, but the
reality is we need to move them sooner and don't want to grow the
legacy virtualization environment.
We are also exposing two different hypervisor configs to users via
host aggregates and instance types. The default set is densely packed
(still investigating the best cpu_ratio; currently at 4:1, though we
tend to block on memory availability (at 1.5:1) before we hit that). A
second set of hypervisors with a 1:1 cpu_ratio is provided to select
projects with demonstrated need. This configuration is largely
independent of the hardware diversity.
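For anyone wanting to reproduce that split, the general shape (names
and values below are illustrative, not our exact config) is
per-host-group overcommit ratios in nova.conf plus aggregate metadata
matched by flavor extra specs, with AggregateInstanceExtraSpecsFilter
enabled in the scheduler:

  # nova.conf on the densely packed hypervisors
  cpu_allocation_ratio = 4.0
  ram_allocation_ratio = 1.5

  # nova.conf on the dedicated hypervisors
  cpu_allocation_ratio = 1.0

  # group and tag the hosts
  nova aggregate-create dense-compute
  nova aggregate-add-host dense-compute compute-01
  nova aggregate-set-metadata dense-compute allocation=dense

  nova aggregate-create dedicated-compute
  nova aggregate-add-host dedicated-compute compute-50
  nova aggregate-set-metadata dedicated-compute allocation=dedicated

  # flavor pinned to the 1:1 aggregate, shared only with approved projects
  nova flavor-key m1.dedicated.xlarge set \
      aggregate_instance_extra_specs:allocation=dedicated
  nova flavor-access-add m1.dedicated.xlarge <tenant-id>

Restricting the dedicated flavors to specific tenants (flavor access)
is what keeps the 1:1 set limited to projects with demonstrated need.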
There has been talk of providing baremetal for some research groups
that need physical CPU register access, as well as for I/O-intensive
use in the "BigData" space and possibly for a GPU use case, but these
are potential rather than actual at this point. If deployed, they
would create more hardware and configuration diversity.
-Jon
On Sun, Dec 1, 2013 at 2:16 PM, Robert Collins
<robertc at robertcollins.net> wrote:
> This is for input to the TripleO design prioritisation discussions
> we're having at the moment.
>
> We have been assuming that everyone has almost no variation on
> hypervisors, but specialised hardware configuration for control plane
> / data storage / compute - so typically 3 profiles of machine.
>
> But, we'd like a little data on that.
>
> So - if you're willing, please reply (here or directly to me) with
> answers to the following q's:
>
> - how many hardware configurations do you have
> - are they deliberate, or organic (e.g. optimising for work vs your
> vendor couldn't deliver the same config anymore)
> - if you have multiple hypervisor hardware configurations, do you
> expose that to your users (e.g. using different flavors to let users
> choose which servers to run vms on) - a story for that might be high
> performance computing vs densely packed vms.
>
> Thanks!
> -Rob
>
> --
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Converged Cloud
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators