[Openstack-operators] Question for deployers: how much variation is there in hardware in your deployments
Jay Pipes
jaypipes at gmail.com
Mon Dec 2 15:05:35 UTC 2013
On 12/01/2013 02:16 PM, Robert Collins wrote:
> This is for input to the TripleO design prioritisation discussions
> we're having at the moment.
>
> We have been assuming that everyone has almost no variation on
> hypervisors, but specialised hardware configuration for control plane
> / data storage / compute - so typically 3 profiles of machine.
>
> But, we'd like a little data on that.
>
> So - if you're willing, please reply (here or directly to me) with
> answers to the following q's:
>
> - how many hardware configurations do you have
5 "profiles":
* "high support" nodes (used for database cluster nodes, message queue
cluster nodes, some other intensive ops stuff. Generally these have
battery-backed write caches, 128+ GB memory, and a fair bit of attached
disk storage, in a RAID 10 or RAID5 setup.
* "low support" nodes -- used for less intensive ops activities like
LDAP slaves (for host access control, not cloud access control),
graphing, logging, Quantum L3 router nodes, jump hosts, Chef servers,
etc. Less memory than high support, but varied attached disk storage.
* "controller" nodes -- used for all stateless OpenStack API services.
32-64 GB memory, little attached disk, 1 processor, varied number of
cores 6-12.
* "compute worker" -- used for Nova compute workers. Typically 256 or
144GB memory, 2 processors, 24 cores. 4-6TB direct attached storage,
typically in a RAID 5 configuration.
* "object storage" -- used for Swift proxy, object and some support
services.
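To put those in a structured form, here is a rough sketch of the five
profiles as data (a Python dict; the field names are mine and the figures
are the approximate ranges above, not an exact inventory):

# Rough structured view of the five hardware profiles described above.
# Field names are illustrative; figures are approximate ranges, not inventory.
HARDWARE_PROFILES = {
    "high_support": {
        "roles": ["database cluster", "message queue cluster", "intensive ops"],
        "memory_gb": "128+",
        "disk": "large direct-attached, RAID 10 or RAID 5, battery-backed write cache",
    },
    "low_support": {
        "roles": ["LDAP slaves", "graphing", "logging", "Quantum L3 routers",
                  "jump hosts", "Chef servers"],
        "memory_gb": "less than high support",
        "disk": "varied direct-attached",
    },
    "controller": {
        "roles": ["stateless OpenStack API services"],
        "memory_gb": "32-64",
        "cpu": "1 processor, 6-12 cores",
        "disk": "little attached disk",
    },
    "compute_worker": {
        "roles": ["Nova compute"],
        "memory_gb": "144 or 256",
        "cpu": "2 processors, 24 cores",
        "disk": "4-6 TB direct-attached, RAID 5",
    },
    "object_storage": {
        "roles": ["Swift proxy", "Swift object", "support services"],
    },
}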
> - are they deliberate, or organic (e.g. optimising for work vs your
> vendor couldn't deliver the same config anymore)
LOL. Mix of both? :) We have a physical design / reference architecture
that our team has put together. It is used for most of our modern
deployment zones... but money, politics, and agendas can get in the way
of consistency ;)
For example, we have Dell, IBM, SuperMicro, Quanta (regular and RackGo)
all in use in our deployment zones. I heard this morning that HP blades
are going to be added into that mix for some specialized deployments.
And that is just the compute and object storage hardware.
If you also take into account the network gear, it's just as varied:
Arista, Cisco, Juniper, and Quanta gear is all used in our deployments,
with many models from each vendor.
About the only thing that *is* consistent in our deployments is NAS. We
use NetApp filers for NFS/iSCSI volume management.
We are 100% heterogeneous, and we have no plans to become homogeneous
any time soon. At a large company like AT&T, frankly there are just too
many people involved in both the procurement and
certification/validation of hardware to ever expect a single-vendor
environment to be a reality.
> - if you have multiple hypervisor hardware configurations, do you
> expose that to your users (e.g. using different flavors to let users
> choose which servers to run vms on) - a story for that might be high
> performance computing vs densely packed vms.
We don't vary our hypervisor offering. We offer our tenants some scaling
improvements, particularly for the Windows tenants, using GridCentric's
technology, but everything to date has been KVM. No custom flavors.
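(If we ever did want to steer workloads to a particular hardware class, the
standard pattern is a host aggregate per class plus a flavor whose
extra_specs match the aggregate metadata, with the scheduler's
AggregateInstanceExtraSpecsFilter enabled. A minimal sketch with
python-novaclient -- every name, credential, and number below is made up
for illustration:)

# Hypothetical sketch: expose an "HPC" hardware class via a dedicated
# host aggregate + flavor. Assumes AggregateInstanceExtraSpecsFilter is in
# scheduler_default_filters; hosts, credentials, and sizes are made up.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')

# Group the HPC-class hypervisors into an aggregate and tag it.
agg = nova.aggregates.create('hpc-hosts', None)
nova.aggregates.set_metadata(agg, {'hwclass': 'hpc'})
for host in ('compute-hpc-01', 'compute-hpc-02'):
    nova.aggregates.add_host(agg, host)

# Create a flavor whose extra_specs only match that aggregate's metadata,
# so instances booted with it land on the HPC hosts.
flavor = nova.flavors.create('m1.hpc', ram=131072, vcpus=24, disk=200)
flavor.set_keys({'aggregate_instance_extra_specs:hwclass': 'hpc'})

Boot with that flavor and the scheduler confines the instance to the tagged
hosts; everyone else keeps using the normal flavors.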
> Thanks!
> -Rob
>