[openstack-dev] [TripleO] test environment requirements
Robert Collins
robertc at robertcollins.net
Thu Mar 13 09:51:30 UTC 2014
So we already have pretty high requirements - it's basically a 16G
workstation as the minimum.
Specifically to test the full story:
- a seed VM
- an undercloud VM (bm deploy infra)
- 1 overcloud control VM
- 2 overcloud hypervisor VMs
====
5 VMs with 2+G RAM each == 10GB
To test just the overcloud against the seed we save one VM; to skip the
overcloud entirely we save three.
However, as HA matures we're about to add 4 more VMs: we need an HA
control plane for both the undercloud and the overcloud:
- a seed VM
- 3 undercloud VMs (HA bm deploy infra)
- 3 overcloud control VMs (HA)
- 2 overcloud hypervisor VMs
====
9 VMs with 2+G RAM each == 18GB
What should we do about this?
A few thoughts to kick start discussion:
- use Ironic to test across multiple machines (involves tunnelling
brbm across machines, fairly easy)
- shrink the VM sizes (causes thrashing)
- tell folk to toughen up and get bigger machines (ahahahahaha, no)
- make the default configuration inline the overcloud hypervisors with
the control plane:
- a seed VM
- 3 undercloud VMs (HA bm deploy infra)
- 3 overcloud all-in-one VMs (HA)
====
7 VMs with 2+G RAM each == 14GB
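Putting the arithmetic for all three layouts side by side (my labels, 2 GB
per VM as above, host overhead ignored):

```shell
#!/bin/sh
# Guest RAM footprint of each layout discussed in this thread:
# 2 GB per VM, host overhead not counted.
RAM_PER_VM_GB=2
for layout in "today:5" "full-HA:9" "HA-all-in-one:7"; do
    name=${layout%%:*}
    vms=${layout##*:}
    echo "${name}: ${vms} VMs -> $((vms * RAM_PER_VM_GB)) GB"
done
```

which prints 10 GB for today's layout, 18 GB for full HA, and 14 GB for
the all-in-one HA variant - so inlining the hypervisors buys back 4 GB.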
I think it's important that developers regularly exercise features like
HA and live migration, so I'm quite keen to have a fairly solid,
systematic answer that will let us catch things like bad firewall rules
on the control node preventing network tunnelling etc... e.g. we benefit
the more things are split out, as they are in scale deployments. OTOH
testing the micro-cloud that folk may start with is also a really good
idea....
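On the "tunnel brbm across machines" option above: assuming Open vSwitch
owns brbm on each host, one way to do it is a GRE port between the two
bridges. A sketch only - bridge name aside, the addresses and port names
are placeholders:

```shell
# Stitch the brbm bridge on two hosts together with a GRE tunnel.
# Assumes Open vSwitch provides brbm; 192.0.2.x IPs are placeholders.

# On host A (peer is host B at 192.0.2.11):
ovs-vsctl add-port brbm gre-to-b -- set interface gre-to-b \
    type=gre options:remote_ip=192.0.2.11

# On host B (peer is host A at 192.0.2.10):
ovs-vsctl add-port brbm gre-to-a -- set interface gre-to-a \
    type=gre options:remote_ip=192.0.2.10
```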
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud