[Openstack-operators] How do you keep your test env sync'ed to production?

Jay Pipes jaypipes at gmail.com
Thu Jan 9 17:47:23 UTC 2014

On Wed, 2014-01-08 at 23:56 -0700, Joe Topjian wrote:
> Hi Jon,
> Our test environment consists of 8 bare-metal servers. One server
> provides Cobbler and Puppet services and the other 7 are used for
> OpenStack. The entire OpenStack configuration is managed through
> Puppet.
> In the past, we would re-provision the bare-metal servers with a fresh
> OS and then run Puppet to deploy OpenStack, but then we realized that
> we could do everything nested inside an OpenStack environment. So the
> bare-metal hardware is now a separate OpenStack environment which
> allows us to test several different projects' OpenStack configurations
> under one main test cloud.

This sounds virtually identical to the Triple-O project's architecture
of an undercloud and an overcloud -- and it makes sense, since a driving
factor behind Triple-O is continuous deployment testing. :)

I'm curious, though... how often do you change/update the *undercloud*
-- the OpenStack installation that deploys the bare-metal machines? Is
it just the *overcloud* that follows the master branches, or do you
update the undercloud at a similar pace?

> To create a test environment, we have Vagrant profiles for cloud
> controllers, compute nodes, and swift nodes. Vagrant launches vms
> inside the test cloud, does any needed bootstrapping (basically
> emulating Cobbler's post-install hooks), the vms contact Puppet, and
> are then configured with the latest production configuration.
> Our production network configurations have all been able to work in a
> nested OpenStack environment. Sometimes this creates nested VLANs, but
> we either ignore it or raise the MTU.

Are you using Neutron? If so, what setup are you using for the
undercloud and the overcloud? Are you using a flat or VLAN model for the
undercloud and an SDN model (GRE? VXLAN?) for the overcloud?
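For anyone following along, the "raise the MTU" trick above comes down to simple arithmetic: each nesting layer adds a fixed encapsulation overhead, so the physical network must carry frames that much larger than what the instances see. A small sketch, using commonly cited overhead figures (exact values vary with options such as GRE keys; these numbers are not taken from Joe's environment):

```python
# Commonly cited per-layer encapsulation overheads in bytes (including the
# outer Ethernet header for the tunnel protocols). Illustrative values only.
OVERHEAD = {
    "vlan": 4,    # 802.1Q tag
    "gre": 42,    # outer Ethernet (14) + outer IPv4 (20) + GRE w/ key (8)
    "vxlan": 50,  # outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8)
}

def required_mtu(instance_mtu, layers):
    """Physical MTU needed so nested tunnels still carry full instance frames."""
    return instance_mtu + sum(OVERHEAD[layer] for layer in layers)

# A 1500-byte instance MTU carried over VXLAN in the overcloud, itself
# tunnelled over GRE in the undercloud:
print(required_mtu(1500, ["vxlan", "gre"]))  # 1592
```

The alternative, of course, is to leave the physical MTU at 1500 and shrink the instance MTU instead, which is what "ignore it" often amounts to in practice.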

> The only thing that we cannot reliably reproduce in this environment
> is the NetApp appliances that one production site runs. Unless NetApp
> would like to send us a free 3240? =)

LOL. We had the same issue in our testing environments at AT&T. :)
Strangely, it was difficult to find a 3270 filer lying around that
nobody wanted ;) So instead we just tested with the basic iSCSI drivers
for Cinder.

> The other caveat is that launching instances has to be done with qemu
> software virtualization. So far this has been fine. As long as an
> instance can boot, receive an IP, attach a volume, etc., then
> everything should be fine. We have not run into a case where an
> OpenStack configuration has caused a certain type of image to fail.

Interesting. Did you ever investigate using LXC containers instead of
KVM/Qemu VMs for your testing environment OpenStack instances?
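For reference, the hypervisor choice being discussed is a one-line Nova setting. In recent Nova releases it lives in the `[libvirt]` group as `virt_type` (older releases used a `libvirt_type` flag in `[DEFAULT]`); this is a generic sketch, not Joe's actual configuration:

```ini
[libvirt]
# qemu = pure software emulation, works inside VMs without nested virt;
# kvm needs hardware virtualization extensions exposed to the guest.
virt_type = qemu
```

The same option accepts `lxc` for libvirt-driven LXC containers, which is one way to try the container approach mentioned above without changing the rest of the deployment.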

> You make a good point about working with a production database dump.
> I'll look into this for our environment.

++ You may want to check with Mikal Still (cc'd) about linking into
turbo-hipster for testing schema migrations on "realistic" databases.
