[Openstack-operators] How do you keep your test env sync'ed to production?

Joe Topjian joe at topjian.net
Thu Jan 9 06:56:50 UTC 2014


Hi Jon,

Our test environment consists of 8 bare-metal servers. One server provides
Cobbler and Puppet services and the other 7 are used for OpenStack. The
entire OpenStack configuration is managed through Puppet.

In the past, we would re-provision the bare-metal servers with a fresh OS
and then run Puppet to deploy OpenStack, but then we realized that we could
do everything nested inside an OpenStack environment. So the bare-metal
hardware is now a separate OpenStack environment, which allows us to test
several different projects' OpenStack configurations under one main test
cloud.

To create a test environment, we have Vagrant profiles for cloud
controllers, compute nodes, and Swift nodes. Vagrant launches VMs inside
the test cloud and does any needed bootstrapping (basically emulating
Cobbler's post-install hooks); the VMs then contact Puppet and are
configured with the latest production configuration.
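
In practice the per-role flow looks roughly like the following (the box
names and the provider plugin are just examples, not our exact setup):

  # one Vagrant box per role; an OpenStack provider plugin boots the
  # VM in the test cloud instead of a local hypervisor
  vagrant plugin install vagrant-openstack-plugin
  vagrant up controller --provider=openstack
  vagrant up compute-01 --provider=openstack
  vagrant up swift-01   --provider=openstack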

Our production network configurations have all been able to work in a
nested OpenStack environment. Sometimes this creates nested VLANs, but we
either ignore it or raise the MTU.
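
Raising the MTU is just a matter of giving the interfaces that carry the
extra tags some headroom. A minimal sketch (interface names and values
are examples, and you would want to make it persistent in
/etc/network/interfaces or equivalent):

  ip link set dev eth1 mtu 1600
  ip link set dev br-vlan mtu 1600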

The only things we cannot reliably reproduce in this environment are the
NetApp appliances that one production site runs. Unless NetApp would like
to send us a free 3240? =)

The other caveat is that instances have to be launched with qemu software
virtualization. So far this has been fine: as long as an instance can
boot, receive an IP, attach a volume, etc., everything should work. We
have not run into a case where an OpenStack configuration has caused a
certain type of image to fail.
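
Forcing qemu is a one-line change on the nested compute nodes, something
like the following (Havana-era option name shown; newer releases moved
it to virt_type in the [libvirt] section):

  crudini --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
  service nova-compute restart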

You make a good point about working with a production database dump. I'll
look into this for our environment.
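
Off the top of my head it would be something like the sketch below
(database names are the defaults, and you would still have to scrub or
rewrite anything environment-specific afterwards, e.g. compute node
hostnames and Keystone endpoints):

  # on a production controller
  mysqldump --single-transaction nova | gzip > nova-prod.sql.gz

  # on the test controller
  zcat nova-prod.sql.gz | mysql nova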


Each project's OpenStack configuration, as I mentioned, is stored in
Puppet. The Puppet data is kept in a gitolite-based git repository. We
use Hiera to separate the production and testing environment data (such
as IP subnets, dummy passwords, etc.).
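
The layout is roughly the following, keyed on the Puppet environment
(the file and key names here are only illustrative):

  # hieradata/production/common.yaml -> real subnets, real secrets
  # hieradata/testing/common.yaml    -> test subnets, throwaway passwords
  #
  # sanity-check what a given environment resolves for a key:
  hiera -c /etc/puppet/hiera.yaml nova_api_subnet ::environment=testing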

Right now, our git workflow is very basic. We work out of the master branch
and commit any changes once tested. We might try out r10k.
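
The appeal of r10k, if we do go that route, is that each git branch
becomes its own Puppet environment, so a change could be deployed to the
test cloud without touching master. Roughly (the branch name is made
up):

  # -p also deploys the modules listed in that branch's Puppetfile
  r10k deploy environment havana-upgrade -p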

A side project of ours is to use Jenkins or a git hook to automatically
provision a set of VMs to test each commit.
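
The git-hook version would be something as simple as the post-receive
sketch below (the host and path are hypothetical; Jenkins would just run
the same commands as a build step):

  while read oldrev newrev refname; do
      [ "$refname" = "refs/heads/master" ] || continue
      ssh test-cloud 'cd /srv/openstack-test && vagrant destroy -f && vagrant up'
  done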

Hope that helps. Let me know if you have any questions regarding a specific
area.

Joe



On Wed, Jan 8, 2014 at 2:28 PM, Jonathan Proulx <jon at jonproulx.com> wrote:

> Hi All,
>
> My test environment is a couple of instances within my production
> cloud.  Mostly I've used this to test my config management (Puppet
> community modules) in a sandbox, especially around upgrades.
>
> In my first attempt at a Grizzly -> Havana upgrade this worked fine in
> the clean, empty sandbox, but tripped over a known and patched bug
> related to DB migrations on systems that had originally been installed
> pre-Grizzly (mine started as Essex).
>
> By pulling my production DB into my test environment I managed to
> prove the released patch worked for the first issue I saw and to
> discover three more that I was able to overcome (I'm still working on
> pinning down where the bugs are so I can report them).
>
> But now, of course, the test database is full of references to the
> production compute nodes (though I did fix the endpoints to refer to
> the test endpoints), so I can't *really* test launching new instances.
> I'm also scratching my head over how to model our networking, but
> that's pretty site-specific and I haven't been thinking about it for
> long.
>
> Another issue I can't think how to reproduce in testing involves a
> case from our Essex -> Folsom upgrade.  I forget the details as it was
> more than a year ago now, but it only involved instances that were
> running through the transition period, and only for one of our
> projects/tenants.  That's likely an outlier that can never be
> deterministically caught.
>
> But these two really get me thinking about just how closely I can keep
> testing in step with production.
>
> Do you have a test environment?
>
> Is it in your cloud or dedicated hardware?
>
> How do you keep it close to production and how close do you manage?
>
> -Jon
>