[openstack-dev] [Ironic] functional (aka integration) gate testing
Elizabeth Krumbach Joseph
lyz at princessleia.com
Fri Dec 6 07:00:59 UTC 2013
On Tue, Dec 3, 2013 at 7:32 AM, Vladimir Kozhukalov
<vkozhukalov at mirantis.com> wrote:
> We are going to set up an integration testing gate scheme for Ironic, and
> we've investigated several approaches that are currently relevant to TripleO.
It's great to see more work here, thanks!
> 1) https://etherpad.openstack.org/p/tripleo-test-cluster
> This is the newest and most advanced initiative. It is something like "test
> environment on demand". It is still not ready to use.
Right, that's being actively worked on every day right now by a small
team of us, and our goal is to have something actually working soon
(right now all we have is a super simple experimental check that
doesn't do anything useful). However, while it is part of
Infrastructure's nodepool, our plan is to use donated racks (currently
one from HP, one incoming from Red Hat) to conduct these tests, rather
than doing multi-node testing on our current providers.
> 2) https://github.com/openstack-infra/tripleo-ci
> This project does not seem to be actively used at the moment. It contains
> toci_gate_test.sh, but this script is empty and is used as a gate hook. The
> idea is that it will eventually implement the whole gate testing logic using
> "test env on demand" (see the previous point).
> This project also has some shell code for managing emulated bare metal
> environments: it prepares the libvirt VM XML and launches the VM using
> virsh (nothing special).
It is actually active; Dan Prince is currently working on syncing it
up with TripleO's devtest.sh script (see item 3), so it may go away as
a thing on its own at some point. Some of it is what we'll be using
for item 1 above.
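For anyone who hasn't looked at that shell code, the libvirt handling
it wraps boils down to roughly the following. This is a minimal
sketch; the domain name and XML file are just illustrative, not what
tripleo-ci actually ships:

  virsh define baremetal_0.xml   # register a domain from an XML description
  virsh start baremetal_0        # boot the emulated bare metal node
  virsh list --all               # check that the domain came up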
> 3) https://github.com/openstack/tripleo-incubator/blob/master/scripts (aka
> devtest)
> This is a set of shell scripts intended to reproduce the whole TripleO
> flow (seed, undercloud, overcloud). It is supposed to be used for
> performing testing actions (including gate tests).
> Documentation is available at
> http://docs.openstack.org/developer/tripleo-incubator/devtest.html
>
> So it looks like there is no fully working, mature scheme at the
> moment.
>
> My suggestion is to start by creating an empty gate test flow (like in
> tripleo-ci). Then we can write some code implementing the testing logic.
> This is possible even before the conductor manager is ready: we can just
> directly import the driver modules and test them in a functional (aka
> integration) manner. As for managing emulated bare metal environments, we
> can write (or copy from TripleO) some scripts for that (shell or Python).
> What we actually need to be able to do is to launch one VM, install ironic
> on it, and then launch another VM and boot it via PXE from the first one.
> In the future we can use the "environment on demand" scheme, once it is
> ready, and thus follow the same scenario as TripleO.
>
> Besides, there is an idea about managing the test environment using
> OpenStack itself. Right now Nova can create VMs, and it has advanced
> functionality for that. What it can NOT do is boot them via PXE. There is
> a blueprint for that:
> https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe.
So while I think there is overlap between our work and the TripleO
clouds, actual multi-node testing in the regular infrastructure is
becoming increasingly important for a lot of projects, so it makes a
lot of sense to put effort there now. I'd be interested in keeping up
with this effort, and I'm sure the rest of the infrastructure team
would also like to be kept in the loop on plans so we can make sure
this will all work reliably.
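For what it's worth, the two-VM flow Vladimir describes above can be
sketched with plain virt-install/virsh along these lines (all the
names, sizes, the network and the ISO are made up for illustration;
this is not an existing script):

  # VM that will run ironic; install an OS on it by whatever means works
  virt-install --name ironic-host --ram 4096 --disk size=20 \
      --network network=brbm --cdrom install.iso --noautoconsole
  # ...install and configure ironic plus a dnsmasq/TFTP PXE service on it...
  # Second VM with a blank disk, told to boot from the network so it
  # PXE boots off ironic-host.
  virt-install --name baremetal-0 --ram 2048 --disk size=10 \
      --network network=brbm --pxe --noautoconsole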
I also did some testing using qemu and LXC over the summer when we
were attempting to do TripleO testing in the regular infrastructure,
so I'm happy to share notes on where I ended up. You can find me on
IRC as pleia2 on freenode (in #tripleo, #openstack-infra, -dev...).
--
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com