<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Oct 12, 2016 at 4:10 PM, Dmitry Tantsur <span dir="ltr"><<a href="mailto:dtantsur@redhat.com" target="_blank">dtantsur@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="gmail-">On 10/12/2016 03:01 PM, Vasyl Saienko wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
Hello Dmitry,<br>
<br>
Thanks for raising this question. I think the problem is deeper. There are a lot<br>
of use-cases that are not covered by our CI, like cleaning, adoption, etc.<br>
</blockquote>
<br></span>
This is nice, but here I'm trying to solve a pretty specific problem: we can't reasonably add more jobs to even cover all supported partitioning scenarios.<span class="gmail-"><br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<br>
The main problem is that we need to change the Ironic configuration to exercise a<br>
specific use-case. Unfortunately, Tempest doesn't allow changing the cloud<br>
configuration during a test run.<br>
<br>
<br>
Recently I've started working on a PoC that should solve this problem [0]. The<br>
main idea is to be able to change the Ironic configuration during a single gate<br>
job run and launch the same Tempest tests after each configuration change.<br>
<br>
We can't change other components' configuration, as that would require reinstalling<br>
the whole devstack, so running the flat network and multitenant network scenarios<br>
in a single job is not possible.<br>
<br>
<br>
For example:<br>
<br>
1. Setup devstack with agent_ssh wholedisk ipxe configuration<br>
<br>
2. Run tempest tests<br>
<br>
3. Update localrc to use agent_ssh localboot image<br>
</blockquote>
<br></span>
For this particular example, my approach will be much, much faster, as all instances will be built in parallel.</blockquote><div> </div><div>On the gates we're using 7 VMs, and we never boot all 7 nodes in parallel; I'm not sure how slow the environment will be in this case.<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"> </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="gmail-">
<br>
4. Unstack the Ironic component only, not the whole devstack.<br>
<br>
5. Install/configure ironic component only<br>
<br>
6. Run tempest tests<br>
<br>
7. Repeat steps 3-6 with another Ironic-only configuration change.<br>
<br>
<br>
Running steps 4-5 takes about 2-3 minutes.<br>
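The reconfigure-and-retest loop in steps 3-6 could be sketched roughly like this (the helper names and the list of configurations here are illustrative, not the actual PoC code from [0]; the echoed lines stand in for the real config rewrite and tempest invocation):<br>

```shell
#!/bin/bash
# Sketch of the reconfigure-and-retest loop (steps 3-6 above).
set -eu

# Illustrative list of Ironic-only configuration variants to cycle through.
CONFIGS="agent_ssh_wholedisk agent_ssh_localboot pxe_ssh_netboot"

reconfigure_ironic() {
    # Placeholder: a real implementation would update ironic.conf
    # (or re-run the Ironic devstack plugin) and restart only the
    # Ironic services, not the whole devstack.
    echo "reconfiguring ironic for: $1"
}

run_tempest() {
    # Placeholder for the actual tempest run against the baremetal tests.
    echo "running tempest against: $1"
}

for cfg in $CONFIGS; do
    reconfigure_ironic "$cfg"
    run_tempest "$cfg"
done
```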
<br>
<br>
Below is a non-exhaustive list of configuration choices we could try to<br>
mix-and-match in a single Tempest run to get maximal overall code coverage in a<br>
single job:<br>
<br></span>
* cleaning enabled / disabled<br>
</blockquote>
<br>
This is the only valid example; for the other cases you don't need a devstack update.<br></blockquote><div> </div><div>There are other use-cases, like portgroups, security groups, and boot from volume, which will require configuration changes.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
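For the cleaning case, the toggle could look like this in localrc (a sketch: the IRONIC_AUTOMATED_CLEAN_ENABLED variable name should be double-checked against the Ironic devstack plugin; the underlying [conductor]automated_clean option in ironic.conf is the real knob):<br>

```shell
# localrc fragment -- enables automated cleaning for the tempest pass.
# Maps to [conductor]automated_clean = true in ironic.conf; set to False
# for the cleaning-disabled pass.
IRONIC_AUTOMATED_CLEAN_ENABLED=True
```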
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<br>
* using pxe_* drivers / agent_* drivers<span class="gmail-"><br>
</span>* using netboot / localboot<br>
* using partitioned / wholedisk images<span class="gmail-"><br>
<br>
<br>
<br>
[0] <a href="https://review.openstack.org/#/c/369021/" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/369021/</a><br>
<br>
<br>
<br>
<br>
On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur <<a href="mailto:dtantsur@redhat.com" target="_blank">dtantsur@redhat.com</a><br></span><div><div class="gmail-h5">
<mailto:<a href="mailto:dtantsur@redhat.com" target="_blank">dtantsur@redhat.com</a>>> wrote:<br>
<br>
Hi folks!<br>
<br>
I'd like to propose a plan on how to simultaneously extend the coverage of<br>
our jobs and reduce their number.<br>
<br>
Currently, we're running one instance per job. This was reasonable when the<br>
coreos-based IPA image was the default, but now with tinyipa we can run up<br>
to 7 instances (and actually do it in the grenade job). I suggest we use 6<br>
fake bm nodes to make a single CI job cover many scenarios.<br>
<br>
The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)<br>
to be more in sync with how 3rd party CI does it. A special configuration<br>
option will be used to enable multi-instance testing to avoid breaking 3rd<br>
party CI systems that are not ready for it.<br>
<br>
To ensure coverage, we'll only leave the required number of nodes "available"<br>
and deploy all instances in parallel.<br>
<br>
In the end, we'll have these jobs on ironic:<br>
gate-tempest-ironic-pxe_ipmito<wbr>ol-tinyipa<br>
gate-tempest-ironic-agent_ipmi<wbr>tool-tinyipa<br>
<br>
Each job will cover the following scenarios:<br>
* partition images:<br>
** with local boot:<br>
** 1. msdos partition table and BIOS boot<br>
** 2. GPT partition table and BIOS boot<br>
** 3. GPT partition table and UEFI boot <*><br>
** with netboot:<br>
** 4. msdos partition table and BIOS boot <**><br>
* whole disk images:<br>
* 5. with msdos partition table embedded and BIOS boot<br>
* 6. with GPT partition table embedded and UEFI boot <*><br>
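The six scenarios above could be pinned to specific nodes via capabilities, roughly like this (boot_mode, boot_option, and disk_label are real Ironic node capabilities; the node names and the exact split are illustrative, and the echoed lines stand in for real node-update calls):<br>

```shell
#!/bin/bash
# Sketch: tag each of the six test nodes so the scheduler picks the
# right node for each scenario in the list above.
set -eu

# scenario -> capabilities, mirroring scenarios 1-6
declare -A CAPS=(
    [node-1]="boot_option:local,boot_mode:bios,disk_label:msdos"
    [node-2]="boot_option:local,boot_mode:bios,disk_label:gpt"
    [node-3]="boot_option:local,boot_mode:uefi,disk_label:gpt"
    [node-4]="boot_mode:bios,disk_label:msdos"  # netboot case
    [node-5]="boot_mode:bios"                   # whole disk, msdos
    [node-6]="boot_mode:uefi"                   # whole disk, GPT
)

for node in "${!CAPS[@]}"; do
    # A real run would execute the command instead of echoing it:
    echo "ironic node-update $node add properties/capabilities=${CAPS[$node]}"
done
```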
<br></div></div></blockquote></blockquote><div> </div><div>Am I right that we need to increase the number of Tempest tests to match the number of use-cases we are going to test per driver? That way we ensure we use the right node for each test, because the partition scheme is defined in the node properties and requires the right image to be used.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div><div class="gmail-h5">
<*> - in the future, when we figure out UEFI testing<br>
<**> - we're moving away from defaulting to netboot, hence only one scenario<br>
<br>
I suggest creating the jobs for Newton and Ocata, and starting with Xenial<br>
right away.<br>
<br>
Any comments, ideas and suggestions are welcome.<br>
<br>
______________________________<wbr>______________________________<wbr>______________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.op<wbr>enstack.org?subject:unsubscrib<wbr>e</a><br></div></div>
<<a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">http://OpenStack-dev-request@<wbr>lists.openstack.org?subject:un<wbr>subscribe</a>><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi<wbr>-bin/mailman/listinfo/openstac<wbr>k-dev</a><br>
<<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cg<wbr>i-bin/mailman/listinfo/opensta<wbr>ck-dev</a>><span class="gmail-"><br>
<br>
<br>
<br>
<br>
<br>
</span></blockquote><div class="gmail-HOEnZb"><div class="gmail-h5">
<br>
<br>
</div></div></blockquote></div><br></div></div>