<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 11:04 AM, Arx Cruz <span dir="ltr"><<a href="mailto:arxcruz@redhat.com" target="_blank">arxcruz@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy <span dir="ltr"><<a href="mailto:shardy@redhat.com" target="_blank">shardy@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-m_5109496432791245671gmail-">On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:<br>
> On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec <<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a>> wrote:<br>
> > Tempest isn't really either of those things. According to another message<br>
> > in this thread it takes around 15 minutes to run just the smoke tests.<br>
> > That's unacceptable for a lot of our CI jobs.<br>
><br></span></blockquote><div><br></div></span><div>I would rather spend 15 minutes running tempest than add a regression or a new bug, which has already happened in the past.<br><br></div></div></div></div></blockquote><div>The smoke tests might not be the best test selection anyway; you should pick some scenarios which,<br></div><div>for example, snapshot images and volumes. Yes, these are the slow ones, but they can run in parallel.<br><br></div><div>Very likely you do not really want to run all tempest tests, but a 10~20 minute run<br> sounds reasonable for a sanity test.<br><br></div><div>The tempest config utility should also be extended with some parallel capability,<br></div><div>and it should be able to use already downloaded resources (shipped as part of the image).<br><br></div><div>The Tempest/testr/subunit worker balance is not always the best;<br></div><div>technically it would be possible to do dynamic balancing, but it would require a lot of work.<br></div><div>Let me know when it becomes the main concern and I can check what can/cannot be done.<br></div><div><br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><span class="gmail-"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-m_5109496432791245671gmail-">
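<div><br></div><div>To make the 10~20 minute budget concrete, here is a rough sketch of what such a parallel scenario selection could look like. It assumes an already configured tempest workspace and a tempest new enough to have the `run` command; the regex, the test names and the concurrency value are only illustrative, not the selection I would insist on:<br></div><div><pre>
#!/usr/bin/env python
# Rough sketch only: run a couple of slow-but-broad scenario tests in
# parallel instead of the whole smoke set. Assumes the current working
# directory is a configured tempest workspace.
import subprocess
import sys

# Illustrative selection; swap in whatever scenarios matter for tripleo.
REGEX = r'(test_snapshot_pattern|test_volume_boot_pattern)'

cmd = [
    'tempest', 'run',
    '--regex', REGEX,        # only the selected scenario tests
    '--concurrency', '4',    # let the slow ones run side by side
]
sys.exit(subprocess.call(cmd))
</pre></div><div>Whether that fits the time budget depends on the hardware, but it keeps the selection explicit and easy to tune.<br><br></div>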
> Ben, is the issue merely the time it takes? Is it the effect that time<br>
> taken has on hardware availability?<br>
<br>
</span>It's both, but the main constraint is the infra job timeout, which is about<br>
2.5hrs - if you look at our current jobs, many regularly get close to (and<br>
sometimes exceed) this, so we just don't have the time budget available to<br>
run exhaustive tests every commit.<br></blockquote><div><br></div></span><div>We have the green light from infra to increase the job timeout to 5 hours; we do that in our periodic full tempest job.</div><span class="gmail-"><div></div></span></div></div></div></blockquote><div><br></div><div>Sounds good, but I am afraid it could hurt more than it helps; it could delay other things getting fixed by a lot,<br></div><div>especially if we get some extra flakiness because of foobar.<br><br></div><div>You cannot have all possible tripleo configs on the gate anyway,<br></div><div>so something will pass which will require a quick fix. <br></div><div><br></div><div>IMHO the only real solution is making the steps before the test run faster or shorter.<br></div><br></div><div class="gmail_quote">Do you have any option to start the tempest-running jobs in a more developed state?<br></div><div class="gmail_quote">I mean, having more things already done at start time (images/snapshots) <br>and just doing a fast upgrade at the beginning of the job.<br></div><div class="gmail_quote"><div><br></div><div>OpenStack installation can be completed in a `fast` way (~a minute) on RHEL/Fedora systems<br> after the yum steps; also, if you are able to aggregate all the yum steps into a single <br>command execution (transaction), you are generally able to save a lot of time.<br> <br>There are plenty of things that can be made more efficient before the test run;<br></div><div>once you start considering everything that accounts for more than 30 sec<br></div><div>of time as evil, this can happen soon.<br><br></div><div>For example, just executing the cpython interpreter for the openstack commands is above 30 sec;<br></div><div>the work they are doing can be done in a much, much faster way (see the sketch below).<br><br></div><div>Lots of install steps actually do not depend on each other,<br></div><div>which allows more things to be done in parallel; we generally have more cores than GHz.<br></div><div> <br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
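<div><br></div><div>As an illustration only (the cloud entry and the resource names below are made up, not what the CI uses), this is roughly what replacing a chain of `openstack` CLI calls with one long-lived process looks like: authenticate once with openstacksdk, reuse the connection for every call, and let independent steps run in parallel:<br></div><div><pre>
# Rough sketch, not the actual CI code: one interpreter, one authenticated
# connection, and independent setup steps submitted to a small thread pool.
from concurrent.futures import ThreadPoolExecutor
import openstack

# Reads auth details from clouds.yaml; the cloud name here is hypothetical.
conn = openstack.connect(cloud='tripleo-overcloud')

def make_network():
    net = conn.network.create_network(name='sanity-net')
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr='192.0.2.0/24', name='sanity-subnet')
    return net

def make_flavor():
    # Independent of the network step, so it can run at the same time.
    return conn.compute.create_flavor(name='sanity-flavor',
                                      ram=512, vcpus=1, disk=1)

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(make_network), pool.submit(make_flavor)]
    results = [f.result() for f in futures]  # re-raises any failure
</pre></div><div>The same idea applies to the package installation: one yum transaction with all the packages listed, instead of many separate yum runs.<br><br></div>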
<span class="gmail-m_5109496432791245671gmail-"><br>
> Should we focus on how much testing we can get into a time period N?<br>
> Then how do we decide an optimal N<br>
> for our constraints?<br>
<br>
</span>Well yeah, but that's pretty much how/why we ended up with pingtest: it's<br>
simple, fast, and provides an efficient way to do smoke tests, e.g. creating<br>
just one heat resource is enough to prove multiple OpenStack services are<br>
running, as well as the DB/RPC etc.<br>
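<div><br></div><div>For illustration, a pingtest-style check can be as small as the sketch below; the endpoint, credentials and names are placeholders, and it assumes python-heatclient and keystoneauth1 are available. Creating the stack only succeeds if Keystone, the Heat API, the Heat engine, the DB and RPC are all working:<br></div><div><pre>
# Minimal pingtest-style sanity check: one stack with a single cheap
# resource. Every value below is a placeholder, not a real CI setting.
from keystoneauth1 import loading, session
from heatclient import client as heat_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://192.0.2.10:5000/v3',        # placeholder endpoint
    username='admin', password='secret',         # placeholder credentials
    project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
heat = heat_client.Client('1', session=session.Session(auth=auth))

template = {
    'heat_template_version': '2016-10-14',
    'resources': {
        # One resource is enough to exercise API, DB, RPC and the engine.
        'smoke': {'type': 'OS::Heat::RandomString'},
    },
}
print(heat.stacks.create(stack_name='sanity', template=template))
</pre></div>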
<span class="gmail-m_5109496432791245671gmail-"><br>
> I've been working on a full-up functional test for OpenStack CI builds<br>
> for a long time now; it works but takes<br>
> more than 10 hours. If you're interested in the results, click through to<br>
> Kibana here [0]. Let me know off-list if you<br>
> have any issues; the presentation of this data is still all experimental.<br>
<br>
</span>This kind of thing is great, and I'd support more exhaustive testing via<br>
periodic jobs etc., but the reality is we need to focus on "bang for buck",<br>
e.g. the deepest possible coverage in the minimum amount of time for<br>
our per-commit tests - we rely on the project gates to provide a full API<br>
surface test, and we need to focus on more basic things like "did the service<br>
start" and "is the API accessible". Simple CRUD operations on a subset of<br>
the APIs are totally fine for this IMO, whether via pingtest or some other<br>
means.<br>
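<div><br></div><div>A rough sketch of what "did the service start" / "is the API accessible" can look like in practice; the endpoints below are placeholder addresses with the default ports, and in a real job they would come from the overcloud's service catalog:<br></div><div><pre>
# Cheap availability check: the unauthenticated version documents are enough
# to tell whether each API is up and answering. Placeholder URLs only.
import requests

ENDPOINTS = {
    'keystone': 'http://192.0.2.10:5000/',
    'nova':     'http://192.0.2.10:8774/',
    'glance':   'http://192.0.2.10:9292/',
    'neutron':  'http://192.0.2.10:9696/',
}

failed = []
for name, url in ENDPOINTS.items():
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()      # only 4xx/5xx count as failures
    except requests.RequestException as exc:
        failed.append((name, str(exc)))

if failed:
    raise SystemExit('APIs not reachable: %s' % failed)
print('all APIs answered')
</pre></div>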
<br></blockquote><div><br></div></span><div>Right now we do have a periodic job running full tempest, with a few skips, and because of the lack of tempest tests in the patches, it has been pretty hard to keep it stable enough to get a 100% pass; of course, the installation also very often fails (like in the last five days).</div><div>For example, [1] is the latest periodic-job run for which we have tempest results, and there are 114 failures that were caused by some new code/change, and I have no idea which one it was. Just looking at the failures, I can see that the smoke tests plus the minimum basic scenario tests would have caught them, and the developer could have fixed it and made me happy :)</div><div>Now I have to spend several hours installing and debugging each one of those tests to identify where/why it fails.</div><div>Before this run we got a 100% pass, but unfortunately I don't have the results anymore; they were already removed from <a href="http://logs.openstack.org" target="_blank">logs.openstack.org</a>.</div><span class="gmail-"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Steve<br>
<div class="gmail-m_5109496432791245671gmail-HOEnZb"><div class="gmail-m_5109496432791245671gmail-h5"><br>
</div></div></blockquote></span></div><br></div><div class="gmail_extra">[1] <a href="http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha-tempest-oooq/0072651/logs/oooq/stackviz/#/stdin" target="_blank">http://logs.openstack.org/<wbr>periodic/periodic-tripleo-ci-<wbr>centos-7-ovb-nonha-tempest-<wbr>oooq/0072651/logs/oooq/<wbr>stackviz/#/stdin</a><br></div></div>
<br></blockquote></div><br></div></div>