On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann <mriedem@us.ibm.com> wrote:
> Dan Smith <dms@danplanet.com> wrote on 10/10/2013 08:26:14 PM:
>
> > From: Dan Smith <dms@danplanet.com>
> > To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
> > Date: 10/10/2013 08:31 PM
> > Subject: Re: [openstack-dev] [nova][powervm] my notes from the
> > meeting on powervm CI
> >
> > > 4. What is the max amount of time for us to report test results? Dan
> > > didn't seem to think 48 hours would fly. :)
> >
> > Honestly, I think that 12 hours during peak times is the upper limit of
> > what could be considered useful. If it's longer than that, many patches
> > could go into the tree without a vote, which defeats the point.
>
> Yeah, I was just joking about the 48-hour thing; 12 hours seems excessive,
> but I guess that has happened when things are super backed up with gate
> issues and rechecks.
>
> Right now things take about 4 hours, with Tempest being around 1.5 hours
> of that. The rest of the time is setup and install, which includes heat
> and ceilometer. So I guess that raises another question: if we're really
> setting this up right now because of nova, do we need to have heat and
> ceilometer installed and configured in the initial delivery of this if
> we're not going to run tempest tests against them (we don't right now)?
In general, the faster the better, and if things get slow enough that we
have to wait for powervm CI to report back, I think it's reasonable to go
ahead and approve things without hearing back. In reality, if you can
report back in under 12 hours, this will rarely happen (I think).
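On the heat/ceilometer question: if nothing is running tempest against
them, leaving them out of the initial delivery should also claw back some
of that setup time. A minimal sketch of what that could look like,
assuming a stock devstack-based install (service names vary a bit by
release):

    # localrc: bring up only what the nova-focused runs exercise
    disable_service heat h-eng h-api h-api-cfn h-api-cw
    disable_service ceilometer-acompute ceilometer-acentral
    disable_service ceilometer-collector ceilometer-api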
> I think some aspect of the slow setup time is related to DB2 and how
> the migrations perform with some of that, but the overall time is not
> considerably different from when we were running this with MySQL, so
> I'm reluctant to blame it all on DB2. I think some of our topology
> could have something to do with it too, since the IVM hypervisor is
> running on a separate system and we are gated on how it's performing
> at any given time. I think that will be our biggest challenge for the
> scale issues with community CI.
>
> > > 5. What are the minimum tests that need to run (excluding APIs that the
> > > powervm driver doesn't currently support)?
> > >     - smoke/gate/negative/whitebox/scenario/cli? Right now we have
> > > 1152 tempest tests running; those are only within api/scenario/cli and
> > > we don't run everything.
> >
> > I think that "a full run of tempest" should be required. That said, if
> > there are things that the driver legitimately doesn't support, it makes
> > sense to exclude those from the tempest run, otherwise it's not useful.

++
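On excluding unsupported APIs: since the runs are nose-based, one way to
do it (a sketch; these test names are hypothetical placeholders, not a
vetted list) is an exclusion regex in nose.cfg:

    [nosetests]
    # skip tests for APIs the powervm driver doesn't implement
    exclude=(test_suspend_server|test_shelve_instance)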
>
> > I think you should publish the tempest config (or config script, or
> > patch, or whatever) that you're using so that we can see what it means
> > in terms of the coverage you're providing.
>
> Just to clarify, do you mean publish what we are using now, or publish
> once it's all working? I can certainly attach our nose.cfg and
> latest x-unit results xml file.
We should publish all logs, similar to what we do for upstream
(http://logs.openstack.org/96/48196/8/gate/gate-tempest-devstack-vm-full/70ae562/).
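For reference, a full upstream run publishes roughly this layout (from
memory; exact file names vary by job):

    console.html        # complete console output of the job
    logs/               # per-service screen logs (n-api, n-cpu, q-svc, ...)
    testr_results.html  # per-test tempest pass/fail results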
>
> > > 6. Network service? We're running with openvswitch 1.10 today, so we
> > > probably want to continue with that if possible.
> >
> > Hmm, so that means neutron? AFAIK, not much of tempest runs with
> > Nova/Neutron.
> >
> > I kinda think that since nova-network is our default right now (for
> > better or worse) that the run should include that mode, especially if
> > using neutron excludes a large portion of the tests.
> >
> > I think you said you're actually running a bunch of tempest right now,
> > which conflicts with my understanding of neutron workiness. Can you
> > clarify?
>
> Correct, we're running with neutron using the ovs plugin. We basically
> have the same issues that the neutron gate jobs have, which are related
> to concurrency and tenant isolation (we're doing the same as devstack
> with neutron in that we don't run tempest with tenant isolation). We
> are running most of the nova and most of the neutron API tests, though
> (we don't have all of the neutron-dependent scenario tests working yet,
> probably more due to incompetence in setting up neutron than anything
> else).
>
> > > 7. Cinder backend? We're running with the storwize driver, but what
> > > do we do about the remote v7000?
> >
> > Is there any reason not to just run with a local LVM setup like we do in
> > the real gate? I mean, additional coverage for the v7000 driver is
> > great, but if it breaks and causes you to not have any coverage at all,
> > that seems, like, bad to me :)
>
> Yeah, I think we'd just run with a local LVM setup; that's what we do for
> x86_64 and s390x tempest runs. For whatever reason we thought we'd do
> storwize for our ppc64 runs, probably just to have a matrix of coverage.
>
> > > Again, just getting some thoughts out there to help us figure out our
> > > goals for this, especially around 4 and 5.
> >
> > Yeah, thanks for starting this discussion!
> >
> > --Dan
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev