[openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

Joe Gordon joe.gordon0 at gmail.com
Fri Oct 11 02:39:37 UTC 2013


On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann <mriedem at us.ibm.com> wrote:

>
> Dan Smith <dms at danplanet.com> wrote on 10/10/2013 08:26:14 PM:
>
> >
> > > 4. What is the max amount of time for us to report test results?  Dan
> > > didn't seem to think 48 hours would fly. :)
> >
> > Honestly, I think that 12 hours during peak times is the upper limit of
> > what could be considered useful. If it's longer than that, many patches
> > could go into the tree without a vote, which defeats the point.
>
> Yeah, I was just joking about the 48 hour thing; 12 hours seems excessive,
> but I guess that has happened when things are super backed up with gate
> issues and rechecks.
>
> Right now things take about 4 hours, with Tempest being around 1.5 hours
> of that. The rest of the time is setup and install, which includes heat
> and ceilometer. So I guess that raises another question: if we're really
> setting this up right now because of nova, do we need to have heat and
> ceilometer installed and configured in the initial delivery of this if
> we're not going to run tempest tests against them (we don't right now)?
>


In general, the faster the better, and if things get slow enough that we
have to wait for the powervm CI to report back, I think it's reasonable to
go ahead and approve things without hearing back. In reality, if you can
report back in under 12 hours this will rarely happen (I think).


>
> I think some of the slow setup time is related to DB2 and how the
> migrations perform with it, but the overall time is not considerably
> different from when we were running this with MySQL, so I'm reluctant
> to blame it all on DB2. I think some of our topology could have
> something to do with it too, since the IVM hypervisor is running on a
> separate system and we are gated on how it's performing at any given
> time. I think that will be our biggest challenge for the scale issues
> with community CI.
>
> >
> > > 5. What are the minimum tests that need to run (excluding APIs that the
> > > powervm driver doesn't currently support)?
> > >         - smoke/gate/negative/whitebox/scenario/cli?  Right now we have
> > > 1152 tempest tests running, those are only within api/scenario/cli and
> > > we don't run everything.
> >
> > I think that "a full run of tempest" should be required. That said, if
> > there are things that the driver legitimately doesn't support, it makes
> > sense to exclude those from the tempest run, otherwise it's not useful.
>

++
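
For what it's worth, keeping the exclusions in a plain skip file that gets
filtered out of the full test list would make them easy to review alongside
the results. A minimal sketch (the file name, the regex-per-line format, and
the idea of filtering a test list on stdin are all made up here for
illustration; nose.cfg excludes would work just as well):

    #!/usr/bin/env python
    # Sketch: drop driver-unsupported tests from a tempest test list.
    # The skip file format (one regex per line, '#' comments) is an
    # assumption for illustration, not anything tempest/nose mandates.
    import re
    import sys

    def load_skips(path):
        with open(path) as f:
            return [re.compile(line.strip())
                    for line in f
                    if line.strip() and not line.startswith('#')]

    def main():
        skips = load_skips('powervm-skips.txt')  # hypothetical file name
        for test_id in sys.stdin:                # full test list on stdin
            test_id = test_id.strip()
            if not any(s.search(test_id) for s in skips):
                print(test_id)                   # tests the driver does support

    if __name__ == '__main__':
        main()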



>  >
> > I think you should publish the tempest config (or config script, or
> > patch, or whatever) that you're using so that we can see what it means
> > in terms of the coverage you're providing.
>
> Just to clarify, do you mean publish what we are using now or publish
> once it's all working?  I can certainly attach our nose.cfg and
> latest x-unit results xml file.
>

We should publish all logs, similar to what we do for upstream:
http://logs.openstack.org/96/48196/8/gate/gate-tempest-devstack-vm-full/70ae562/
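
Even a one-line summary pulled out of the xunit XML next to the raw logs
would help people skim results. Something like this, assuming nose's
--with-xunit output (counts live as attributes on the <testsuite> root, and
the file name below is just an example of what a run might produce):

    # Sketch: summarize a nose xunit results file.
    import xml.etree.ElementTree as ET

    suite = ET.parse('nosetests.xml').getroot()   # path is an example
    print('%s tests, %s failures, %s errors, %s skipped' % (
        suite.get('tests'), suite.get('failures'),
        suite.get('errors'), suite.get('skip')))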



>
> >
> > > 6. Network service? We're running with openvswitch 1.10 today so we
> > > probably want to continue with that if possible.
> >
> > Hmm, so that means neutron? AFAIK, not much of tempest runs with
> > Nova/Neutron.
> >
> > I kinda think that since nova-network is our default right now (for
> > better or worse) that the run should include that mode, especially if
> > using neutron excludes a large portion of the tests.
> >
> > I think you said you're actually running a bunch of tempest right now,
> > which conflicts with my understanding of neutron workiness. Can you
> > clarify?
>
> Correct, we're running with neutron using the ovs plugin. We basically have
> the same issues that the neutron gate jobs have, which are related to
> concurrency and tenant isolation (we're doing the same as devstack with
> neutron in that we don't run tempest with tenant isolation). We are running
> most of the nova and neutron API tests, though (we don't have all of the
> neutron-dependent scenario tests working yet, probably more due to
> incompetence in setting up neutron than anything else).
>
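
(For reference, the knobs being talked about here boil down to a couple of
tempest.conf settings. The sketch below writes them with ConfigParser; the
option names, allow_tenant_isolation under [compute] and neutron under
[service_available], are from memory of the Havana-era tempest, so
double-check them against whatever tree you're actually running.)

    # Sketch: the two tempest.conf settings under discussion. Option
    # names are era-dependent assumptions; verify against your tempest.
    try:
        import configparser                  # Python 3
    except ImportError:
        import ConfigParser as configparser  # Python 2

    conf = configparser.RawConfigParser()
    conf.read('tempest.conf')

    for section in ('compute', 'service_available'):
        if not conf.has_section(section):
            conf.add_section(section)

    conf.set('compute', 'allow_tenant_isolation', 'False')  # no isolation
    conf.set('service_available', 'neutron', 'True')        # use neutron

    with open('tempest.conf', 'w') as f:
        conf.write(f)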
> >
> > > 7. Cinder backend? We're running with the storwize driver, but what do
> > > we do about the remote v7000?
> >
> > Is there any reason not to just run with a local LVM setup like we do in
> > the real gate? I mean, additional coverage for the v7000 driver is
> > great, but if it breaks and causes you to not have any coverage at all,
> > that seems, like, bad to me :)
>
> Yeah, I think we'd just run with a local LVM setup; that's what we do for
> x86_64 and s390x tempest runs. For whatever reason we thought we'd do
> storwize for our ppc64 runs, probably just to have a matrix of coverage.
>
> >
> > > Again, just getting some thoughts out there to help us figure out our
> > > goals for this, especially around 4 and 5.
> >
> > Yeah, thanks for starting this discussion!
> >
> > --Dan
> >

