[openstack-dev] [tripleo] [puppet] A week in CI

Emilien Macchi emilien at redhat.com
Mon Dec 19 23:28:57 UTC 2016


On Sat, Dec 17, 2016 at 4:30 PM, Emilien Macchi <emilien at redhat.com> wrote:
> This week was outstanding for CI, and I thought it useful to share
> what happened in the TripleO and Puppet CIs, and where we are now.
> TL;DR:
> - Puppet OpenStack: full green on ocata/newton/mitaka
> - TripleO CI: full green except ovb-nonha (introspection is failing)
>
> Detailed version:
>
> Closed issues:
>
> # nodepool slaves failing to boot
> https://bugs.launchpad.net/tripleo/+bug/1650503
> The issue has been solved in nodepool, but a dracut bug in the
> CentOS 7 image was causing all nodes to fail at boot.
> Kudos to infra for their responsiveness as usual.
>
> # postci timeouts on ovb-ha and ovb-updates
> https://bugs.launchpad.net/tripleo/+bug/1649742
> We enabled SSL on the undercloud to resolve this problem:
> https://review.openstack.org/#/c/411514
> Thanks Ben for helping on this one!
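
For reference, enabling SSL on the undercloud mostly comes down to a
couple of undercloud.conf options; a rough sketch (the actual CI
change is in the review above, so double-check against it):

  [DEFAULT]
  generate_service_certificate = true
  certificate_generation_ca = local

followed by re-running "openstack undercloud install".
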
>
> # Undercloud running out of disk space in CI
> https://bugs.launchpad.net/tripleo/+bug/1649615
> Kudos to Ben, Sagi, and Derek for their precious time; hopefully we
> won't run into this problem again, thanks to the improvements we
> made in tripleo-ci.
>
> # rh1 compute nodes not spawning vms correctly
> https://bugs.launchpad.net/tripleo/+bug/1649252
> Again, Sagi, Derek, and Ben spent time on the infra to clean things
> up, and we could spawn VMs again.
>
> # CentOS 7.3 / qemu-kvm 2.6
> https://www.redhat.com/archives/rdo-list/2016-December/msg00028.html
> Fixed in TripleO and Puppet OpenStack, thanks to David and Alfredo (+
> reviewers).
> Alfredo, your help on Puppet OpenStack CI was outstanding this week!
>
>
>
> Bug still in progress:
>
> # CI: nonha jobs fail in introspection
> https://bugs.launchpad.net/tripleo/+bug/1609688/comments/8
> It sounds like introspection has been broken since December 16th;
> we haven't investigated this one much yet. Any help from Mistral /
> Ironic folks is highly welcome.
> Could it be related to https://bugs.launchpad.net/tripleo/+bug/1649350 ?

So we had to revert a patch in puppet-mistral that broke us:
https://review.openstack.org/#/c/412602/

It will require some work to figure out how to configure Mistral and
Ironic to work together correctly when authtoken is configured.
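
For context, the knobs in question are the standard keystonemiddleware
options in mistral.conf, along these lines (values illustrative):

  [keystone_authtoken]
  auth_type = password
  auth_uri = http://192.0.2.1:5000
  auth_url = http://192.0.2.1:35357
  username = mistral
  password = secret
  project_name = service

The open question is what these should look like for the Mistral /
Ironic interaction to keep working.
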
To avoid this problem in the future, we added the ovb-nonha job to
the puppet-mistral gate: https://review.openstack.org/#/c/412601/
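
(For the curious, the gate addition is a small change in
openstack-infra/project-config's zuul/layout.yaml, schematically:

  - name: openstack/puppet-mistral
    ...
    gate:
      - gate-tripleo-ci-centos-7-ovb-nonha

with the job name here only illustrative; the review above has the
real change.)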

> # Ocata requires additional cells v2 setup
> https://bugs.launchpad.net/tripleo/+bug/1649341
> Alex is still working on this one, but we're very close to closing
> it. It will help us promote packages in TripleO CI.

Note: this is not a blocker, just some ongoing work for Ocata.
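
For anyone curious, the additional setup is essentially the new
nova-manage cells v2 commands; a rough sketch (the exact sequence may
differ from what TripleO ends up doing, see the bug):

  nova-manage api_db sync
  nova-manage cell_v2 map_cell0
  nova-manage cell_v2 create_cell --name=cell1 --verbose
  nova-manage db sync
  nova-manage cell_v2 discover_hosts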

>
> CI status now:
> - all jobs in Puppet OpenStack CI should be green and stable now;
> please report any new problems to us on #puppet-openstack (feel
> free to ping me).
> - all TripleO scenarios, as well as the OVB HA jobs, are green.
> - TripleO OVB non-ha is red, because of the introspection problem.
> - TripleO OVB jobs for Mitaka are red, not investigated yet (I'll
> start on Monday, but feel free to help).

That's the next step: we need to find out why the Mitaka jobs are
failing (I saw some pingtest issues when creating a VM from a volume).
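
The failing step is essentially boot-from-volume, i.e. something along
these lines (illustrative; the pingtest actually drives this through a
Heat template):

  openstack volume create --image cirros --size 1 pingtest_volume
  openstack server create --volume pingtest_volume --flavor m1.tiny \
    pingtest_server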

Also, Ben opened a bug for multinode timeouts:
https://bugs.launchpad.net/tripleo/+bug/1651267

Any help is welcome on the remaining work!
Thanks,

> Some links you need to bookmark:
> http://tripleo.org/cistatus.html
> http://status-tripleoci.rhcloud.com/
> http://dashboards.rdoproject.org/rdo-dev
>
> I probably missed some details, but here's what we've been working on
> this week. Feel free to add more details on the bugs, and give any
> feedback.
> Again, I would like to thank all the people involved in debugging,
> patching, and reviewing fixes to unblock CI this week; it was not
> easy, but we made it as a team!
>
> Enjoy the weekend,
> --
> Emilien Macchi



-- 
Emilien Macchi


