[openstack-dev] Update on Zuul v3 Migration - and what to do about issues

milanisko k vetrisko at gmail.com
Wed Oct 11 07:21:02 UTC 2017


Without any deeper investigation (yet), I'd like to share this patch status:
Zuul seems to have voted -1 even though all jobs were green...


2017-10-11 7:31 GMT+02:00 Rikimaru Honjo <honjo.rikimaru at po.ntt-tx.co.jp>:

> Hello,
> I'm trying to install and configure nodepool for Zuul v3 in my CI
> environment. I'm using the feature/zuulv3 branch (ver. 0.4.1.dev430).
> I referred to the nodepool documentation and the infra/project-config tree,
> and I have some questions about this version of nodepool.
> 1) Is there any information about the configuration differences between
> nodepool for Zuul v2 and v3?
>   Or can I configure feature/zuulv3 basically the same way as the older version?
> 2) The suggestion below is written in README.rst, but that file is not
> in the infra/system-config tree now. Where is the file?
> Create or adapt a nodepool yaml file. You can adapt an infra/system-config
>> one, or fake.yaml as desired. Note that fake.yaml's settings won't Just
>> Work - consult ./modules/openstack_project/templates/nodepool/nodepool.yaml.erb
>> in the infra/system-config tree to see a production config.
> 3) Can I use the "images" key in "providers"?
>   I used "images" in nodepool ver. 0.3.1, but the sample file below doesn't
> use that key.
>   https://github.com/openstack-infra/nodepool/blob/feature/zuulv3/tools/fake.yaml
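For reference, in the feature/zuulv3 nodepool configuration the image definitions appear to have moved out of the provider section: disk images are declared at the top level under "diskimages", and provider pools reference them per label. A minimal sketch, modelled on the fake.yaml layout (all names are placeholders, not from this thread):

```yaml
# Hypothetical nodepool.yaml sketch for the feature/zuulv3 branch.
# Images are defined top-level under 'diskimages' instead of an
# 'images' key inside each provider, as in nodepool 0.3.x.
labels:
  - name: example-label
    min-ready: 1

providers:
  - name: example-provider
    cloud: example-cloud
    diskimages:
      - name: example-image
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: example-label
            diskimage: example-image
            min-ram: 8192

diskimages:
  - name: example-image
    elements:
      - ubuntu-minimal
      - vm
```

Consult the fake.yaml linked above for the authoritative layout on that branch.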
> Best regards,
> On 2017/09/29 23:58, Monty Taylor wrote:
>> Hey everybody!
>> tl;dr - If you're having issues with your jobs, check the FAQ, this email
>> and followups on this thread for mentions of them. If it's an issue with
>> your job and you can spot it (bad config) just submit a patch with topic
>> 'zuulv3'. If it's bigger/weirder/you don't know - we'd like to ask that you
>> send a follow up email to this thread so that we can ensure we've got them
>> all and so that others can see it too.
>> ** Zuul v3 Migration Status **
>> If you haven't noticed the Zuul v3 migration - awesome, that means it's
>> working perfectly for you.
>> If you have - sorry for the disruption. It turns out we have a REALLY
>> complicated array of job content you've all created. Hopefully the pain of
>> the moment will be offset by the ability for you all to take direct
>> ownership of your awesome content... so bear with us; your patience is
>> appreciated.
>> If you find yourself with some extra time on your hands while you wait on
>> something, you may find it helpful to read:
>>    https://docs.openstack.org/infra/manual/zuulv3.html
>> We're adding content to it as issues arise. Unfortunately, one of the
>> issues is that the infra manual publication job stopped working.
>> While the infra manual publication is being fixed, we're collecting FAQ
>> content for it in an etherpad:
>>    https://etherpad.openstack.org/p/zuulv3-migration-faq
>> If you have a job issue, check it first to see if we've got an entry for
>> it. Once manual publication is fixed, we'll update the etherpad to point to
>> the FAQ section of the manual.
>> ** Global Issues **
>> There are a number of outstanding issues that are being worked. As of
>> right now, there are a few major/systemic ones that we're looking in to
>> that are worth noting:
>> * Zuul Stalls
>> If you say to yourself "zuul doesn't seem to be doing anything, did I do
>> something wrong?" - you probably didn't: jeblair and Shrews are currently
>> tracking down an issue with intermittent connections in the backend
>> plumbing.
>> When it happens it's an across-the-board issue, so fixing it is our
>> number one priority.
>> * Incorrect node type
>> We've got reports of things running on trusty that should be running on
>> xenial. The job definitions look correct, so this is also under
>> investigation.
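In Zuul v3, a job's node type comes from the nodeset it references, so a job landing on trusty instead of xenial is worth cross-checking against that nodeset. A hedged sketch of the relevant job configuration (names are illustrative, not from the actual repos):

```yaml
# Hypothetical .zuul.yaml fragment: the nodeset pins the node label,
# and the job selects the nodeset by name.
- nodeset:
    name: single-xenial-node
    nodes:
      - name: primary
        label: ubuntu-xenial

- job:
    name: example-job
    nodeset: single-xenial-node
```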
>> * Multinode jobs having POST FAILURE
>> There is a bug in log collection: it tries to collect from all nodes,
>> while the old jobs were designed to collect only from the 'primary' node.
>> Patches are up and this should be fixed soon.
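The fix described above amounts to restricting log collection to the primary node. As a rough sketch of what such an Ansible post-run task could look like (a hypothetical playbook, not the actual patch; zuul.executor.log_root is the executor-side log directory Zuul exposes to jobs):

```yaml
# Hypothetical post-run playbook: pull logs only from the 'primary'
# inventory group, mirroring the old single-node collection behaviour.
- hosts: primary
  tasks:
    - name: Collect job logs from the primary node only
      synchronize:
        src: /var/log/job-output/
        dest: "{{ zuul.executor.log_root }}"
        mode: pull
```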
>> * Branch Exclusions being ignored
>> This has been reported and its cause is currently unknown.
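For context, branch exclusions in Zuul v3 are typically expressed as a regex branch matcher on the job; one common pattern is a negative lookahead (a sketch only, the job name is a placeholder):

```yaml
# Hypothetical job fragment: run on every branch except stable/*.
- job:
    name: example-job
    branches: ^(?!stable/).*$
```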
>> Thank you all again for your patience! This is a giant rollout with a
>> bunch of changes in it, so we really do appreciate everyone's understanding
>> as we work through it all.
>> Monty
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> Rikimaru Honjo
> E-mail:honjo.rikimaru at po.ntt-tx.co.jp
