[openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

Emilien Macchi emilien at redhat.com
Tue May 23 15:58:35 UTC 2017


On Mon, May 22, 2017 at 3:43 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +0000:
>> On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
>> > On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley <fungi at yuggoth.org> wrote:
>> > > On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
>> > > [...]
>> > >> We ran into this for the puppet-module-build check job, so I created
>> > >> a puppet-agent-install builder.  Perhaps that builder needs to be
>> > >> added to this job as well
>> > > [...]
>> > >
>> > > Problem here being these repos share the common tarball jobs used
>> > > for generating python sdists, with a little custom logic baked into
>> > > run-tarball.sh[*] for detecting and adjusting when the repo is for a
>> > > Puppet module. I think this highlights the need to create custom
>> > > tarball jobs for Puppet modules, preferably by abstracting this
>> > > custom logic into a new JJB builder.
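
(Side note for anyone following along who doesn't write JJB macros
every day: a minimal, untested sketch of a builder in the spirit of
the puppet-agent-install one Alex mentions is below. The builder name
comes from Alex's message; the shell body, package name and distro
handling are purely illustrative, not the actual project-config code:

- builder:
    name: puppet-agent-install
    builders:
      - shell: |
          #!/bin/bash -xe
          # Illustrative sketch: make a puppet executable available
          # on the build node if the image doesn't already have one.
          if ! command -v puppet >/dev/null 2>&1; then
              sudo apt-get update
              sudo apt-get install -y puppet
          fi

JJB expands the macro inline wherever a job lists it under its
builders, so any job that needs puppet on the node can reuse it,
which is the reuse being discussed here.)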
>> >
>> > I assume you mean it would be a problem if we added this builder to
>> > the job and it failed for some reason, thus impacting the Python jobs?
>>
>> My concern is more that it increases complexity by further embedding
>> package selection and installation choices into that already complex
>> script. We'd (Infra team) like to get more of the logic out of that
>> random pile of shell scripts and directly into job definitions
>> instead. For one thing, those scripts are only updated when we
>> regenerate our nodepool images (at best once a day), which leads to
>> significant job inconsistencies if we have image upload failures in
>> some providers but not others. In contrast, job configurations are
>> updated nearly instantly (and can even be self-tested in many cases
>> once we're on Zuul v3).
>>
>> > As far as adding the builder to the job, that's not really a
>> > problem and wouldn't change those jobs, as they don't reference the
>> > installed puppet executable.
>>
>> It does risk further destabilizing the generic tarball jobs by
>> introducing more outside dependencies which will only be used by a
>> scant handful of the projects running them.
>>
>> > The problem I have with putting this in the .sh is that it becomes
>> > yet another place where we're doing this package installation (we
>> > already do it in Puppet OpenStack, in
>> > puppet-openstack-integration). I originally proposed the builder
>> > because it could be reused whenever a job requires Puppet to be
>> > available, i.e. this case. I'd rather not duplicate what the
>> > builder does in a shell script inside the job; that seems like
>> > making this more complicated than it needs to be, given that we
>> > have to manage it in the long term.
>>
>> Agreed. I'm saying a builder which installs an unnecessary Puppet
>> toolchain for the generic tarball jobs is not something we'd want,
>> but it would be pretty trivial to make Puppet-specific tarball jobs
>> which do use that builder (with the added benefit that
>> Puppet-specific logic can be moved _out_ of run-tarball.sh and into
>> your job configuration instead at that point).
>
> That approach makes sense.
>
> When the new job template is set up, let me know so I can add it to the
> release repo validation as a known way to release things.

https://review.openstack.org/467294

Any feedback is welcome,

Thanks!

> Doug



-- 
Emilien Macchi


