[openstack-dev] [tripleo] tripleo upstream gate outage, was: -> gate jobs impacted RAX yum mirror
whayutin at redhat.com
Mon May 14 17:11:14 UTC 2018
On Mon, May 14, 2018 at 12:08 PM Clark Boylan <cboylan at sapwetik.org> wrote:
> On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote:
> > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley <fungi at yuggoth.org>
> > > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
> > > [...]
> > >
> > > This _doesn't_ sound to me like a problem with how we've designed
> > > our infrastructure, unless there are additional details you're
> > > omitting.
> > So the only thing out of our control is the package set on the base
> > nodepool image.
> > If that suddenly gets updated with too many packages, then we have to
> > scramble to ensure the images and containers are also updated.
> > If there is a breaking change in the nodepool image for example [a], we
> > have to react to and fix that as well.
> Aren't the container images independent of the hosting platform (eg what
> infra hosts)? I'm not sure I understand why the host platform updating
> implies all the container images must also be updated.
You make a fine point here. As with anything, there are some bits that are
still being worked on. At this moment it's my understanding that pacemaker
and possibly a few other components are not 100% containerized atm. I'm not
an expert in the subject and my understanding may not be correct. Until you
are 100% containerized there may still be some dependencies on the base
image and an impact from changes to it.
> > > It sounds like a problem with how the jobs are designed
> > > and expectations around distros slowly trickling package updates
> > > into the series without occasional larger bursts of package deltas.
> > > I'd like to understand more about why you upgrade packages inside
> > > your externally-produced container images at job runtime at all,
> > > rather than relying on the package versions baked into them.
> > We do that to ensure the gerrit review itself and its dependencies are
> > built via RPM and injected into the build.
> > If we did not do this, the job would not be testing the change at all.
> > This is a result of being a package-based deployment, for better or worse.
> You'd only need to do that for the change in review, not the entire system
Correct, there is no intention of updating the entire distribution at run
time; the intent is to do as much of the updating as possible in the jobs
that build the containers and images.
Only the RPM built from the Zuul change should be included in the update;
however, some Zuul changes require a CentOS base package that was not
previously installed in the container, e.g. a new Python dependency
introduced by a Zuul change. Previously we had not enabled any CentOS repos
during the container update, but found that was not viable 100% of the time.
We have a change to further limit the scope of the update, which should
help, especially when facing a minor version update.
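To illustrate the intended scope (a minimal sketch; the helper name and repo
layout below are hypothetical, not the actual TripleO tooling): the update
set is just the intersection of what the gating repo rebuilt for the change
and what the container already has installed, rather than the full
distribution.

```shell
# Hypothetical helper: compute the packages a container update should
# touch -- only those rebuilt from the Zuul change that are already
# installed in the container, not the whole package set.
gating_update_set() {
    # $1: newline-separated packages available in the gating repo
    # $2: newline-separated packages installed in the container
    printf '%s\n' "$1" | sort > "/tmp/gating.$$"
    printf '%s\n' "$2" | sort > "/tmp/installed.$$"
    # comm -12 prints only the lines common to both sorted lists
    comm -12 "/tmp/gating.$$" "/tmp/installed.$$"
    rm -f "/tmp/gating.$$" "/tmp/installed.$$"
}
```

In the real jobs, a set like this would then be handed to yum, with the
CentOS base repos enabled only so that brand-new dependencies of those
packages can still be pulled in.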
> > > Our automation doesn't know that there's a difference between
> > > packages which were part of CentOS 7.4 and 7.5 any more than it
> > > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > > Even if we somehow managed to pause our CentOS image updates
> > > immediately prior to 7.5, jobs would still try to upgrade those
> > > 7.4-based images to the 7.5 packages in our mirror, right?
> > >
> > Understood, I suspect this will become a more widespread issue as
> > more projects start to use containers (not sure). It's my understanding
> > that there are some mechanisms in place to pin packages in the CentOS
> > nodepool image, so there have been some thoughts generally in the area
> > of this issue.
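For reference, one common pinning mechanism on CentOS is the yum
versionlock plugin; its lock list (the entries below are illustrative only,
not what infra actually ships) keeps an image built against 7.4 on 7.4
package versions even after 7.5 lands in the mirrors:

```
# /etc/yum/pluginconf.d/versionlock.list -- illustrative entries;
# format is epoch:name-version-release.arch, one package per line
0:kernel-3.10.0-693.el7.*
0:systemd-219-42.el7.*
```

With the plugin installed, a plain `yum update` skips anything matching
these patterns.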
> Again, I think we need to understand why containers would make this worse
> not better. Seems like the big feature everyone talks about when it comes
> to containers is isolating packaging whether that be python packages so
> that nova and glance can use a different version of oslo or cohabitating
> software that would otherwise conflict. Why do the packages on the host
> platform so strongly impact your container package lists?
I'll let others comment on that; however, my thought is you don't move from
A -> Z in one step, and containers do not make everything easier
immediately. Like most things, it takes a little time.
> > TripleO may be the exception to the rule here and that is fine. I'm more
> > interested in exploring the possibilities of delivering updates in a
> > staged fashion. I don't have insight into what the possibilities are, or
> > whether other projects have similar issues or requests. Perhaps if the
> > TripleO project could share the details of our job workflow with the
> > community, this would make more sense.
> > I appreciate the time, effort and thoughts you have shared in the thread.
> > > --
> > > Jeremy Stanley
> > >
> > [a] https://bugs.launchpad.net/tripleo/+bug/1770298
> I think understanding the questions above may be the important aspect of
> understanding what the underlying issue is here and how we might address it.
Thanks Clark, let me know if I did not get everything on your list there.
Thanks again for your time.