[diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset
Mark Goddard
mark at stackhpc.com
Thu Oct 8 08:08:15 UTC 2020
On Thu, 8 Oct 2020 at 05:20, Ian Wienand <iwienand at redhat.com> wrote:
>
> On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote:
> > This is possible using utilities (e.g. yumdownloader) included in packages
> > still present in the Ubuntu repositories, such as yum-utils and rpm.
> > Starting with Ubuntu focal, the yum-utils package has been removed from the
> > repositories because Python 2.x is no longer supported, and there is no
> > plan to provide such support, at least to my knowledge.
>
> Yes, this is a problem for the "-minimal" elements that build a
> non-native chroot environment. Similar issues have occurred with SUSE
> and the zypper package manager not being available on the build host.
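For anyone not familiar with the -minimal elements, the host-side dependency
boils down to something like the following. This is only a rough Python
sketch of the idea (the real elements are shell scripts, and the package and
path names here are illustrative); the point is that the RPM downloader has
to exist on the Ubuntu build host:

    # Rough illustration (not dib's actual code, which is shell): the
    # -minimal elements need an RPM downloader on the Ubuntu build host to
    # seed the initial chroot, and yum-utils was the package providing it.
    import shutil
    import subprocess

    def download_seed_rpms(packages, destdir):
        """Fetch RPMs on the build host so the chroot can be bootstrapped."""
        if shutil.which("yumdownloader") is None:
            # On focal there is no yum-utils package to provide this tool.
            raise RuntimeError("yumdownloader not found on the build host")
        # Repository configuration is assumed to be set up already, as the
        # real elements do before calling the downloader.
        subprocess.run(
            ["yumdownloader", "--destdir", destdir] + list(packages),
            check=True,
        )

    # download_seed_rpms(["centos-release", "dnf"], "/tmp/seed-rpms")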
>
> The options I can see:
>
> - use the native build-host; i.e. build on centos as you described
>
> - the non-minimal images, e.g. "centos" and "suse", might work under
> the current circumstances. They use the upstream ISO to create the
> initial chroot. These are generally bigger, and we've had stability
> issues in the past with the upstream images changing suddenly in
> various ways that were a maintenance headache.
>
> - use a container for dib. DIB doesn't have a specific container, but
> is part of the nodepool-builder container [1]. This is ultimately
> based on Debian buster [2] which has enough support to build
> everything ... for now. As noted this doesn't really solve the
> problem indefinitely, but certainly buys some time if you run dib
> out of that container (we could, of course, make a separate dib
> container; but it would be basically the same just without nodepool
> in it). This is what OpenDev production is using now, and all the
> CI is ultimately based on this container environment.
If this could be wrapped up in a DIB-like command (something like the
sketch below), this seems like the most flexible option to me.
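As a very rough sketch of what such a wrapper could look like, assuming
docker on the host and the image from [1]; the --privileged flag, the
bind-mounted output directory and the example element list are assumptions
rather than anything OpenDev actually ships:

    # Hypothetical "dib-in-container" wrapper: run disk-image-create inside
    # the zuul/nodepool-builder image instead of on the Ubuntu focal host.
    import subprocess
    import sys
    from pathlib import Path

    def containerised_dib(output_dir, dib_args):
        out = Path(output_dir).resolve()
        out.mkdir(parents=True, exist_ok=True)
        cmd = [
            "docker", "run", "--rm", "--privileged",
            # Assumption: a privileged container plus a bind-mounted output
            # directory is enough for the elements being built.
            "-v", "%s:/output" % out,
            "zuul/nodepool-builder:latest",
            "disk-image-create", "-o", "/output/image",
        ] + list(dib_args)
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        # e.g. python3 dib_in_container.py ./images centos-minimal vm
        sys.exit(containerised_dib(sys.argv[1], sys.argv[2:]))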
>
> - As clarkb has mentioned, probably the most promising alternative is
> to use the upstream container images as the basis for the initial
> chroot environments. jeblair has done most of this work with [3].
> I'm fiddling with it to merge to master and see what's up ... I feel
> like maybe there were bootloader issues, although the basic
> extraction was working. This will allow the effort already put into
> the existing elements not to be lost.
Initial reaction is that this would suffer from the same problems as
using a cloud image as the base, but worse. Container images are seen
as disposable, and who knows what measures might have been taken to
reduce their size and disable/remove the init system?
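For reference, the "basic extraction" mentioned above amounts to flattening
an image's root filesystem into a directory for dib to treat as the initial
chroot. A minimal sketch, assuming docker is available; the actual change in
[3] is a dib element and has to deal with ownership, device nodes and the
bootloader, which this ignores:

    # Illustration of the basic extraction step only, not the change in [3]:
    # dump a container image's root filesystem into a directory that could
    # serve as the starting chroot. Running this for real needs root and
    # careful handling of ownership, xattrs and device nodes.
    import subprocess
    import tarfile

    def extract_image_rootfs(image, dest):
        # Create a stopped container so its filesystem can be exported.
        cid = subprocess.run(
            ["docker", "create", image],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        try:
            # Stream the container filesystem as a tar and unpack it.
            proc = subprocess.Popen(["docker", "export", cid],
                                    stdout=subprocess.PIPE)
            with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
                tar.extractall(dest)
            proc.wait()
        finally:
            subprocess.run(["docker", "rm", cid], check=False)

    # extract_image_rootfs("centos:8", "/tmp/centos-chroot")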
>
> If I had to pick, I'd probably say that using the nodepool-builder
> container is the best path. That has the most momentum behind it
> because it's used for the OpenDev image builds. As we work on the
> container-image base elements, this work will be deployed into the
> container (meaning the container is less reliant on the underlying
> version of Debian) and you can switch to them as appropriate.
>
> -i
>
> [1] https://hub.docker.com/r/zuul/nodepool-builder
> [2] https://opendev.org/opendev/system-config/src/branch/master/docker/python-base/Dockerfile#L17
> [3] https://review.opendev.org/#/c/700083/
>
>