[openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

Attila Fazekas afazekas at redhat.com
Wed Sep 20 13:17:28 UTC 2017


On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand <iwienand at redhat.com> wrote:

> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and we can put -dsvm- in the job names to indicate it should run
> on these nodes :)
>
> Older hands than myself will remember even more issues, but the
> "thicker" the base-image has been has traditionally just lead to a lot
> more corners for corner-cases can hide in.  We saw this all the time
> with "snapshot" images where we'd be based on upstream images that
> would change ever so slightly and break things, leading to
> diskimage-builder and the -minimal build approach.
>
> That said, in a zuulv3 world where we are not caching all git and have
> considerably smaller images, a nodepool that has a scheduler that
> accounts for flavor sizes and could conceivably understand similar for
> images, and where we're building with discrete elements that could
> "bolt-on" things like a list-of-packages install sanely to daily
> builds ... it's not impossible to imagine.
>
> -i


The problem is that these package install steps are not really I/O
bottlenecked in most cases; even at regular DSL speeds you can frequently
see the decompress and post-config steps take more time than the download
itself.
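
A quick way to see this on a Debian/Ubuntu node is to split the download
from the unpack/configure phase; a rough sketch (the package name is just
an illustration):

    # fetch only, so the network cost is paid up front
    sudo apt-get install --download-only -y rabbitmq-server
    # now time the install from the local cache: what is left is mostly
    # single-threaded decompression, dpkg unpack and postinst scripts
    time sudo apt-get install -y rabbitmq-server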

The site-local cache/mirror has a visible benefit, but it does not
eliminate the issues.

The main enemy is the single-threaded, CPU-intensive work in most
install/config related scripts; the second most common issue is issuing
high-latency steps serially, which ends up saturating neither the CPU nor
the I/O.
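
For the latter the usual fix is to let the waits overlap instead of paying
for them one after the other; an illustrative sketch (URLS is a
hypothetical array of artifacts to fetch):

    # serial: the total time is the sum of every round trip
    #   for url in "${URLS[@]}"; do curl -sSLO "$url"; done
    # parallel: up to 8 high-latency requests are in flight at once
    printf '%s\n' "${URLS[@]}" | xargs -n1 -P8 curl -sSLO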

Fat images are generally cheaper even if your cloud only has 1Gb Ethernet
for image transfer. You gain more by baking the packages into the image
than the 1GbE link can take away from you, because you also save the time
that would otherwise be lost on CPU-intensive operations and random disk
access.

It is safe to add all distro packages used by devstack to the cloud image.
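
With diskimage-builder that could look roughly like the sketch below; the
package list is derived from devstack's files/debs directory, and the
exact commands are illustrative rather than the real build job:

    # strip comments/blank lines from devstack's Debian package lists and
    # bake the result into a daily image build
    PKGS=$(sed -e 's/#.*//' -e 's/[[:space:]]*$//' -e '/^$/d' \
               devstack/files/debs/* | sort -u | paste -sd, -)
    disk-image-create -o devstack-ubuntu -p "$PKGS" ubuntu-minimal vm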

Historically we had issues with some base-image packages whose mere
presence changed the behavior of some component, for example firewalld
vs. libvirt (likely an already-solved issue); such packages get explicitly
removed by devstack when necessary. Those packages are not requested by
devstack!
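
In devstack terms that looks roughly like this (a sketch using its helper
functions, not the exact code):

    # drop a base-image package whose mere presence changes behavior
    if is_fedora && is_package_installed firewalld; then
        uninstall_package firewalld
    fi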

Fedora/CentOS also has/had issues with pypi packages overlapping distro
packages on the main filesystem (too long a story, pointing fingers...);
generally it is not a good idea to bake pypi packages into an image when
their content might later be overwritten by the distro's package manager.
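
If pypi content really has to be pre-seeded, one way to keep it away from
the rpm/dpkg-owned paths is a separate prefix; a hypothetical example (the
location is made up):

    # keep pip-installed files out of the distro-managed /usr tree
    virtualenv /opt/stack/venv
    /opt/stack/venv/bin/pip install -U pip setuptools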

The distribution package install time delays the gate response: when the
slowest running job is delayed by it, the whole response is delayed.

It is a user-facing latency issue, which should be solved even if the cost
were higher.

Image building was the good old working solution, and unless building the
image becomes super expensive, it is still the best option.

A site-local mirror is also expected to help make the image build step(s)
faster and safer.
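
For example, if I remember correctly the ubuntu/centos dib elements honor
DIB_DISTRIBUTION_MIRROR, so the build can be pointed at the local mirror
(the URL below is made up):

    export DIB_DISTRIBUTION_MIRROR=http://mirror.regionone.example.org/ubuntu
    disk-image-create -o devstack-ubuntu ubuntu-minimal vm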

The other option is ready scripts.
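
That is, paying the package install cost once when the node is launched
instead of inside every job; a minimal hypothetical example (the package
list path is made up):

    #!/bin/bash
    # ready script sketch: pre-install devstack's distro packages at boot
    set -e
    sudo apt-get update
    xargs -a /opt/nodepool/devstack-debs.txt sudo apt-get install -y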

