[openstack-dev] [requirements] [infra] speeding up gate runs?
Sean Dague
sean at dague.net
Wed Nov 4 20:34:26 UTC 2015
On 11/04/2015 03:27 PM, Clark Boylan wrote:
> On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
>> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
>>> On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
>>>> On 11/04/2015 06:47 AM, Sean Dague wrote:
>>> [...]
>>>>> Is there a nodepool cache strategy where we could pre-build these? A 25%
>>>>> performance win comes out the other side if there is a strategy here.
>>>>
>>>> python wheel repo could help maybe?
>>>
>>> That's along the lines of how I expect we'd need to solve it.
>>> Basically add a new DIB element to openstack-infra/project-config in
>>> nodepool/elements (or extend the cache-devstack element already
>>> there) to figure out which version(s) it needs to prebuild and then
>>> populate a wheelhouse which can be leveraged by the jobs running on
>>> the resulting diskimage. The test scripts in the
>>> openstack/requirements repo may already have much of this logic
>>> implemented for the purpose of testing that we can build sane wheels
>>> of all our requirements.
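
As a very rough sketch of what that element might boil down to
(untested; the requirements checkout path and wheelhouse location below
are just placeholders):

    import subprocess

    # Paths are assumptions -- a real element would get the requirements
    # checkout and the wheelhouse target from its environment.
    REQUIREMENTS = "/opt/git/openstack/requirements/global-requirements.txt"
    WHEELHOUSE = "/opt/wheelhouse"

    # "pip wheel" builds each requirement once and drops the resulting
    # .whl files into WHEELHOUSE; jobs could then install with
    # "pip install --find-links file:///opt/wheelhouse ..." and skip the
    # compile step entirely.
    subprocess.check_call(
        ["pip", "wheel", "-r", REQUIREMENTS, "-w", WHEELHOUSE])
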
>>>
>>> This of course misses situations where the requirements change and
>>> the diskimages haven't been rebuilt yet, as well as jobs testing
>>> proposed changes which explicitly alter these requirements, but it
>>> could be augmented by similar mechanisms in devstack itself to avoid
>>> building them more than once.
>>
>> Ok, so given that pip now automatically builds a local wheel cache
>> when it installs things... is it as simple as
>> https://review.openstack.org/#/c/241692/ ?
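
(For context, the mechanism that change leans on: pip 7 and later keep
the wheels they build under ~/.cache/pip/wheels, so warming that cache
at image build time is, in theory, as small as the sketch below --
illustrative only, not literally what the review does, and the package
list is made up.)

    import subprocess

    # Example heavy C extensions that jobs keep rebuilding; the real
    # list would come from the requirements repo.
    HEAVY_PACKAGES = ["lxml", "numpy", "cryptography"]

    for pkg in HEAVY_PACKAGES:
        # The install itself is throwaway; the side effect we want is
        # the wheel pip leaves behind in ~/.cache/pip/wheels, which
        # later installs on the same node reuse instead of compiling.
        subprocess.check_call(["pip", "install", pkg])
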
> It is not that simple and this change will probably need to be reverted.
> We don't install the build deps for these packages during the dib run.
> We only add them to the appropriate apt/yum caches. This means that the
> image builds will start to fail because lxml won't find libxml2-dev and
> whatever other header packages it needs in order to link against the
> appropriate libs.
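
So any image-time prebuild would at least have to check that the needed
-dev packages are actually installed before trying to compile anything.
A rough sketch of that guard (package names are just examples; the real
list would have to come from wherever devstack already tracks its build
dependencies):

    import subprocess

    # Example package names only.
    BUILD_DEPS = ["libxml2-dev", "libxslt1-dev", "libffi-dev"]

    def installed(pkg):
        # "dpkg -s" exits non-zero when the package is not installed.
        return subprocess.call(
            ["dpkg", "-s", pkg],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL) == 0

    if all(installed(p) for p in BUILD_DEPS):
        subprocess.check_call(
            ["pip", "wheel", "lxml", "-w", "/opt/wheelhouse"])
    else:
        print("build deps not installed, skipping the wheel prebuild")
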
>
> The issue here is that we do our best to force devstack to do the work
> at run time, to make sure that devstack-gate or our images aren't
> masking some bug or becoming a required part of the devstack process.
> This means that
> none of these packages are installed and won't be available to the pip
> install.
This seems like incorrect logic. We should test that devstack can do all
the things on a devstack change, not on every neutron / trove / nova
change. I'm fine with having a slow version of this for devstack testing
that starts from a massively stripped-down state, but for the 99% of
patches that aren't devstack changes, this seems like overkill.
> We have already had to revert a similar change in the past, and at the
> time the basic agreement was that we should go back to building wheel
> package mirrors that jobs could take advantage of. That work floundered
> due to a lack of reviews, but I still think it is the correct way to
> solve this problem. The basic idea is to have periodic jobs build a
> distro/arch/release-specific wheel cache and then rsync it over to all
> of our pypi mirrors for use by the jobs.
>
> Clark
>
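
For concreteness, each periodic builder in that scheme would presumably
do something along these lines (the requirements file name, output path,
and mirror host below are all made up):

    import subprocess

    REQUIREMENTS = "global-requirements.txt"
    DISTRO_ARCH = "ubuntu-trusty-x86_64"  # detected per builder in practice
    OUTDIR = "/opt/wheel-build/%s" % DISTRO_ARCH
    MIRROR = "mirror.example.org:/srv/static/wheels/%s/" % DISTRO_ARCH

    # Build every requirement once for this distro/arch/release.
    subprocess.check_call(
        ["pip", "wheel", "-r", REQUIREMENTS, "-w", OUTDIR])

    # Publish; jobs would then point pip at the mirror with an extra
    # --find-links entry and never compile these packages locally again.
    subprocess.check_call(["rsync", "-a", "--delete", OUTDIR + "/", MIRROR])
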
--
Sean Dague
http://dague.net