[openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
Bogdan Dobrelya
bdobreli at redhat.com
Tue May 15 15:31:07 UTC 2018
On 5/15/18 4:30 PM, James E. Blair wrote:
> Bogdan Dobrelya <bdobreli at redhat.com> writes:
>
>> Added a few more patches [0], [1] by the discussion results. PTAL folks.
>> As for the remaining patches in the topic, I'd propose to give them a
>> try and revert them if they prove to do more harm than good.
>> Thank you for feedback!
>>
>> The next step could be reusing artifacts, like DLRN repos and
>> containers built for patches and hosted undercloud, in the subsequent
>> pipelined jobs. But I'm not sure how to even approach that.
>>
>> [0] https://review.openstack.org/#/c/568536/
>> [1] https://review.openstack.org/#/c/568543/
>
> In order to use an artifact in a dependent job, you need to store it
> somewhere and retrieve it.
>
> In the parent job, I'd recommend storing the artifact on the log server
> (in an "artifacts/" directory) next to the job's logs. The log server
> is essentially a time-limited artifact repository keyed on the zuul
> build UUID.
>
> Pass the URL to the child job using the zuul_return Ansible module.
>
> Have the child job fetch it from the log server using the URL it gets.
>
> However, don't do that if the artifacts are very large -- more than a
> few MB -- we'll end up running out of space quickly.
>
> In that case, please volunteer some time to help the infra team set up a
> swift container to store these artifacts. We don't need to *run*
> swift -- we have clouds with swift already. We just need some help
> setting up accounts, secrets, and Ansible roles to use it from Zuul.
Thank you, that's a good proposal! So once we have that upstream infra
swift setup done for tripleo, the first step in the job dependency graph
may use quickstart to do something like:
* check out the change under test and its Depends-On changes,
* build repos and all tripleo docker images from these repos,
* upload into a swift container, with an automatic expiration set, the
de-duplicated and compressed tarball created with something like:
# docker save $(docker images -q) | gzip -1 > all.tar.gz
(I expect it will be something like a 2G file)
* something similar for the DLRN repos, probably; I'm not an expert on this part.
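The build-and-upload step above might look roughly like this; a sketch only, where the container name and the 7-day expiration are assumptions, not anything the tripleo jobs define today:

```shell
# Save all locally built images into one tarball; docker save already
# de-duplicates layers shared between images, then gzip -1 compresses
# it quickly.
docker save $(docker images -q) | gzip -1 > all.tar.gz

# Upload into a swift container with automatic expiration, using the
# standard X-Delete-After object header (seconds; 604800 = 7 days).
swift upload tripleo-ci-artifacts all.tar.gz --header "X-Delete-After: 604800"
```

The expiration header means nobody has to garbage-collect stale artifacts; swift deletes the object itself once the timer runs out.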
Those stored artifacts would then be picked up by the next step in the
graph, which deploys the undercloud and overcloud in a single step, like:
* fetch the swift containers with repos and container images
* docker load -i all.tar.gz
* populate images into a local registry, as usual
* something similar for the repos, including an offline yum update (we
already have a compressed repo, right? profit!)
* deploy UC
* deploy OC, if a job wants it
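The fetch-and-populate side of those steps could be sketched as follows; the container name matches the hypothetical one from the upload step, and the registry address is just the typical undercloud default, both assumptions:

```shell
# Fetch the tarball uploaded by the build step
swift download tripleo-ci-artifacts all.tar.gz

# docker load decompresses gzipped save-archives transparently
docker load -i all.tar.gz

# Re-tag and push every loaded image into the local registry, as the
# usual image prep workflow would
for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
    docker tag "$img" "192.168.24.1:8787/$img"
    docker push "192.168.24.1:8787/$img"
done
```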
And if the OC deployment is split into a separate step, we would not
need local registries: just a 'docker load -i all.tar.gz' issued on the
overcloud nodes should replace the image prep workflows and registries,
AFAICT. I'm not sure about the repos for that case.
I wish to assist with the upstream infra swift setup for tripleo and
with that plan; I just need a blessing and more hands from the tripleo
CI squad ;)
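For the smaller-artifact case Jim describes above (parent stores the artifact next to its logs, child fetches it by URL), the shell side of the handoff might look like this; the variable names are hypothetical, and the URL itself travels between the jobs via the zuul_return Ansible module:

```shell
# Parent job: place the artifact into an "artifacts/" directory in the
# job's log tree, so it gets published to the log server with the logs.
mkdir -p "$LOG_PATH/artifacts"
cp repos.tar.gz "$LOG_PATH/artifacts/"

# Child job: fetch it back from the log server, using the URL that the
# parent job handed over through zuul_return.
curl -fSL "$ARTIFACT_URL/artifacts/repos.tar.gz" -o repos.tar.gz
```

As Jim notes, this is only reasonable for artifacts up to a few MB; anything like a 2 GB container tarball belongs in swift.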
>
> -Jim
>
--
Best regards,
Bogdan Dobrelya,
Irc #bogdando