[tripleo] docker.io rate limiting
Greetings,

Some of you have contacted me about the recent news regarding docker.io's new policy on container pull rate limiting [1]. I wanted to take the opportunity to further socialize our plan, which will completely remove docker.io from our upstream workflows and avoid any rate limiting issues.

We will continue to upload containers to docker.io for some time so that individuals and the community can access the containers. We will also start exploring other registries such as quay and the newly announced GitHub Container Registry. These other public registries will NOT be used in our upstream jobs and will only serve the community's individual contributors.

Our test jobs have been successful, and patches are starting to merge to convert our upstream jobs and remove docker.io from our upstream workflow [2].

Standalone and multinode jobs are working quite well. We are doing some design work around branchful and update/upgrade jobs at this time.

Thanks 0/

[1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ
[2] https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merge...)
Sorry for the stupid question, but is there some parameter for a TripleO deployment so that it does not generate and download images from docker.io each time, since I already have them downloaded and working?

Or, as I understand it, should I be able to create my own snapshot of the images and specify that as a source?
-- Ruslanas Gžibovskis +370 6030 7030
On Wed, Sep 2, 2020 at 8:18 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Sorry for the stupid question, but is there some parameter for a TripleO deployment so that it does not generate and download images from docker.io each time, since I already have them downloaded and working?
Or, as I understand it, should I be able to create my own snapshot of the images and specify that as a source?
Yes, as a user you can download the images, push them into your own local registry, and then specify your custom registry in the containers-prepare-parameter.yaml file.
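To illustrate, a minimal containers-prepare-parameter.yaml pointing at a local mirror could look like the sketch below; the registry address, namespace and tag are placeholders, not values from this thread:

    parameter_defaults:
      ContainerImagePrepare:
        - push_destination: false
          set:
            # Registry that already holds the images (placeholder address)
            namespace: 192.168.24.1:8787/tripleomaster
            name_prefix: centos-binary-
            name_suffix: ''
            tag: current-tripleo

The file is then passed to the deployment like any other environment file (for example with -e on the deploy command).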
On 9/2/20 9:33 PM, Wesley Hayutin wrote:
On Wed, Sep 2, 2020 at 8:18 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Sorry for the stupid question, but is there some parameter for a TripleO deployment so that it does not generate and download images from docker.io each time, since I already have them downloaded and working?
Or, as I understand it, should I be able to create my own snapshot of the images and specify that as a source?
Yes, as a user you can download the images, push them into your own local registry, and then specify your custom registry in the containers-prepare-parameter.yaml file.
That's basically what I'm doing at home, in order to avoid the network overhead when deploying N times.

Now, there's a new thing with GitHub that could also be leveraged at some point: https://github.blog/2020-09-01-introducing-github-container-registry/

Though the solution proposed by Wes and his team will be more efficient imho; a fresh build of containers within CI makes perfect sense. And using TCIB [1] for that task also provides a new layer of CI for this central tool, which is just about perfect!

Cheers,

C.

[1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployme...
-- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/
I am a complete noob with containers, and especially with images. Is there a small howto for getting all the OSP-related images? Uploading to local storage is podman pull docker.io/tripleou/*:current-tripleo, but how do I get the full list? Because when I specify the undercloud itself :) it does not have ceilometer-compute; but I have ceilometer disabled, so I believe that is why it did not download that image. In general, as I understood it, it checks all of them and then selects what it needs?
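As an illustration of that selection (a sketch, not taken from the thread itself): the image list is computed from the services enabled in the deployment, and each ContainerImagePrepare entry can filter it further with includes/excludes, for example:

    parameter_defaults:
      ContainerImagePrepare:
        - push_destination: true
          # includes/excludes are optional regex filters applied to the
          # computed image list; with ceilometer disabled, its images are
          # not part of that list in the first place.
          excludes:
            - ceilometer
          set:
            namespace: docker.io/tripleomaster
            name_prefix: centos-binary-
            tag: current-tripleo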
-- Ruslanas Gžibovskis +370 6030 7030
Hey Wes,

Stupid question: what about the molecule tests? Since they are running within containers (centos-8, centos-7, maybe/probably ubi8 soon), we might hit some limitations there... unless we're NOT using docker.io already?

Cheers,

C.
-- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/
On Thu, Sep 3, 2020 at 3:48 AM Cédric Jeanneret <cjeanner@redhat.com> wrote:
Hey Wes,
stupid question: what about the molecule tests? Since they are running within containers (centos-8, centos-7, maybe/probably ubi8 soon), we might hit some limitations there.... Unless we're NOT using docker.io already?
Cheers,
OK, so, easy answer:
1. We're still going to push to docker.io.
2. Any content CI uses from docker.io will be mirrored in quay and the RDO registry, including base images.

So I would switch the molecule / tox config to use quay as soon as we have images there. I'm searching around for that code in tripleo-ansible and validations and it's not where I thought it was. Do you have pointers to where docker.io is configured?

Thanks
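For illustration only, since the exact location of that configuration is still being tracked down in the thread: a molecule.yml platforms entry for the docker/podman driver names the base image directly, so the switch would look something like this, with the quay namespace below being a placeholder:

    platforms:
      - name: centos8
        # Placeholder; the real mirror namespace on quay was not decided yet
        image: quay.io/<namespace>/centos:8
        pre_build_image: true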
C.
-- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/
On 9/2/20 1:54 PM, Wesley Hayutin wrote:
Greetings,
Some of you have contacted me regarding the recent news regarding docker.io's new policy with regards to container pull rate limiting [1]. I wanted to take the opportunity to further socialize our plan that will completely remove docker.io from our upstream workflows and avoid any rate limiting issues.
thanks; I guess this will be a problem for the ceph containers as well
We will continue to upload containers to docker.io for some time so that individuals and the community can access the containers. We will also start exploring other registries like quay and the newly announced GitHub container registry. These other public registries will NOT be used in our upstream jobs and will only serve the community's individual contributors.
I don't think ceph found alternatives yet, but Guillaume or Dimitri might know more about it.

--
Giulio Fidente
GPG KEY: 08D733BA
On Fri, Sep 4, 2020 at 7:23 AM Giulio Fidente <gfidente@redhat.com> wrote:
On 9/2/20 1:54 PM, Wesley Hayutin wrote:
Greetings,
Some of you have contacted me regarding the recent news regarding docker.io's new policy with regards to container pull rate limiting [1]. I wanted to take the opportunity to further socialize our plan that will completely remove docker.io from our upstream workflows and avoid any rate limiting issues.
thanks; I guess this will be a problem for the ceph containers as well
We will continue to upload containers to docker.io for some time so that individuals and the community can access the containers. We will also start exploring other registries like quay and the newly announced GitHub container registry. These other public registries will NOT be used in our upstream jobs and will only serve the community's individual contributors.
I don't think ceph found alternatives yet, but Guillaume or Dimitri might know more about it.
Talk to Fulton. I think we'll have Ceph covered from a TripleO perspective; not sure about anything else.
Giulio Fidente GPG KEY: 08D733BA
On Fri, Sep 4, 2020 at 12:13 PM Wesley Hayutin <whayutin@redhat.com> wrote:
On Fri, Sep 4, 2020 at 7:23 AM Giulio Fidente <gfidente@redhat.com> wrote:
On 9/2/20 1:54 PM, Wesley Hayutin wrote:
Greetings,
Some of you have contacted me regarding the recent news regarding docker.io's new policy with regards to container pull rate limiting [1]. I wanted to take the opportunity to further socialize our plan that will completely remove docker.io from our upstream workflows and avoid any rate limiting issues.
thanks; I guess this will be a problem for the ceph containers as well
We will continue to upload containers to docker.io for some time so that individuals and the community can access the containers. We will also start exploring other registries like quay and the newly announced GitHub container registry. These other public registries will NOT be used in our upstream jobs and will only serve the community's individual contributors.
I don't think ceph found alternatives yet, but Guillaume or Dimitri might know more about it --
talk to Fulton.. I think we'll have ceph covered from a tripleo perspective. Not sure about anything else.
Yes, thank you Wes for your help on the plan to cover the TripleO CI perspective. A thread similar to this one has been posted on ceph-dev [1]; the outcome so far is that some Ceph projects are using quay.ceph.com to store temporary CI images to deal with the docker.io rate limits.

As per an IRC conversation I had with Dimitri, ceph-ansible is not using quay.ceph.com but has made some changes to deal with the current rate limits [2]. I expect they'll need to make further changes for November, but my understanding is that they're still looking to push the authoritative copy of the Ceph container image [3] we use to docker.io.

On the TripleO side we change that image rarely, so provided it can be cached for CI jobs we should be safe. When we do change the image to a newer version, we use a DNM patch [4] to pull it directly from docker.io. We could continue to do this, as only that patch would be vulnerable to the rate limit. If we then see, by way of the CI on the DNM patch, that the new image is good, we can pursue getting it cached as the new image for TripleO CI Ceph jobs. One thing that's not clear to me is the mechanism to do this.

John

[1] https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/BYZOGN3Y3CJLY35QLDL...
[2] https://github.com/ceph/ceph-container/blob/master/tests/tox.sh#L86-L110
[3] https://hub.docker.com/r/ceph/daemon
[4] https://review.opendev.org/#/c/690036/
Giulio Fidente GPG KEY: 08D733BA
Hi,

We're currently in the process of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io, consumed by the ceph-ansible CI [2]. Official ceph images will still be updated on docker.io.

Note that from a ceph-ansible point of view, switching to the quay.ceph.io registry isn't enough to get rid of the docker.io registry when deploying with the Ceph dashboard enabled. The whole monitoring stack (alertmanager, prometheus, grafana and node-exporter) coming with the Ceph dashboard still uses docker.io by default [3][4][5][6].

As an alternative, you can use the official quay registry (quay.io) for the alertmanager, prometheus and node-exporter images [7] from the prometheus namespace, like we're doing in [2]. Only the grafana container image will still be pulled from docker.io.

Regards,

Dimitri

[1] https://quay.ceph.io/repository/ceph-ci/daemon?tab=tags
[2] https://github.com/ceph/ceph-ansible/pull/5726
[3] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/default...
[4] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/default...
[5] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/default...
[6] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/default...
[7] https://quay.io/organization/prometheus
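A sketch of the overrides described above, assuming the ceph-defaults variable names and a group_vars placement; the tags are placeholders, not values from this thread:

    # e.g. in ceph-ansible group_vars/all.yml (placement is an assumption)
    alertmanager_container_image: quay.io/prometheus/alertmanager:<tag>
    prometheus_container_image: quay.io/prometheus/prometheus:<tag>
    node_exporter_container_image: quay.io/prometheus/node-exporter:<tag>
    # grafana has no image under the prometheus namespace on quay.io,
    # so it still comes from docker.io (or a mirror of it)
    grafana_container_image: docker.io/grafana/grafana:<tag>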
Hello,

On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau <dsavinea@redhat.com> wrote:
Hi,
We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2].
Official ceph images will still be updated on docker.io.
Note that from a ceph-ansible point of view, switching to the quay.ceph.io registry isn't enough to get rid of the docker.io registry when deploying with the Ceph dashboard enabled. The whole monitoring stack (alertmanager, prometheus, grafana and node-exporter) coming with the Ceph dashboard is still using docker.io by default [3][4][5][6].
As an alternative, you can use the official quay registry (quay.io) for the alertmanager, prometheus and node-exporter images [7] from the prometheus namespace, like we're doing in [2].
Only the grafana container image will still be pulled from docker.io.
The app-sre team mirrors the grafana image from docker.io on quay (https://quay.io/repository/app-sre/grafana?tab=tags); could we reuse that one in CI?

I have proposed a patch on tripleo-common to switch to quay.io: https://review.opendev.org/#/c/750119/

Thanks,

Chandan Kumar
Hello Dimitri,

On Mon, Sep 7, 2020 at 9:01 AM Chandan kumar <chkumar246@gmail.com> wrote:
Hello,
On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau <dsavinea@redhat.com> wrote:
Hi,
We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2].
On the TripleO side, daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 is used, but this image is not available in the quay.ceph.io registry <https://quay.ceph.io/repository/ceph-ci/daemon?tab=tags>; v4.0.13-stable-4.0-nautilus-centos-7-x86_64 is available there. Can we get daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 into the quay.ceph.io registry, or should we switch to the v4.0.13-stable-4.0-nautilus-centos-7-x86_64 tag?
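Whichever tag is chosen, a sketch of pointing TripleO at the quay.ceph.io copy could use the ceph_* keys of ContainerImagePrepare; the values below simply mirror the registry and tag mentioned in this thread:

    parameter_defaults:
      ContainerImagePrepare:
        - set:
            ceph_namespace: quay.ceph.io/ceph-ci
            ceph_image: daemon
            ceph_tag: v4.0.13-stable-4.0-nautilus-centos-7-x86_64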
Thanks, Chandan Kumar
On 9/8/20 9:30 AM, Chandan kumar wrote:
Hello Dimitri,
On Mon, Sep 7, 2020 at 9:01 AM Chandan kumar <chkumar246@gmail.com> wrote:
Hello,
On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau <dsavinea@redhat.com> wrote:
Hi,
We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2].
On the TripleO side, daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 is used, but this image is not available in the quay.ceph.io registry <https://quay.ceph.io/repository/ceph-ci/daemon?tab=tags>; v4.0.13-stable-4.0-nautilus-centos-7-x86_64 is available there. Can we get daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 into the quay.ceph.io registry, or should we switch to the v4.0.13-stable-4.0-nautilus-centos-7-x86_64 tag?

We can switch to the newer image version, but what will control, going forward, which image is copied from docker.io to quay.io?
This is the same question I had in https://review.opendev.org/#/c/750119/4; I guess we can continue the conversation there.

--
Giulio Fidente
GPG KEY: 08D733BA
Participants (7):
- Chandan Kumar
- Cédric Jeanneret
- Dimitri Savineau
- Giulio Fidente
- John Fulton
- Ruslanas Gžibovskis
- Wesley Hayutin