[tripleo] Container image tooling roadmap
Hello Stackers,

As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain [0]. The idea is to build smaller, more easily maintained images to simplify the lives of TripleO consumers. Since TripleO's move to containers, the project has been leveraging Kolla to provide Dockerfiles, and while this has worked, TripleO has created considerable tooling to bend Kolla images to its needs. Sadly, this has resulted in an image size explosion and the proliferation of difficult-to-maintain tools, with low bus factors, which are essential to the success of the project. To address the risk centered around the TripleO containerization story, we've drafted a spec [0], which we believe outlines a more sustainable future. In this specification, we've designed a new, much more straightforward approach to building container images for the TripleO project. The "simple container generation" specification does not intend to produce a general-purpose tool used to create images for the greater OpenStack community; both Loci and Kolla do that already. This effort is to build containers only for TripleO, using distro-provided repositories and distro-maintained tooling. By focusing only on what we need, we're able to remove all general-purpose assumptions and create a vertically integrated stack resulting in a much smaller surface area.

To highlight how all this works, we've put together several POC changes:

* Role and playbook to implement the Containerfile specification [1].
* Tripleoclient review to interface with the new role and playbook [2].
* Directory structure for the variable file layout [3].
* To see how this works using the POC code, building images we've tested in real deployments, please watch the ASCII-cast [4].
* Example configuration files are here [5][6][7].

A few examples of size comparisons between our proposed tooling and the current Kolla-based images [8]:

- base:
  + Kolla: 588 MB
  - new: 211 MB # based on ubi8, smaller than centos8
- nova-base:
  + Kolla: 1.09 GB
  - new: 720 MB
- nova-libvirt:
  + Kolla: 2.14 GB
  - new: 1.9 GB
- keystone:
  + Kolla: 973 MB
  - new: 532 MB
- memcached:
  + Kolla: 633 MB
  - new: 379 MB

While the links shown are many, the actual volume of the proposed change is small, although the impact is massive:

* With more straightforward-to-understand tools, we'll be able to get broader participation from the TripleO community to maintain our containerization efforts and extend our service capabilities.
* With smaller images, the TripleO community will serve OpenStack deployers and operators better: less bandwidth consumption and faster install times.

We're looking to chart a more reliable path forward and to make the TripleO user experience a great one. While the POC changes are feature-complete and functional, more work is needed to create the required variable files; however, should the proposed specification be ratified, we expect to make quick work of what's left. As such, if you would like to be involved or have any feedback on anything presented here, please don't hesitate to reach out.

We aim to provide regular updates regarding our progress on the "Simple container generation" initiative, so stay tuned.
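For readers who want a feel for the shape of the proposal without opening the reviews: the core idea, a small per-image variable set rendered into a plain Containerfile, can be sketched roughly as below. The variable names, keys, and packages are hypothetical illustrations only, not the format defined in the spec [0] or the example files [5][6][7].

    # Rough illustration of the "simple container generation" idea: a small
    # per-image variable set rendered into a plain Containerfile. All names,
    # keys, and packages here are hypothetical, not the spec's actual format.
    IMAGE_VARIABLES = {
        "keystone": {
            "parent": "base",  # local image hierarchy, e.g. base -> keystone
            "packages": ["openstack-keystone", "httpd", "python3-mod_wsgi"],
            "expose": ["5000"],
        },
    }

    CONTAINERFILE = """\
    FROM {registry}/{parent}:{tag}
    RUN dnf install -y {packages} && dnf clean all && rm -rf /var/cache/dnf
    EXPOSE {expose}
    """

    def render_containerfile(name, registry="localhost", tag="latest"):
        """Return Containerfile text for one image from its variable set."""
        opts = IMAGE_VARIABLES[name]
        return CONTAINERFILE.format(
            registry=registry,
            tag=tag,
            parent=opts["parent"],
            packages=" ".join(opts["packages"]),
            expose=" ".join(opts["expose"]),
        )

    print(render_containerfile("keystone"))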
Thanks,

Kevin and Emilien

[0] https://review.opendev.org/#/c/723665/
[1] https://review.opendev.org/#/c/722557/
[2] https://review.opendev.org/#/c/724147/
[3] https://review.opendev.org/#/c/722486/
[4] https://files.macchi.pro:8443/demo-container-images/
[5] http://paste.openstack.org/show/792995/
[6] http://paste.openstack.org/show/792994/
[7] http://paste.openstack.org/show/792993/
[8] https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcd...

Kevin Carter
IRC: kecarter
On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote:
A few examples of size comparisons between our proposed tooling versus current Kolla based images [8]:
- base:
  + Kolla: 588 MB
  - new: 211 MB # based on ubi8, smaller than centos8

Kolla could also use the ubi8 image as a base: you could select centos as the base image type and then pass the URL to the ubi image, and that should work.

- nova-base:
  + Kolla: 1.09 GB
  - new: 720 MB

Unless you are using layers for the new image, keep in mind you have to subtract the size of the base image from the nova-base image to calculate how big it actually is, so it's actually only using about 500 MB. If you are using layers for the ubi nova-base, then these images are actually the same size; the difference is coming entirely from the reduction in the base image.

- nova-libvirt:
  + Kolla: 2.14 GB
  - new: 1.9 GB

Again, here you have to do the same arithmetic, so 2.14 - 1.09: this image is adding 1.05 GB of layers in the Kolla case, and the ubi version is adding 1.2 GB, so the ubi image is actually using more space, assuming it is using layers. If it's not, then it's 1.05 GB vs 1.9 GB and the Kolla image still comes out better by an even larger margin.

- keystone:
  + Kolla: 973 MB
  - new: 532 MB

Again, here this is all in the delta of the base image.

- memcached:
  + Kolla: 633 MB
  - new: 379 MB

As is this.

So overall I think the ubi-based images are using more or the same space as the Kolla ones; they just have a smaller base image. So rather than doing this, I think it makes more sense to just use the ubi image with the Kolla build system, unless you expect to be able to significantly reduce the size of the images more. Based on size alone I don't see any real benefit so far.
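A rough sketch of that arithmetic, using the figures quoted above. The parent/child layering assumed below (nova-libvirt on nova-base, everything else directly on base) is a guess; the actual layer graph isn't shown in this thread.

    # Sizes in MB, copied from the comparison in the original mail.
    kolla = {"base": 588, "nova-base": 1090, "nova-libvirt": 2140,
             "keystone": 973, "memcached": 633}
    new = {"base": 211, "nova-base": 720, "nova-libvirt": 1900,
           "keystone": 532, "memcached": 379}

    # Assumed parent of each image; treat this as an approximation.
    parents = {"nova-base": "base", "nova-libvirt": "nova-base",
               "keystone": "base", "memcached": "base"}

    for image, parent in parents.items():
        print(f"{image}: kolla adds ~{kolla[image] - kolla[parent]} MB, "
              f"new adds ~{new[image] - new[parent]} MB on top of {parent}")
    # e.g. nova-libvirt: kolla adds ~1050 MB, new adds ~1180 MB, i.e. much of
    # the headline saving comes from the smaller base image.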
While the links shown are many, the actual volume of the proposed change is small, although the impact is massive:
* With more straightforward to understand tools, we'll be able to get broader participation from the TripleO community to maintain our containerization efforts and extend our service capabilities.

Given that customers have likely built up their own template override files, unless you also have a way to support that, this is a breaking change to their workflow.

* With smaller images, the TripleO community will serve OpenStack deployers and operators better; less bandwidth consumption and faster install times.

Again, when you take into account that Kolla uses layers, the images don't appear to be smaller, and the nova-libvirt image is actually bigger. Kolla gets large space savings by using the copy-on-write layers that make up docker images to share common code, so you can't just look at the size of individual images; you have to look at the size of the total set and compare those.
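A small, purely hypothetical sketch of that effect; the layer names and sizes below are made up, not measurements of any real image set.

    # Why summing per-image sizes overstates what is actually stored when
    # images share copy-on-write layers.
    images = {
        "keystone":     ["base", "openstack-common", "keystone-bits"],
        "nova-api":     ["base", "openstack-common", "nova-common", "nova-api-bits"],
        "nova-compute": ["base", "openstack-common", "nova-common", "nova-compute-bits"],
    }
    layer_mb = {"base": 588, "openstack-common": 400, "nova-common": 300,
                "keystone-bits": 120, "nova-api-bits": 80, "nova-compute-bits": 900}

    per_image = {name: sum(layer_mb[l] for l in layers) for name, layers in images.items()}
    unique = {l for layers in images.values() for l in layers}

    print(per_image)  # roughly what an image list reports per image
    print("sum of image sizes:", sum(per_image.values()), "MB")
    print("unique layers actually stored:", sum(layer_mb[l] for l in unique), "MB")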
We're looking to chart a more reliable path forward, and making the TripleO user experience a great one. While the POC changes are feature-complete and functional, more work is needed to create the required variable files; however, should the proposed specification be ratified, we expect to make quick work of what's left. As such, if you would like to be involved or have any feedback on anything presented here, please don't hesitate to reach out.
We aim to provide regular updates regarding our progress on the "Simple container generation" initiative, so stay tuned.
Honestly, I'm not sure this is a benefit or something that should be done, but it seems like ye have already decided on a path. Note that the Kolla images can also be made smaller than they currently are by using multi-stage builds to remove any deps installed for build requirements, or by reducing the optional dependencies. Many of the deps currently installed are there to support the different vendor backends. If we improved the bindep files in each project to group those dependencies by the optional backend, Kolla could easily be updated to use bindep for package installation, and the images would get even smaller.
On Fri, May 1, 2020 at 4:12 PM Sean Mooney <smooney@redhat.com> wrote:
A few examples of size comparisons between our proposed tooling versus current Kolla based images [8]:
- base:
  + Kolla: 588 MB
  - new: 211 MB # based on ubi8, smaller than centos8

Kolla could also use the ubi8 image as a base: you could select centos as the base image type and then pass the URL to the ubi image, and that should work.
ubi8 is smaller, but it doesn't account for it all. We're likely not importing some other deps that make it into the normal base that perhaps we don't need. I'd still like to see rpm diffs to better understand if this savings is real.
The issues that we (TripleO) have primarily run into are the expectations around versions (rabbitmq being the latest) and being able to deploy via source. Honestly, if TripleO was going to support installations running from source or on alternative distros (neither of which we currently plan to do), it would likely make sense to continue down the Kolla path. However, we already end up doing so many overrides to the standard Kolla container configurations [0] that I'm not sure it really makes sense to continue. Additionally, since we no longer support building via docker, we're basically using Kolla as a glorified template engine to give us Dockerfiles.

The proposal is to stop using Kolla as a glorified templating engine and actually just manage one that fits our needs. We're using a very specific path through the Kolla code, and I'm uncertain if it's beneficial for either of us anymore. This also frees up kolla + kolla-ansible to improve their integration and likely be able to make some of the tougher choices that they've brought up in the other email about the future of Kolla.

Personally, I see this move as freeing up Kolla to be able to innovate and TripleO being able to simplify. As one of the few people who know how the container sausage is made in the TripleO world, I think it's likely for the best.

[0] https://opendev.org/openstack/tripleo-common/src/branch/master/container-ima...
Kevin Carter
IRC: kecarter

On Fri, May 1, 2020 at 5:57 PM Alex Schultz <aschultz@redhat.com> wrote:
ubi8 is smaller, but it doesn't account for it all. We're likely not importing some other deps that make it into the normal base that perhaps we don't need. I'd still like to see rpm diffs to better understand if this savings is real.
+1 I think it would be a good exercise to produce an RPM diff for a set of images. Maybe just the ones we've already ported?
Sean,

The size isn't the primary concern here, just a "small" bonus eventually. Kolla doesn't support ubi8; yes, we could have done that, but again it is going to be a bunch of work (like it was for RHEL8/CentOS8) that I'm not sure is worth it if the only consumers are TripleO folks at this point. What Alex described is our major motivation to go down that path.

To be fully transparent with people outside of Red Hat: there are currently 3 extra layers on top of vanilla Kolla images so we can use them downstream. I'm part of the folks who actually maintain them upstream and downstream, and I'm tired of solving the problem multiple times at different levels. Our proposal is going to make it extremely simple: one YAML file per image, with no downstream version of it. No extra overrides; no complications. One file, consumed upstream and downstream, everywhere.

As for customers/partners/third parties: it'll be very easy to create their own images. The new interface is basically the Dockerfile one, and we'll make sure this is well documented with proper examples (e.g. neutron drivers, etc).

We have a strong desire to collaborate with the community; for example, there is potential work to do on container image gating with Zuul and such. However, based on our usage of Kolla, I believe that it is time to go.
-- Emilien Macchi
On Fri, May 1, 2020 at 5:06 PM Sean Mooney <smooney@redhat.com> wrote:
You are correct that the size of each application is smaller due to the layers at play, this holds true for both Kolla and the new image build process we're using for comparison. The figures here are the total size as reported by something like a `(podman || docker) image list`. While the benefits of a COW'ing file system are not represented in these figures, rest assured we've made good use of layering techniques to ensure optimal image sizes.
So far we've not encountered a single image using Kolla that is smaller, and we've tested both CentOS8 and UBI8 as a starting point. In all cases we've been able to produce smaller containers, to the tune of hundreds of megabytes saved per application, without feature gaps (as it pertains to TripleO); granted, the testing has not been exhaustive. The noted UBI8 starting point was chosen because it produced the greatest savings.
On 02.05.2020 at 01:18, Kevin Carter wrote:
So far we've not encountered a single image using Kolla that is smaller, and we've tested both CentOS8 and UBI8 as a starting point. In all cases we've been able to produce smaller containers, to the tune of hundreds of megabytes saved per application, without feature gaps (as it pertains to TripleO); granted, the testing has not been exhaustive. The noted UBI8 starting point was chosen because it produced the greatest savings.
Can you share what was tested? I wonder how many of those can be applied to Kolla to get some savings here and there.

registry.access.redhat.com/ubi8/ubi   latest   8e0b0194b7e1   9 days ago     204MB
centos                                8        470671670cac   3 months ago   237MB

A 33 MB difference in the base image does not look like much.
On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote:
As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain [...]
Is there an opportunity to collaborate around the proposed plan for publishing basic docker-image-based packages for OpenStack services?

https://review.opendev.org/720107

Obviously you're aiming at solving this for a comprehensive deployment rather than at a packaging level, just wondering if there's a way to avoid having an explosion of different images for the same services if they could ultimately use the same building blocks. (A cynical part of me worries that distro "party lines" will divide folks on what the source of underlying files going into container images should be, but I'm sure our community is better than that, after all we're all in this together.)

Either way, if they can both make use of the same speculative container building workflow pioneered in Zuul/OpenDev, that seems like a huge win (and I gather the Kolla "krew" are considering redoing their CI jobs along those same lines as well).
--
Jeremy Stanley
On Sat, 2020-05-02 at 13:37 +0000, Jeremy Stanley wrote:
On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote:
As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain
[...]
Is there an opportunity to collaborate around the proposed plan for publishing basic docker-image-based packages for OpenStack services?
Assuming that goes ahead, then I guess that would be one path forward to avoid yet another set of OpenStack images, as I share your concern; although I don't think the technical case has been made that OOO should replace the Kolla images with a new set, or that the proposed cross-project goal should be accepted.
Obviously you're aiming at solving this for a comprehensive deployment rather than at a packaging level, just wondering if there's a way to avoid having an explosion of different images for the same services if they could ultimately use the same building blocks. (A cynical part of me worries that distro "party lines" will divide folks on what the source of underlying files going into container images should be, but I'm sure our community is better than that, after all we're all in this together.)
Either way, if they can both make use of the same speculative container building workflow pioneered in Zuul/OpenDev, that seems like a huge win (and I gather the Kolla "krew" are considering redoing their CI jobs along those same lines as well).

By the way, what do you define as speculative image building? If it's just building container images from git repos prepared by Zuul, then Kolla source images could trivially support that. Kolla supports building images from git repos, so you can just override the source location for each image to point to the Zuul-cloned git repos. I have not really followed why https://review.opendev.org/720107 is somehow unique in being able to support that. Obviously, without a way to rebuild distro packages we can't build Kolla binary images speculatively, but Kolla source images, which just pip install the different services into a virtualenv, could totally support Depends-On and the other features we get for free in the devstack jobs by virtue of using the source repos prepared by Zuul.
On 2020-05-03 02:31:41 +0100 (+0100), Sean Mooney wrote: [...]
I have not really followed why https://review.opendev.org/720107 is somehow unique in being able to support that? Obviously, without a way to rebuild distro packages we can't build Kolla binary images speculatively, but Kolla source images, which just pip install the different services into a virtualenv, could totally support Depends-On and the other features we get for free in the devstack jobs by virtue of using the source repos prepared by Zuul. [...]
"Build speculatively" in this case meaning the jobs are designed to be able to use depends-on to test with layers built from changes in other projects which also haven't merged yet, incorporating changes to images which are queued ahead of them in the gate pipeline, and so on. That is, jobs can make use of OpenDev's buildset and intermediate registry proxies to pull images from other jobs in the same buildset or in (implicit or explicit) dependencies rather than only using images built in the job itself or temporarily uploading them somewhere public like tarballs.o.o, dockerhub, et cetera. https://docs.opendev.org/opendev/base-jobs/latest/docker-image.html -- Jeremy Stanley
On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote:
As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain [...]
Is there an opportunity to collaborate around the proposed plan for publishing basic docker-image-based packages for OpenStack services?
https://review.opendev.org/720107
Obviously you're aiming at solving this for a comprehensive deployment rather than at a packaging level, just wondering if there's a way to avoid having an explosion of different images for the same services if they could ultimately use the same building blocks. (A cynical part of me worries that distro "party lines" will divide folks on what the source of underlying files going into container images should be, but I'm sure our community is better than that, after all we're all in this together.)
I think this assumes we want an all-in-one system to provide containers. And we don't. That, I think, is the missing piece that folks don't understand about containers and what we actually need. I believe the issue is that the overall process to go from zero to an application in the container is something like the following:

1) input image (centos/ubi8/ubuntu/clear/whatever)
2) packaging method for the application (source/rpm/dpkg/magic)
3) dependencies provided, depending on items #1 & #2 (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
4) layer dependency declaration (base -> nova-base -> nova-api, nova-compute, etc)
5) how configurations are provided to the application (at run time or at build)
6) how the application is invoked when the container is ultimately launched (via docker/podman/k8s/etc)
7) container build method (docker/buildah/other)

The answer to each one of these depends on the expectations of the user or application consuming these containers. Additionally, this has to be declared for each dependent application as well (rabbitmq/mariadb/etc). Kolla has provided this, at a complexity cost, because it needs to support any number of combinations of each of these.

Today TripleO doesn't use the build method provided by Kolla anymore because we no longer support docker. This means we only use Kolla to generate Dockerfiles as inputs to other processes. It should be noted that we also only want Dockerfiles for the downstream, because they get rebuilt with yet another, different process. So for us, we don't want the container; we want a method for generating the contents of the container. IMHO containers are just glorified packaging (yet again, and one that lacks ways of expressing dependencies, which is really not beneficial for OpenStack).

I do not believe you can or should try to unify the entire container declaration and building into a single application. You could rally around a few different sets of tooling that could provide you the pieces for consumption, e.g. a container file templating engine, a building engine, and a way of expressing/consuming configuration+execution information.

I applaud the desire to try and unify all the things, but as we've seen time and time again when it comes to deployment, configuration, and use cases, trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases (look at TripleO, for crying out loud). I believe it's time to stop trying to solve all the things with a giant hammer and work on a bunch of smaller nails, and let folks construct their own hammer.

Thanks,
-Alex
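To give the decision points enumerated above a concrete shape, here is a rough sketch of how they might be captured as plain data for a templating or build tool to consume. The field names and values are made up for illustration and are not TripleO's, Kolla's, or anyone else's actual schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ImageSpec:
        """Illustrative only: one way to capture the per-image choices above."""
        base_image: str                      # 1) input image
        packaging: str                       # 2) source / rpm / dpkg / ...
        dependency_source: str               # 3) venv / RDO / ubuntu-cloud / custom
        parent: Optional[str] = None         # 4) layer dependency (base -> nova-base -> ...)
        config_at: str = "runtime"           # 5) configuration at build time or at run time
        entrypoint: List[str] = field(default_factory=list)  # 6) how it is invoked
        build_tool: str = "buildah"          # 7) docker / buildah / other

    nova_api = ImageSpec(
        base_image="ubi8",
        packaging="rpm",
        dependency_source="RDO",
        parent="nova-base",
        entrypoint=["/usr/bin/nova-api"],
    )
    print(nova_api)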
Interesting discussion going on in this thread!
From a mostly operator viewpoint (and as a CentOS-only shop), we've been closely monitoring the direction Red Hat's enterprise-based products have been taking, and have been planning for a while to explore the Kolla-based containers route because of the backing that Red Hat's usage provides.
Now, based on TripleO's use case it's understandable to make their tooling easier, but I think this also starts rooting into the "OpenStack official containers" and "make OpenStack more application-like" topics that Mohammed Naser was digging into with his TC application.

Operating an OpenStack deployment today is hard, but much better and more enterprise-ready than before; upgrading OpenStack now is a dream compared to before (we jumped Rocky -> Train just some weeks ago), but my opinion is that we are missing a lot of view on OpenStack as an application today.

It would be sad to see Red Hat's involvement in Kolla scale down. Just my 2c, which was probably mostly off-topic from TripleO; my apologies for that.

Best regards
Tobias
On Sun, May 3, 2020 at 2:35 PM Tobias Urdin <tobias.urdin@binero.com> wrote:
Interesting discussion going on in this thread!
From a mostly operator viewpoint (and a CentOS-only shop), we've been closely monitoring the direction Red Hat's enterprise-based products have been taking, and have been planning for a while to explore the Kolla-based containers route because of Red Hat's backing and usage of it.
Now, based on TripleO's use case it's understandable to make their tooling easier, but I think this also starts touching on the official OpenStack containers and "make OpenStack more application-like" themes that Mohammed Naser was digging into with his TC application.
The crux of all of this is: what is the official base and software packaging/distribution method? "Official Images" are nice but likely can't be consumed by a number of folks due to the previous two items. Whenever folks are talking containers they should s/container/package/ and ask the same questions. Providing "Official Images" seems to go against the original intent of letting folks build what they need as they need it. Having lived in OpenStack deployment for 5 years now, I can tell you just having a thing doesn't solve anything when it comes to OpenStack. I'm not really certain what the intent of this effort is; I guess I should go read up on it.
Operating an OpenStack deployment today is hard, but much better and more enterprise-ready than before. Upgrading OpenStack now is a dream compared to before (we jumped Rocky -> Train just some weeks ago), but my opinion is that we are missing a lot of the view of OpenStack as an application today.
It would be sad to see Red Hat's involvement in Kolla scale down. Just my 2c which probably was mostly offtopic from TripleO, my apologies for that.
There hasn't really been much involvement in Kolla in some time. It was mostly static unless we found a bug or needed to add some new packages/containers. Dependencies get pulled in mostly by way of the RPMs that are installed, so with the exception of building, there isn't much to do anymore. We did provide some python3 conversion bits as we hit them, but I felt like it was mostly just finding all the places where the package names needed to be changed and adding logic. There isn't anything wrong with the Kolla images themselves; it's more that being able to rebuild containers, and their consumption, doesn't seem to be exactly solved by any of the items being discussed. I've always been a proponent of building blocks and letting folks put them together to make something that fits their needs. The current discussions around containers don't seem to be aligned with this. We're currently investigating how we can create building blocks that can be consumed to result in containers: 1) container file generation, 2) building, 3) distribution. The first item is a global problem and is really the main thing that people will continue to struggle with, as it depends on what you're packaging together. Be it UBI8+RPMs, Ubuntu+debian packaging, Ubuntu+cloud dpkgs, Clear Linux+source, etc. That all gets defined in the container file. The rest is building from that and distributing the output. Kolla today does all three things and allows for any of the base container + packaging methods. Since we (tripleo) need these 3 items to remain distinct blocks for various reasons, we would like to see them remain independent, but that seems to go against what anyone else wants.
Best regards, Tobias
On 2020-05-03 15:32:50 -0600 (-0600), Alex Schultz wrote: [...]
I've always been a proponent of building blocks and letting folks put them together to make something that fits their needs. The current discussions around containers don't seem to be aligned with this. We're currently investigating how we can create building blocks that can be consumed to result in containers: 1) container file generation, 2) building, 3) distribution. The first item is a global problem and is really the main thing that people will continue to struggle with, as it depends on what you're packaging together. Be it UBI8+RPMs, Ubuntu+debian packaging, Ubuntu+cloud dpkgs, Clear Linux+source, etc. That all gets defined in the container file. The rest is building from that and distributing the output. Kolla today does all three things and allows for any of the base container + packaging methods. Since we (tripleo) need these 3 items to remain distinct blocks for various reasons, we would like to see them remain independent, but that seems to go against what anyone else wants. [...]
My understanding of Mohammed's proposal is that he wants to create basic building blocks out of Docker container images which can be reused in a variety of contexts. Opinionated on the underlying operating system and installation method, it seems like the suggestion is to have something akin to Python sdist or wheel packages, but consumable from DockerHub using container-oriented tooling instead of from PyPI using pip. The images would in theory be built in a templated and consistent fashion taking cues from the Python packaging metadata, similar to Monty's earlier PBRX experiment maybe. -- Jeremy Stanley
Hi all, sorry to jump in, but I have chosen to use TripleO with CentOS (around 10 clouds with 16 racks), and several production clouds as RHOSP (3). I'm not sure how using ubi8 impacts CentOS8 and future releases...

Also, I loved the concepts that Emilien and Alex mention. Quote by Emilien: "Our proposal is going to make it extremely simple: one YAML file per image with no downstream version of it. No extra overrides; no complications. One file, consumed upstream, downstream everywhere. As for customers/partners/third party; it'll be very easy to create their own images. The new interface is basically the Dockerfile one; and we'll make sure this is well documented with proper examples (e.g neutron drivers, etc)."

And Alex's idea, as I understood it: "The user selects whichever image base they want (OpenBSD + pip install of OSP components, or maybe even an exact build and/or tag from GitHub, or even a local copy?!) in one simple build file, generates those images in the user's own environment, and places them in the "undercloud" or a local docker/podman repo."

That would be perfect for me as my second step: once I have all the setup deviations in place I might like/need to apply some additional tools into the containers, plus some light infra modifications like logging... Maybe TripleO already has this covered; I will need to dig into it.

Sorry for making you read all of this (if someone did), as I could only help by doing deployments and running some tests (though running tests is the OSP part, not TripleO) with high throughput (around 17 Mpps with small packets [mega packets per second]) and running instances with 2000 IP addresses on a single port, as the app is too old to rewrite, but it is still in use and will be :) and I believe that even some of you use it indirectly even now :)

Thank you, have a nice daytime and keep a smile on your face!
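To make the "one YAML file per image" idea quoted above a bit more concrete, here is a purely illustrative sketch; the file name, keys and values are invented for this example and are not the actual schema from the TripleO spec:

    cat > keystone.yaml <<'EOF'
    # Hypothetical per-image definition consumed by the build tooling.
    base: registry.access.redhat.com/ubi8/ubi
    packages:
      - openstack-keystone
      - httpd
      - mod_ssl
    labels:
      maintainer: "TripleO"
    EOF

The point being argued in the thread is that a downstream rebuild would consume exactly the same file, just with different repositories enabled at build time.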
On Sun, 2020-05-03 at 21:42 +0000, Jeremy Stanley wrote:
My understanding of Mohammed's proposal is that he wants to create basic building blocks out of Docker container images which can be reused in a variety of contexts. Opinionated on the underlying operating system and installation method, it seems like the suggestion is to have something akin to Python sdist or wheel packages, but consumable from DockerHub using container-oriented tooling instead of from PyPI using pip. The images would in theory be built in a templated and consistent fashion taking cues from the Python packaging metadata, similar to Monty's earlier PBRX experiment maybe.

Well, that is kind of what kolla was trying to do in a way. Those images will not be usable by a downstream product due to the enforced choice of distro and install method, and the fact that they include packages that are not supported or exclude packages that are. So they won't be reusable as a building block for ooo (since the proposed images are deb based) or RHOSP (since they are not built with the Red Hat RPMs on RHEL with our downstream backports). The Dockerfiles also won't be reusable unless they are configurable, at which point you are back to kolla again, or each vendor will need to use their own solution.

As a side note, I am not sold on Mohammed's proposal being the right path forward vs just creating another project for that goal. For example, if the intent of using the python slim image as a base is to have a small base image that facilitates deps being installed via pip, I would suggest that we should be looking at python:alpine instead, not the Debian buster slim image.

On providing a set of templates to build images in a consistent fashion: I think we could get even more uniformity in the kolla images by relying more on bindep and having a set of bindep labels per image to control what gets installed.
Is it defined somewhere how to update the images? In particular, is the intermediary layer update taken into account, or is it expected that everything is rebuilt from scratch to get any updates? I can't find much information about that in this proposal or in the TC container-images goal, and that seems like a non-trivial process. Could this be a good opportunity for collaboration? Regards, -Tristan
On 2020-05-03 23:07:56 +0000 (+0000), Tristan Cacqueray wrote: [...]
Is it defined somewhere how to update the images?
Images are built when their respective source repositories change. I gather there is work in progress in Zuul to be able to trigger new builds of images when related repositories change as well, though I don't recall the exact details.
In particular, is the intermediary layer update taken into account, or is it expected that everything is rebuilt from scratch to get any updates?
I won't claim to be an expert in these matters, but my understanding is that Zuul's dependency mechanisms guarantee that an image won't be promoted unless the images it depends on are also promoted.
I can't find much information about that in this proposal or in the TC container-images goal, and that seems like a non-trivial process. Could this be a good opportunity for collaboration?
Absolutely! This is why we're trying to get more projects to try to make use of the workflow, so we have opportunities to improve on it. -- Jeremy Stanley
On 03.05.2020 at 22:30, Tobias Urdin wrote:
It would be sad to see Red Hat's involvement in Kolla scale down. Just my 2c which probably was mostly offtopic from TripleO, my apologies for that.
I am one of the Kolla core devs. Under my Linaro hat there is also a Red Hat one ;D In Kolla I do mostly AArch64 stuff, keep Debian support alive and fight CI fires. That's because I am a member of the Red Hat ARM team assigned as an engineer to the Linaro project.
Let's for a minute imagine that each of the raised concerns is addressable. And as a thought experiment, let's put here WHAT has to be addressed for Kolla w/o the need of abandoning it for a custom tooling: On 03.05.2020 21:26, Alex Schultz wrote:
[...]
I believe the issue is that the overall process to go from zero to an application in the container is something like the following:
1) input image (centos/ubi0/ubuntu/clear/whatever)
* support ubi8 base images
2) Packaging method for the application (source/rpm/dpkg/magic)
* abstract away all the packaging methods (at least above the base image) to some (better?) DSL perhaps
3) dependencies provided depending on item #1 & 2 (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
* abstract away all the dependencies (atm I can only think of go.mod & Go's vendor packages example, sorry) to some extra DSL & CLI tooling may be
4) layer dependency declaration (base -> nova-base -> nova-api, nova-compute, etc)
* is already fully covered above, I suppose
5) How configurations are provided to the application (at run time or at build)
(what is missing for the run time, almost a perfection yet?) * for the build time, is already fully covered above, i.e. extra DSL & CLI tooling (my biased example: go mod tidy?)
6) How application is invoked when container is ultimately launched (via docker/podman/k8s/etc)
* have a better DSL to abstract away all the container runtime & orchestration details beneath
7) Container build method (docker/buildah/other)
* support for buildah (more fancy abstractions and DSL extensions ofc!)
The answer to each one of these is dependent on the expectations of the user or application consuming these containers. Additionally this has to be declared for each dependent application as well (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost because it needs to support any number of combinations for each of
* have better modularity: offload some of the "combinations" to interested 3rd party maintainers (split repos into pluggable modules) and their own CI/CD.
these. Today TripleO doesn't use the build method provided by Kolla anymore because we no longer support docker. This means we only use Kolla to generate Dockerfiles as inputs to other processes. It should
NOTE: there is also the kolla startup/config API on which TripleO will *have to* rely for the next 3-5 years or so. Its compatibility shall not be violated. (A sketch of that interface follows after this message.)
be noted that we also only want Dockerfiles for the downstream because they get rebuilt with yet another different process. So for us, we don't want the container and we want a method for generating the contents of the container.
* and again, have better pluggability to abstract away all the downstream vs upstream specifics (btw, I'm not bought on the new custom tooling can solve this problem in a different way but still using better/simpler DSL & tooling)
IMHO containers are just glorified packaging (yet again and one that lacks ways of expressing dependencies which is really not beneficial for OpenStack). I do not believe you can or should try to unify the entire container declaration and building into a single application. You could rally around a few different sets of tooling that could provide you the pieces for consumption. e.g. A container file templating engine, a building engine, and a way of expressing/consuming configuration+execution information.
I applaud the desire to try and unify all the things, but as we've
So the final call: have a pluggable and modular design, and adjust the DSL and tooling to meet those goals for Kolla. Then anyone who doesn't chase unification just sets up their own module and plugs it into the build pipeline. Hint: that "new simpler tooling for TripleO" may be that pluggable module!
seen time and time again when it comes to deployment, configuration and use cases. Trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases (look at tripleo for crying out loud). I believe it's time to stop trying to solve all the things with a giant hammer and work on a bunch of smaller nails and let folks construct their own hammer.
-- Best regards, Bogdan Dobrelya, Irc #bogdando
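For readers who have not seen it, the Kolla startup/config API that the NOTE above refers to is essentially a small JSON contract read by the image's kolla_start entrypoint. A rough sketch follows; the field names and paths are per the Kolla interface as I understand it, while the keystone-flavoured values, image reference and mount paths are only illustrative:

    mkdir -p /etc/kolla/keystone
    cat > /etc/kolla/keystone/config.json <<'EOF'
    {
      "command": "/usr/sbin/httpd -DFOREGROUND",
      "config_files": [
        {"source": "/var/lib/kolla/config_files/src/*",
         "dest": "/",
         "merge": true,
         "preserve_properties": true}
      ]
    }
    EOF
    # At run time the deployment tool mounts the config and lets kolla_start
    # copy it into place before exec'ing the service command.
    podman run -d --name keystone \
        -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        -v /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro \
        -v /var/lib/config-data/keystone/:/var/lib/kolla/config_files/src/:ro \
        <registry>/<namespace>/keystone:latest

Whatever replaces the Kolla Dockerfiles would need to keep shipping that entrypoint and honouring the same config.json layout for the compatibility Bogdan mentions.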
On Tue, 2020-05-05 at 15:56 +0200, Bogdan Dobrelya wrote:
Let's for a minute imagine that each of the raised concerns is addressable. And as a thought experiment, let's put here WHAT has to be addressed for Kolla w/o the need of abandoning it for a custom tooling:
[...]
I believe the issue is that the overall process to go from zero to an application in the container is something like the following:
1) input image (centos/ubi0/ubuntu/clear/whatever)
* support ubi8 base images
2) Packaging method for the application (source/rpm/dpkg/magic)
* abstract away all the packaging methods (at least above the base image) to some (better?) DSL perhaps
I'm not sure a custom DSL is the answer to any of the issues.
3) dependencies provided depending on item #1 & 2 (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
* abstract away all the dependencies (atm I can only think of go.mod & Go's vendor packages example, sorry) to some extra DSL & CLI tooling may be
We have bindep, which is meant to track all binary dependencies in a multi-distro way, so that is the solution to package dependencies.
4) layer dependency declaration (base -> nova-base -> nova-api, nova-compute, etc)
* is already fully covered above, I suppose
5) How configurations are provided to the application (at run time or at build)
(what is missing for the run time, almost a perfection yet?) * for the build time, is already fully covered above, i.e. extra DSL & CLI tooling (my biased example: go mod tidy?)
Kolla does it all outside the container, so this is a non-issue really. It's done at runtime by design, which is the most flexible design and the correct one IMO.
6) How application is invoked when container is ultimately launched (via docker/podman/k8s/etc)
* have a better DSL to abstract away all the container runtime & orchestration details beneath
This has nothing to do with kolla. This is a concern for ooo or kolla-ansible; kolla just provides images, it does not execute them. A DSL is not useful to address this.
7) Container build method (docker/buildah/other)
* support for buildah (more fancy abstractions and DSL extensions ofc!)
Buildah supports the Dockerfile format as far as I am aware, so no, we don't need a DSL. Again, I strongly think that is the wrong approach. We just need to add a config option to the kolla-build binary and add a second module that will invoke buildah instead of docker, so we would just have to modify https://github.com/openstack/kolla/blob/master/kolla/image/build.py. We should not need to modify the Dockerfile templates to support this in any way. If kolla-ansible wanted to also support podman it would just need to reimplement https://github.com/openstack/kolla-ansible/blob/master/ansible/library/kolla... to provide the same interface but invoke podman. Similarly you could add a config option to select which module to invoke in a task.
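To make that concrete: because buildah can consume the same Dockerfile format, the swap is mostly a matter of which binary gets invoked once the templates are rendered. A rough sketch; the paths and tag are made up, and the parent images would need to exist first:

    # Render the Dockerfiles with kolla-build, then build one of them with
    # buildah instead of the docker daemon ("bud" = build-using-dockerfile).
    kolla-build --template-only --work-dir /tmp/kolla-work keystone
    cd /tmp/kolla-work/docker/keystone   # wherever the rendered Dockerfile ended up
    buildah bud -t 127.0.0.1:5000/kolla/centos-binary-keystone:train .

A config option in kolla-build that selects a buildah-based build module instead of the docker one, as suggested above, would essentially just wrap this invocation.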
The answer to each one of these is dependent on the expectations of the user or application consuming these containers. Additionally this has to be declared for each dependent application as well (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost because it needs to support any number of combinations for each of
* have better modularity: offload some of the "combinations" to interested 3rd party maintainers (split repos into pluggable modules) and their own CI/CD.
We had talked about having a kolla-infra repo for non-OpenStack services in the past, but the effort was deemed not worth it. So non-OpenStack containers like rabbitmq, mariadb and openvswitch could be split out, or we could use community-provided containers, but I'm not sure this is needed. The source vs binary distinction only applies to OpenStack services; it does not apply to infra containers. Regardless of the name, those are always built using distro binary packages.
these. Today TripleO doesn't use the build method provided by Kolla anymore because we no longer support docker. This means we only use Kolla to generate Dockerfiles as inputs to other processes. It should
NOTE: there is also kolla startup/config APIs on which TripleO will *have to* rely for the next 3-5 years or so. Its compatibility shall not be violated.
be noted that we also only want Dockerfiles for the downstream because they get rebuilt with yet another different process. So for us, we don't want the container and we want a method for generating the contents of the container.
* and again, have better pluggability to abstract away all the downstream vs upstream specifics (btw, I'm not bought on the new custom tooling can solve this problem in a different way but still using better/simpler DSL & tooling)
Pluggability won't help. The downstream issue is just that we need to build the container using downstream repos which have patches, and in some cases we want to add or remove dependencies based on what is supported in the product. If we use bindep correctly, we can achieve this without needing to add pluggability and the complexity that would involve. If we simply have a list of bindep labels to install per image, and then update all bindep files in the component repos to have a label per backend, we could use a single template with default labels that, when building downstream, could simply be overridden to use the labels we support. For example, if nova had say libvirt, vmware, xen and ceph, we could install all of them by default, installing the bindeps for libvirt, vmware, xen and ceph. Downstream we could just enable libvirt and ceph, since we don't support xen or vmware. You would do this via the build config file with a list of labels per image. In the Dockerfile you would just loop over the labels doing "bindep <label> | <package manager> install ..." to control what got installed. That could be abstracted behind a macro fairly simply, by either extending the existing source config opts with a labels section https://github.com/openstack/kolla/blob/master/kolla/common/config.py#L288-L... or creating a similar one. We would need to create a community goal to have all services adopt and use bindep to describe the deps for the different backends they support, but that would be a good goal apart from this discussion.
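A sketch of what that loop boils down to inside an image build; "libvirt" and "ceph" are example bindep profile names here, and the package-manager call would differ per distro:

    # bindep reads bindep.txt from the current directory by default; -b prints
    # the missing packages for the requested profiles, one per line.
    bindep -b libvirt ceph | xargs -r dnf -y install

A downstream build would then simply pass a different profile list (e.g. dropping xen/vmware) without touching the template itself.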
IMHO containers are just glorified packaging (yet again and one that lacks ways of expressing dependencies which is really not beneficial for OpenStack). I do not believe you can or should try to unify the entire container declaration and building into a single application. You could rally around a few different sets of tooling that could provide you the pieces for consumption. e.g. A container file templating engine, a building engine, and a way of expressing/consuming configuration+execution information.
I applaud the desire to try and unify all the things, but as we've
So the final call: have a pluggable and modular design, and adjust the DSL and tooling to meet those goals for Kolla. Then anyone who doesn't chase unification just sets up their own module and plugs it into the build pipeline. Hint: that "new simpler tooling for TripleO" may be that pluggable module!
I don't think this is the right direction, but that said I'm not going to be working on ooo or kolla in either case to implement my alternative. Modularity and pluggability is not the answer in this case, in my view. Unifying and simplifying the build system so that it can be used downstream with no overrides and minimal configuration cannot be achieved by plugins and modules.
seen time and time again when it comes to deployment, configuration and use cases. Trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases (look at tripleo for crying out loud). I believe it's time to stop trying to solve all the things with a giant hammer and work on a bunch of smaller nails and let folks construct their own hammer.
On 05.05.2020 16:45, Sean Mooney wrote:
[...]
What I meant is hiding the downstream vs upstream differences behind configurable parameters that fit into some (probably versioned) schema, and having those YAML files, for example, sitting in a third-party repo with custom image-build CI jobs set up. Wouldn't that help at all? Anyway, my intention was only to give a few examples and naive suggestions to illustrate the idea. I wasn't aiming to get everything right and win all the prizes on the first hit, but to iterate collaboratively to clarify the real problem scope and possible alternatives; at least for the subject spec in TripleO, at most for the Kolla roadmap as well.
-- Best regards, Bogdan Dobrelya, Irc #bogdando
On 05.05.2020 at 15:56, Bogdan Dobrelya wrote:
Let's for a minute imagine that each of the raised concerns is addressable. And as a thought experiment, let's put here WHAT has to be addressed for Kolla w/o the need of abandoning it for a custom tooling:
1) input image (centos/ubi0/ubuntu/clear/whatever)
* support ubi8 base images
"kolla-build --base-image ubi8" has you covered. Or you can provide a patch which will switch to ubi8 for some of existing targets. Easy, really.
2) Packaging method for the application (source/rpm/dpkg/magic)
* abstract away all the packaging methods (at least above the base image) to some (better?) DSL perhaps
What is DSL? Digital Subscriber Line? Did Something Likeable? Droids Supporting Legacy? I decided to not follow rest of discussion. Let you guys invent something interesting and working. I can just follow then.
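For reference, the kind of invocation Marcin is pointing at looks roughly like this; the registry, tag and image filters are illustrative, while --base, --base-image and --base-tag are kolla-build's documented options:

    # Build the CentOS-family images on top of a UBI 8 base instead of the
    # default centos base image.
    kolla-build --base centos \
        --base-image registry.access.redhat.com/ubi8/ubi \
        --base-tag latest \
        keystone nova-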
On Tue, 2020-05-05 at 16:57 +0200, Marcin Juszkiewicz wrote:
[...]
2) Packaging method for the application (source/rpm/dpkg/magic)
* abstract away all the packaging methods (at least above the base image) to some (better?) DSL perhaps
What is DSL? Digital Subscriber Line? Did Something Likeable? Droids Supporting Legacy?
Domain-specific language.
There was a proposal a few years ago, before we started to use macros and other features in Jinja, to create a DSL for kolla. Utilising Jinja, which is itself a DSL, was seen as less of a learning curve than creating a custom DSL for kolla. The Dockerfile format is also a DSL.
On Tue, May 5, 2020 at 9:12 AM Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> wrote:
"kolla-build --base-image ubi8" has you covered. Or you can provide a patch which will switch to ubi8 for some of existing targets. Easy, really.
Yea, my list wasn't saying kolla had deficiencies or anything, but rather listing the core concepts that are needed for something like this. In fact kolla does tick the boxes for all of these, however it may be opinionated in some areas (e.g. building has to use docker) which may not make sense for others to consume. It might also not be in the best interest of the project to actually push support for alternative solutions if there isn't a larger demand from the community.
2) Packaging method for the application (source/rpm/dpkg/magic)
* abstract away all the packaging methods (at least above the base image) to some (better?) DSL perhaps
What is DSL? Digital Subscriber Line? Did Something Likeable? Droids Supporting Legacy?
Domain-specific language. The proposal kinda includes something to that effect: it lets us define YAML with a specific structure which gets turned into a Dockerfile equivalent.
I decided to not follow rest of discussion. Let you guys invent something interesting and working. I can just follow then.
Yea I feel like we're going in circles now. Feel free to follow the spec and if it makes sense to contribute it elsewhere or move it we can discuss that later. Right now we're working on something that we think addresses our specific needs without having to re-write significant portions of other projects and impacting everyone. It may not make sense for everyone, but we're investigating it for V. Thanks, -Alex
On Tue, 5 May 2020 at 16:24, Alex Schultz <aschultz@redhat.com> wrote:
Yea, my list wasn't saying kolla had deficiencies or anything, but rather listing the core concepts that are needed for something like this. In fact kolla does tick the boxes for all of these, however it may be opinionated in some areas (e.g. building has to use docker) which may not make sense for others to consume. It might also not be in the best interest of the project to actually push support for alternative solutions if there isn't a larger demand from the community.
Support for buildah (and podman in kolla-ansible) comes up as a request at most design sessions. It wouldn't be too hard to add, and wouldn't see resistance from me at least.
Yea I feel like we're going in circles now. Feel free to follow the spec and if it makes sense to contribute it elsewhere or move it we can discuss that later. Right now we're working on something that we think addresses our specific needs without having to re-write significant portions of other projects and impacting everyone. It may not make sense for everyone, but we're investigating it for V.
Clearly a large rewrite of kolla would be likely to get some pushback, but I expect there are a number of places where Tripleo has worked around kolla rather than with it. The idea that Tripleo requirements should not impact kolla is wrong - it is one of two main consumers (the other being kolla-ansible), and I would like to think we would accommodate your requirements where possible, if resources are provided to implement the changes.
Thanks, -Alex
On Fri, 1 May 2020 at 21:19, Kevin Carter <kecarter@redhat.com> wrote:
[...]
Thanks for sharing this Kevin & Emilien. I have mixed feelings about it. Of course it is sad to see a large consumer of the Kolla images move on and build something new, rather than improving common tooling. OTOH, I can see the reasons for doing it, and it has been clear since Martin Andre left the core team that there was not too much interest from Red Hat in pushing the Kolla project forward. I will resist the urge to bikeshed on image size :)

I will make an alternative proposal, although I expect it has at least one fatal flaw. Kolla is composed mainly of two parts - a kolla-build Python tool (the aforementioned *glorious* templating engine) and a set of Dockerfile templates. It's quite possible to host your own collection of Dockerfile templates, and pass these to kolla-build via --docker-dir. Tripleo could maintain its own set, and cut out the uninteresting parts over time. The possibly fatal flaw? It wouldn't support the buildah shell format. It is just another template format though, perhaps it could be made to work with minimal changes. The benefit would be that you get to keep the template override format that is (presumably) exposed to users.

If you do decide to go (I expect that decision has already been made), please keep us in the loop. We will of course continue to support Tripleo for as long as necessary, but if you could provide us with a timeframe it will help us to plan life after Tripleo. We're also looking for ways to streamline and simplify, and as Alex mentioned, this opens up some new possibilities. At the very least we can drop the tripleoclient image :)

Finally, if you could share any analysis that ends up being done on the images, and outcomes from it (e.g. drop these packages), that would be a nice parting gift.
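For completeness, the two extension points Mark refers to look roughly like this; the paths, file names and the extra package are made-up examples, while --docker-dir, --template-override and the parent_template/footer blocks are the kolla-build mechanisms being described:

    # 1) Maintain your own collection of Dockerfile templates and point
    #    kolla-build at it.
    kolla-build --docker-dir /opt/tripleo-templates/docker keystone

    # 2) Or keep the stock templates and layer changes on top via a Jinja2
    #    template override.
    cat > my-override.j2 <<'EOF'
    {% extends parent_template %}
    {% block base_footer %}
    RUN dnf -y install some-extra-package && dnf clean all
    {% endblock %}
    EOF
    kolla-build --template-override my-override.j2 keystone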
Hi Mark,

On Mon, May 4, 2020 at 5:02 AM Mark Goddard <mark@stackhpc.com> wrote:
> Thanks for sharing this Kevin & Emilien. I have mixed feelings about it. Of course it is sad to see a large consumer of the Kolla images move on and build something new, rather than improving common tooling.
Please keep in mind that we're not doing this work because we're looking for work (we're already pretty busy with other topics); the container images discussion has been on the table for a very long time, and I think it's the right time to finally take some action. If we decide to "leave Kolla", you can count on us if help is needed for anything.

On "build something new rather than improving common tooling", Alex answered that much better than I can: "(...) Trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases. (...)"

The way Kevin and I worked is simple: we met with the folks who actually build the containers for TripleO and acknowledged the gap between the upstream tooling and how it's being consumed downstream. There is a lot of complexity in the middle that we aim to remove with our proposed solution. For example, one condition of satisfaction for the new tooling is that it must be directly consumable without having to override anything, with the code and configs 100% upstream. With the Kolla tooling, we are far from that:

- TripleO overrides in tripleo-common upstream
- overrides in tripleo-common downstream
- multiple hacks in tooling to build containers for OSP

Not great, huh? Yes, someone could blame Red Hat for not contributing all the changes back to Kolla, but that isn't so easy if you have been involved in these efforts yourself. For once, we have a strong desire to make it 100% public with no hacks, and it should never affect our TripleO consumers (upstream or downstream). In fact, it will enable more integration to patch containers from within our Deployment Framework and solve other problems not covered in this thread (hotfixes, etc.).

> OTOH, I can see the reasons for doing it, and it has been clear since Martin Andre left the core team that there was not too much interest from Red Hat in pushing the Kolla project forward.
To be fair, no. The review velocity in Kolla is not bad at all and you folks are always nice to work with. Really.
> I will resist the urge to bikeshed on image size :)
So talking about image size in the thread was my idea, and I shouldn't have done that. I'll say it again here: the image size wasn't our main driver for this change; it has always been a bonus only. I rebuilt images with our new tooling last night, based on centos8, and I got the same sizes as the Kolla images. So unless we go with ubi8 (which I'm sure could have been done with Kolla as well), the image size won't be any smaller.
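[Editor's sketch] For anyone who wants to reproduce that kind of size comparison locally, a rough sketch follows; it assumes podman is installed and the image tags are placeholders for whatever was built on each side.

    # Sketch only: compare local image sizes via `podman image inspect`.
    # The image tags below are hypothetical placeholders.
    import json
    import subprocess


    def image_size_mb(image):
        """Return the size of a locally stored image in megabytes."""
        out = subprocess.run(
            ["podman", "image", "inspect", image],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)[0]["Size"] / 1_000_000


    if __name__ == "__main__":
        for tag in ("kolla/centos-binary-keystone:master", "localhost/keystone:latest"):
            print(f"{tag}: {image_size_mb(tag):.0f} MB")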
> I will make an alternative proposal, although I expect it has at least one fatal flaw. Kolla is composed mainly of two parts - a kolla-build Python tool (the aforementioned *glorious* templating engine) and a set of Dockerfile templates. It's quite possible to host your own collection of Dockerfile templates, and pass these to kolla-build via --docker-dir. Tripleo could maintain its own set, and cut out the uninteresting parts over time. The possibly fatal flaw? It wouldn't support the buildah shell format. It is just another template format though, perhaps it could be made to work with minimal changes. The benefit would be that you get to keep the template override format that is (presumably) exposed to users.
This should _at the very least_ be a documented alternative in the spec proposed by Kevin. I agree we should take a look; I just discussed it with Kevin, and we'll do it this week and report back.

> If you do decide to go (I expect that decision has already been made), please keep us in the loop.
No decision has been made. To be fully transparent, Kevin and I worked on a prototype for 6 days, proposed a spec on the 7th day, and here we are. We'll do this in the open and in full transparency; we acknowledge that we have put ourselves in this position, but we aim to fix it.
> We will of course continue to support Tripleo for as long as necessary, but if you could provide us with a timeframe it will help us to plan life after Tripleo. We're also looking for ways to streamline and simplify, and as Alex mentioned, this opens up some new possibilities. At the very least we can drop the tripleoclient image :)
Re: the tripleoclient image: let's remove it now. It was never useful. I'll propose a patch this week. Yes, we will keep you in the loop, and thanks for your willingness to maintain Kolla during our transition. Note that this goes both ways: we are also happy to keep working with you, maybe on some other aspects (maintaining centos8, CI, etc.).

> Finally, if you could share any analysis that ends up being done on the images, and outcomes from it (e.g. drop these packages), that would be a nice parting gift.
Yes, I have a bunch of things that I'm not sure are useful anymore. I'll make sure we document them in an etherpad and share them with you as soon as possible.

Thanks,
--
Emilien Macchi
participants (11)
- Alex Schultz
- Bogdan Dobrelya
- Emilien Macchi
- Jeremy Stanley
- Kevin Carter
- Marcin Juszkiewicz
- Mark Goddard
- Ruslanas Gžibovskis
- Sean Mooney
- Tobias Urdin
- Tristan Cacqueray