Hi, loci folks,

The release management team has been conducting a review of OpenStack official deliverables, to make sure nothing was falling between the cracks release-management-wise.

As part of that review, we added a "release-management:" key to deliverables defined in the project governance repository in case they were not to be handled by the release management team using the openstack/releases repository. For example, some deliverables (like docs) are continuously published and therefore marked "release-management:none". Others (like charms) are released manually on 3rd-party platforms and therefore marked "release-management:external".

In that context, the situation of the loci deliverable (openstack/loci repository) is unclear. Is it:

1. meant to be released through the openstack/releases repository, but just not ready yet (any hint of when it will be?)
2. not meant to be "released" or tagged but more like continuously published (in which case we should add "release-management:none" to it)
3. meant to be released, but on a 3rd-party platform like the Docker Hub, and not using openstack/releases to drive it (in which case we should add "release-management:external" to it)

From previous discussions it appears (2) would best describe the loci situation, but I wanted to double-check that it was still the case.

Thanks,

-- Thierry Carrez (ttx)
Chris has proposed to discuss this in LOCI's next community meeting, which happens Friday. More news tomorrow! Regards, Jean-Philippe Evrard (evrardjp)
It's worthwhile to talk about the current goals of Loci and how they relate to the project release.

Loci is a set of Dockerfiles and scripts that build OpenStack service containers from source. Loci can target container builds for three distributions (Ubuntu, CentOS, and SUSE) and for all of the major OpenStack services. By default, builds are from the HEAD of master for each project, but they can be set to build from stable project branches.

What does this mean for release management of Loci? As of right now, any API changes to Loci remain backwards compatible. If you are using Loci to build container images, you will most likely benefit from running from master. We haven't made an official release of Loci because that would imply that we believed an end user would benefit from a stable release point. So far, the Loci team has not felt that way. However, there is the possibility that for auditing purposes one would want to know the exact code that was used to build container images. To this end, we'll discuss a possible release versioning process at our weekly meeting tomorrow. If you are a user or potential user of Loci and have an opinion about this, please attend the meeting tomorrow (Friday, November 30 at 1500 UTC in the #openstack-loci channel) or reply to this thread. There's also the possibility that to add major new features, like building container images from packages rather than source, we will have to introduce major API changes, which would be useful to signal to end users. Right now no work has been done to advance package builds.

As for artifacts created by Loci: the Loci gate publishes images to Docker Hub as part of its testing process. These are builds of OpenStack services from the HEAD of master for each of those projects, and should not be used in production. The Loci team strongly encourages consumers of Loci containers to build and maintain their own image sets. This is considered best practice for security and maintainability.
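For readers who haven't driven a Loci build themselves, here is a minimal sketch. The PROJECT and PROJECT_REF build args follow the names in the Loci README at the time of writing, but verify against the repository before relying on them; the docker commands are echoed rather than executed so they can be reviewed first.

```shell
# Sketch: build a service image from master, then from a stable branch.
# Build-arg names are taken from the Loci README; double-check them there.
LOCI_REPO=https://git.openstack.org/openstack/loci.git

build_image() {
  project=$1
  ref=$2
  tag=$(printf '%s' "$ref" | tr '/' '-')   # "stable/rocky" -> "stable-rocky"
  # Echoed, not run, so the full command can be inspected before use.
  echo docker build "$LOCI_REPO" \
    --build-arg PROJECT="$project" \
    --build-arg PROJECT_REF="$ref" \
    --tag "loci/$project:$tag"
}

build_image keystone master
build_image keystone stable/rocky
```

Dropping the `echo` (and pointing PROJECT_REF at the branch you care about) is all that separates this from a real build.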
To answer your question, as of right now we are at 2: "not meant to be "released" or tagged but more like continuously published". This may change after the meeting tomorrow. Thanks, Chris
Chris Hoge wrote:
[...] To answer your question, as of right now we are at 2: "not meant to be "released" or tagged but more like continuously published". This may change after the meeting tomorrow.
Looking at the meeting logs, it appears the position has not changed. As a result, I proposed: https://review.openstack.org/622902

Cheers,

-- Thierry Carrez (ttx)
There is a need for us to work out whether Loci is even appropriate for stable branch development. Over the last week or so, the CentOS libvirt update has broken all stable branch builds, as it introduced an incompatibility between the stable upper constraints of python-libvirt and libvirt. If we start running stable builds, it might provide a useful gate signal for when stable source builds break against upstream distributions. It's something for the Loci team to think about as we work through refactoring our gate jobs.
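As a toy illustration of the kind of check a stable gate could surface, here is a version comparison in shell. This assumes GNU sort's `-V` flag, and the rule it encodes (binding at least as new as the daemon) is a deliberate simplification for illustration, not the real python-libvirt/libvirt compatibility matrix.

```shell
# Illustrative only: flag when the pinned python-libvirt binding looks
# older than the libvirt the distro ships. Real compatibility is more
# nuanced than a plain version comparison.
binding_ok() {
  # $1 = python-libvirt pin from upper-constraints, $2 = system libvirt.
  # sort -V (GNU version sort) picks the newer of the two versions.
  newer=$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)
  [ "$newer" = "$1" ]
}

binding_ok 4.7.0 4.5.0 && echo "compatible" || echo "binding too old"
```

In a real gate job, the two versions would come from the stable upper-constraints file and from the distro image itself, and a failed check would be exactly the early signal described above.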
On Thu, Dec 13, 2018 at 09:43:37AM -0800, Chris Hoge wrote:
There is a need for us to work out whether Loci is even appropriate for stable branch development. Over the last week or so the CentOS libvirt update has broken all stable branch builds as it introduced an incompatibility between the stable upper constraints of python-libvirt and libvirt.
Yup, as we've seen on https://review.openstack.org/#/c/622262 this is a common thing and happens with every CentOS minor release. We're working the update to make sure we don't cause more breakage as we try to fix this thing.
If we start running stable builds, it might provide a useful gate signal for when stable source builds break against upstream distributions. It's something for the Loci team to think about as we work through refactoring our gate jobs.
That's an interesting idea. Happy to discuss how we can do that in a way that makes sense for each project. How long does a LOCI build take?

Yours Tony.
Loci makes one build for each OpenStack project you want to deploy. The requirements container takes the most time, as it builds a pip wheel for every requirement listed in the openstack/requirements repository, then bind-mounts the complete set of wheels into the service containers during those builds to ensure a complete and consistent set of dependencies. The requirements build must be done serially, but the rest of the builds can be done in parallel.

What I'm thinking is that if we do stable builds of Loci that stand up a simplified all-in-one environment we can run Tempest against, we would get both a signal for the Loci stable build (as well as master) and a signal for requirements. Co-gating means we can check that an update to requirements that fixes one distribution does not negatively impact the stability of the other distributions.

I have some very early work on this in a personal project (this is how I like to spend some of my holiday down time), and we can bring it up as an agenda item for the Loci meeting tomorrow morning.

-Chris
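The serial-then-parallel ordering described above can be sketched in shell. Project names are illustrative, and the docker commands are echoed rather than executed so the sketch only demonstrates the ordering, not a working gate job.

```shell
# Build the requirements image first, then the service images in parallel.
build() {
  # Echo instead of running docker, so this only shows the command shape.
  echo docker build . --build-arg PROJECT="$1" --tag "loci/$1"
}

build requirements   # wheels must exist before any service build starts

for svc in keystone glance nova; do
  build "$svc" &     # service builds only need the finished wheel set
done
wait                 # block until every parallel build has finished
```

A real job would replace the `echo` with actual `docker build` invocations and fail the gate if any background build exits non-zero.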
I like the idea of having REAL testing of the loci images. Currently we just install software, and it's up to deployment tools to configure the images to match their needs. Doing a real test for all distros would be very nice, and a positive addition.

I am curious about how we'd do this, though. I suppose it might require a new job, which will take far more time: after building the necessary images (more than one project!), we need to deploy them together and run tempest on them (therefore ensuring proper image building and co-installability). Or did you mean that you wanted to test each image build separately by running the minimum smoke tests for each image? What about reusing a deployment project job that's using loci in an experimental pipeline? Not sure I understand what you have written :)

Regards,
JP
participants (4)

- Chris Hoge
- Jean-Philippe Evrard
- Thierry Carrez
- Tony Breeds