[openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

Sean Dague sean at dague.net
Mon May 15 18:47:24 UTC 2017


On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:
> For starters, I want to emphasize that a fresh set of Docker Hub
> images was one of the most requested features from Kolla at this
> summit, and a few other features, such as full release upgrade
> gates, more or less require a readily available Docker registry.
> 
> This will have numerous benefits for users who don't have the
> resources to put up a sophisticated CI/staging environment, which,
> I'm willing to bet, is still quite a significant user base. If we do
> it correctly (and we will do it correctly), the images we push will
> go through the series of gates we have in Kolla (and we will have
> more). So when you pull an image, you know it was successfully
> deployed within the scenarios available in our gates, and maybe we
> even add upgrade testing and increase scenario coverage later. That
> is a huge benefit for actual users.

That concerns me quite a bit. Given the nature of the patch story for
containers (which is a rebuild), I really feel like users should have
their own local build / CI pipeline to be deploying this way. Making
that easy for them to do is great, but skipping that required local
infrastructure puts them in a bad position should something go wrong.

I do get that many folks want that, but I think it builds in a set of
expectations that it's not actually possible to meet from an upstream
perspective.

> On 15 May 2017 at 10:34, Doug Hellmann <doug at doughellmann.com> wrote:
>> Last week at the Forum we had a couple of discussions about
>> collaboration between the various teams building or consuming
>> container images. One topic that came up was deciding how to publish
>> images from the various teams to docker hub or other container
>> registries. While the technical bits seem easy enough to work out,
>> there is still the question of precedent and whether it's a good
>> idea to do so at all.
>>
>> In the past, we have refrained from publishing binary packages in
>> other formats such as debs and RPMs. (We did publish debs way back
>> in the beginning, for testing IIRC, but switched away from them to
>> sdists to be more inclusive.) Since then, we have said it is the
>> responsibility of downstream consumers to build production packages,
>> either as distributors or as a deployer that is rolling their own.
>> We do package sdists for python libraries, push some JavaScript to
>> the NPM registries, and have tarballs of those and a bunch of other
>> artifacts that we build out of our release tools.  But none of those
>> is declared as "production ready," and so the community is not
>> sending the signal that we are responsible for maintaining them in
>> the context of production deployments, beyond continuing to produce
>> new releases when there are bugs.
> 
> So for us that would mean something really hacky and bad. We are a
> community-driven, not a company-driven, project. We don't have Red
> Hat or Canonical teams behind us (we have contributors, but that's
> different).
> 
>> Container images introduce some extra complexity, over the basic
>> operating system style packages mentioned above. Due to the way
>> they are constructed, they are likely to include content we don't
>> produce ourselves (either in the form of base layers or via including
>> build tools or other things needed when assembling the full image).
>> That extra content means there would need to be more tracking of
>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> as needed.
> 
> We can do this by building daily, which was in fact the plan. If we
> build every day, you have packages that are at most 24 hours old;
> CVEs and the like in non-OpenStack packages are still handled by the
> distro maintainers.
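The "at most 24 hours old" policy above boils down to a staleness check in a periodic job. A minimal sketch of that check follows; the function name, the threshold variable, and the `kolla-build --push` invocation mentioned in the comment are illustrative assumptions, not an actual Kolla interface:

```shell
# Hypothetical staleness check behind a daily-rebuild job.
MAX_AGE_SECONDS=86400   # rebuild anything older than 24 hours

needs_rebuild() {
    # $1: epoch seconds of the last successful image build
    # $2: current epoch seconds
    [ $(( $2 - $1 )) -gt "$MAX_AGE_SECONDS" ]
}

# A periodic job (cron, a Zuul timer pipeline, ...) would gate the
# expensive rebuild-and-push step on this check, e.g.:
#   needs_rebuild "$LAST_BUILD" "$(date +%s)" && kolla-build --push
if needs_rebuild 0 90000; then
    echo "stale: rebuilding"   # 90000s (25h) elapsed -> stale
fi
```

Note this only captures the age policy, not the harder problem Sean raises next: the rebuild still depends on the gate, the mirrors, and available capacity all being healthy at the moment a CVE lands.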

There have been many instances where 24 hours wasn't good enough, as
embargoes end up pretty weird in terms of when things hit mirrors. It
also assumes that when a CVE hits, no other part of the gate or
infrastructure is wedged in a way that prevents building new packages,
and that the capacity demands don't land during a feature freeze, with
tons of delay built in. There are many single points of failure in
this process.

>> Given our security and stable team resources, I'm not entirely
>> comfortable with us publishing these images, and giving the appearance
>> that the community *as a whole* is committing to supporting them.
>> I don't have any objection to someone from the community publishing
>> them, as long as it is made clear who the actual owner is. I'm not
>> sure how easy it is to make that distinction if we publish them
>> through infra jobs, so that may mean some outside process. I also
>> don't think there would be any problem in building images on our
>> infrastructure for our own gate jobs, as long as they are just for
>> testing and we don't push those to any other registries.
> 
> Today we use the Kolla account for that, and I'm more than happy to
> keep it this way. We license our code under the ASL, which gives no
> guarantees. Containers will be licensed this way too, so they're
> available as-is, and "production readiness" should be decided by
> whoever runs them. That being said, what we *can* promise is that
> our containers passed through more or less rigorous gates, and
> that's more than most packages or self-built containers ever do. I
> think that value would be appreciated by small to mid-size companies
> that just want to work with OpenStack and don't have the means to
> spare teams/resources for CI.

Our upstream gating is pretty limited in the scale of environment it
covers; I really think we're doing people a disservice by encouraging
this kind of OpenStack deployment without a local CI pipeline. Local
environments are so varied that, without a local CI mechanism, I
think this is going to end in lots of tears.

>> I'm raising the issue here to get some more input into how to
>> proceed. Do other people think this concern is overblown? Can we
>> mitigate the risk by communicating through metadata for the images?
>> Should we stick to publishing build instructions (Dockerfiles, or
>> whatever) instead of binary images? Are there other options I haven't
>> mentioned?
> 
> Today we do publish build instructions; that's what Kolla is. We
> also already publish built containers, we just do it manually at
> release time. If we decide to block this, I assume we should stop
> doing that too? That would hurt the users who use this piece of
> Kolla, and I'd hate to hurt our users :(
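For readers outside the thread, the "build instructions" model Doug and Michał are weighing amounts to publishing something like a Dockerfile that consumers build and audit themselves, rather than a prebuilt image. A minimal illustrative sketch, with a made-up base image and package pin (a real Kolla image is assembled quite differently, via kolla-build templates):

```dockerfile
# Illustrative only: consumers build locally, so the base layer and
# package versions are pinned and patched on their side, not upstream's.
FROM ubuntu:16.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends python-pip \
 && rm -rf /var/lib/apt/lists/*
# Hypothetical pin; choosing and rebuilding on CVEs is the operator's job.
RUN pip install nova==15.0.0
CMD ["nova-api"]
```

The trade-off in this thread is exactly who owns rebuilding such an image when a base-layer CVE lands: the upstream publisher of a binary image, or the deployer holding the instructions.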

Having been part of the PostgreSQL deprecation discussions, where it
was clear that far more was read into various support statements than
was true, and where some very large-scale decisions got made on bad
information, I know there are worse things than not doing something
for our users: giving them the wrong set of expectations, and having
them build out assuming more support than they really have.

I'm definitely with Doug; publishing actual Docker images feels like
the wrong direction.

	-Sean

-- 
Sean Dague
http://dague.net


