[openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

Thomas Goirand zigo at debian.org
Tue Apr 19 12:59:19 UTC 2016

On 04/19/2016 01:01 PM, Chris Dent wrote:
> We also, however, need to consider what the future might look like and
> at least for some people and situations

I agree.

> the future does not involve
> debs or rpms of OpenStack: packages and distributions are more trouble
> than they are worth when you want to be managing your infrastructure
> across multiple, isolated environments.

But here, I don't. It is my view that the best approach for containers
is to build them using distribution packages.

> In that case you want
> puppet, ansible, chef, docker, k8s, etc.
> Sure all those things _can_ use packages to do their install but for
> at least _some_ people that's redundant: deploy from a tag, branch,
> commit or head of a repo.
> That's for _some_ people.

This thinking (ie: using pip and trunk, always) was one of the reasons
TripleO failed, and they went back to using packages. Can we learn
from the past?

There are multiple things which are done at the packaging level, like
unit testing at build time, validation of the whole repo using Tempest,
dependencies which aren't Python modules, stuff in postinst (like
managing system users, lock folders, log files, config files, etc.).
Yes, you can hack everything together with puppet or shell scripts, but
you will end up reinventing the wheel for packages that have been proven
to work since the beginning of OpenStack. On top of that, you will not
have all the QA that is invested in packages (piuparts, lintian, etc.).

I've also seen a lot of people attempting to do automated packaging. A
lot of people. Almost always, the outcome is that there are simply too
many exceptions to manage, so it ends up as a pile of super-complicated
scripts. There's simply no way to take human review out of the loop.

Instead of doing that, what I am hoping to do is to automate as many
things as possible, including building packages from trunk and managing
dependencies. But instead of having it produce a definitive result which
I know has a good chance of being broken, I would like to implement a
proposal bot for packaging. What I envision is a proposal bot which will:
- Check what versions of components are available in the current
OpenStack release Debian repository (probably using madison-lite), and
propose updates so that packages always (build-)depend on the latest
version that is packaged.
- Check the (test-)requirements.txt and propose updates to
debian/control, like adding missing new dependencies and removing
deprecated ones.
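The core of the second check could look something like this. This is a
minimal sketch of the idea only, not the real pkgos-tools code: the
function names, the naive PyPI-to-Debian name mapping, and the sample
dependency lists are all my own invention for illustration.

```python
# Hypothetical sketch of the second proposal-bot check: compare the
# Python dependencies declared upstream with those listed in
# debian/control, and report what would need to be added or removed.

def deb_name(pypi_name):
    """Naive PyPI -> Debian binary package name mapping (illustrative;
    the real mapping has many exceptions, hence the human review)."""
    return "python-" + pypi_name.lower().replace(".", "-").replace("_", "-")

def diff_dependencies(requirements, control_depends):
    """Return (missing, deprecated) sets of Debian package names."""
    wanted = {deb_name(r) for r in requirements}
    current = set(control_depends)
    return wanted - current, current - wanted

upstream = ["oslo.config", "pbr", "six"]
in_control = ["python-pbr", "python-six", "python-anyjson"]
missing, deprecated = diff_dependencies(upstream, in_control)
print(sorted(missing))     # dependencies to add to debian/control
print(sorted(deprecated))  # dependencies that could be removed
```

A real bot would of course propose these as a review, not apply them
blindly, which is the whole point of the proposal-bot approach.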

This is already partly implemented in misc/pkgos-parse-requirements in
openstack-pkg-tools, though it doesn't work anymore since our
requirements.txt are now a lot more complex. So it would need a refresh,
and probably a refactor.
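To give an idea of why the old parser broke: requirements lines now
carry version ranges, environment markers, and license comments, all of
which have to be understood. A stdlib-only sketch of the parsing
involved (my own simplified grammar, not the pkgos-parse-requirements
code) could be:

```python
# Minimal parser for modern requirements.txt lines such as:
#   oslo.config>=3.7.0 # Apache-2.0
#   futures>=3.0;python_version=='2.7' # BSD
import re

LINE = re.compile(
    r"^(?P<name>[A-Za-z0-9_.-]+)"   # distribution name
    r"(?P<spec>[^;#]*)"             # version specifiers, e.g. >=1.6,!=2.0
    r"(?:;(?P<marker>[^#]*))?"      # optional environment marker
    r"(?:#.*)?$"                    # optional trailing license comment
)

def parse_requirement(line):
    """Return (name, specifiers, marker), or None for blanks/comments."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    m = LINE.match(line)
    if not m:
        raise ValueError("unparsable requirement: %r" % line)
    return (m.group("name"),
            m.group("spec").strip(),
            (m.group("marker") or "").strip())

print(parse_requirement("oslo.config>=3.7.0 # Apache-2.0"))
print(parse_requirement("futures>=3.0;python_version=='2.7' # BSD"))
```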

All this will bring us closer to the "package and deploy from trunk"
which many teams (including the containers people, the puppet team, and
Fuel contributors) would like to achieve.

If we do all of the above, do you still think that we need to deploy
using virtualenv and/or pip install?

The thing is, if we continue to be able to do the above, by having
workable global-requirements, it won't change the fact that you can do
what you want with containers, and not use packages if you don't want to.

> Keep in mind that I'm presenting my own opinion here. I'm quite sure
> it is different from, for example, Doug's. It's easy to conflate
> arguments such that they appear the same. My position is radical (in
> the sense that it is seeking to resolve root causes and institute
> fundamental change), I suspect Doug's is considerably more moderate
> and evolutionary.

That's what it seems, yes.

> Do I expect it all to
> happen like I want? No. Do I hope that my concerns will be
> integrated in the discussion? Yes.

I fail to see what kind of concerns you have with the current situation.
Could you attempt to make me understand better? What exactly is wrong
for the type of use case you're discussing?

>> Remember that in distros, there's only a single version of a library at
>> any given time, at the exception of transitions (yes, in Red Hat it's
>> technically possible to install multiple versions, but the policy
>> strongly advocates against this).
> Yes, which is part of why distros and packaging are limiting in the
> cloud environment. When there is a security issue or other bug we don't
> want to update those libraries via packages, nor update those libraries
> across a bunch of different containers or what not.
> We simply want to destroy the deployment and redeploy. With new
> stuff. We want to do that without co-dependencies.
> In other words, packaging of OpenStack into rpms and debs is a short
> branch on the tree of life.

What you're talking about (ie: using containers to be able to do atomic
upgrades and rollbacks) is also possible if you use packages inside your
containers. It also has nothing to do with allowing conflicting Python
module versions within the global-requirements.

>> Most users are consuming packages from distributions. Also, if you're
>> using containers, probably you will also prefer using these packages to
>> build your containers: that's the most easy, safe and fast way to build
>> your containers.
> I predict that that is not going to last.

That's what everyone says, but I'm convinced the majority will be proven
wrong! :)

>>> If all-in-one installations are important to the rpm- and deb- based
>>> distributions  then _they_ should be resolving the dependency issues
>>> local to their own infrastructure
>> Who is "they"? Well, approximately 1 person full time for Debian, 1 for
>> Ubuntu if you combine Corey and David (and Debian and Ubuntu
>> dependencies are worked on together in Debian), so that's 2 full time
>> for Ubuntu/Debian. Maybe 2 and a half for RDO if what Haikel told me is
>> still accurate.
> Yes, this is a significant issue and I think is one of the very
> complicated aspects of the strange economics of OpenStack. It's been
> clear from the start that the amount of people involved at the
> distro level has been far too low to satisfy the requirements of the
> users of those distributions. Far.
> However, that problem is, I think, orthogonal to the question of
> effectively creating OpenStack at the upstream (pre-packaging) level.

The point that I was trying to make is that, if upstream stops trying to
accommodate downstream distributions (which so far has worked *very*
well, thanks to everyone contributing non-negligible time and effort),
then it probably won't be possible to package OpenStack in downstream
distros, considering the amount of extra effort that would be required
and the way we're staffed.

> The problem with all that is that co-installation is _bad_. Don't do
> that! If we recognize that a lot of these problems go away.

But the approach you're describing also brings its own list of problems
that aren't trivial to solve.

The biggest concern is security updates, as we already discussed. But
that's not the only one. The overall footprint of OpenStack will also
grow a lot as you will have duplicates of many things running on your
system. Keeping track of what is installed where will be a challenge
at best, and impossible to manage at worst.

>> 2/ get a lot of dependencies out. Yes, let's really do that! We're
>> constantly adding more, I've seen only very few going away. Currently
>> the global-requirements.txt for stable/mitaka is 377 lines long. If you
>> grep out "client", "oslo" and "os-", "openstack", comments, empty lines,
>> and a few others, there's about 260 dependencies left. That's insane at
>> multiple levels. When are we going to see a movement to clean this mess,
>> and define real OpenStack standards?
> That list looks long because it is used for many many projects that
> strive for co-installability. If each project has its own list, the
> lists are, individually, much shorter.

Oh, actually, that's a very good point! We could move these to
individual packages, even if we keep co-installability. For example,
python-pulp could be removed as it's only used by Congress. Yaql could
move to Murano. Etc.
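Finding such candidates could even be automated: any dependency with
exactly one consuming project is a candidate for moving out of
global-requirements into that project. A toy illustration, with a
made-up project/dependency mapping (only pulp/Congress and yaql/Murano
reflect the examples above):

```python
# Given which projects consume which dependencies, report the
# dependencies used by exactly one project: these could move from
# global-requirements into that project's own requirements.
from collections import defaultdict

consumers = {
    "congress": {"pulp", "pbr", "six"},
    "murano":   {"yaql", "pbr", "six"},
    "nova":     {"pbr", "six"},
}

def single_consumer_deps(consumers):
    users = defaultdict(set)
    for project, deps in consumers.items():
        for dep in deps:
            users[dep].add(project)
    # keep only dependencies with a single consuming project
    return {dep: next(iter(p)) for dep, p in users.items() if len(p) == 1}

print(single_consumer_deps(consumers))
```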

This is also orthogonal to having co-installability and using containers
/ venvs. If this removes a lot of work from the release team, then I'm
all for it.

If we take such a decision, however, there's only one concern: we must
find a way so that downstream package maintainers like myself are pinged
whenever a new dependency is added, so we can prepare it in advance of
the release. For example, if Congress decided to add a new python module
which nobody uses, and which isn't packaged in Debian, I would hope for
the Congress team to warn me about it, so that the package can have
enough time to go through the Debian NEW queue, in time for the final
release. I guess RDO and SuSE people would have the same concern. I'm
not sure how to implement this in such a way that the procedure is
enforced. Ideas welcome.
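One building block for such a notification would be trivial to script:
diff the dependency names between two revisions of global-requirements
and report the additions. A rough sketch, with invented sample data
(the real check would read the two files from git):

```python
# Report dependencies newly introduced between two revisions of a
# requirements file, so downstream packagers can be warned early.
import re

def dep_names(requirements_lines):
    """Extract the distribution name from each requirement line."""
    names = set()
    for line in requirements_lines:
        line = line.strip()
        if line and not line.startswith("#"):
            # the name is everything before the first specifier,
            # marker separator, or whitespace
            names.add(re.split(r"[<>=!;\s]", line, maxsplit=1)[0])
    return names

old = ["pbr>=1.6", "six>=1.9.0"]
new = ["pbr>=1.6", "six>=1.9.0", "PuLP>=1.4.1 # MIT"]
added = dep_names(new) - dep_names(old)
print(sorted(added))   # newly introduced dependencies to flag
```

The harder part, as said above, is the social one: making sure the
notification actually reaches the downstream maintainers and is acted
upon in time for the NEW queue.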

> If we want there to be a big-tent (a debate worth having) then we're
> going to have loads of projects, loads of requirements and a
> distinct lack of standards. We're going to have to evolve to cope
> with that.

I'm all for evolving. Though as we grow, it is my strong belief that
our standards should become even stronger and better enforced. They
shouldn't get weaker, or we'll get into trouble over time.


Thomas Goirand (zigo)
