[Openstack-operators] Packaging sample config versions
Thomas Goirand
zigo at debian.org
Sat Dec 13 15:09:14 UTC 2014
On 12/12/2014 02:17 AM, Kris G. Lindgren wrote:
>> Why do you think it's a good idea to restart doing the work of
>> distributions by yourself? Why not joining a common effort?
>
> Well for a lot of reasons. Some of us carry our own patch sets
My employer (i.e. Mirantis) also has its own patch set; however, we can
only work together on "community packages", so I don't really see your
point here.
>- some of
> us want to run certain services on trunk.
I would also like to have packages following trunk. One of the reasons
would be to do gating on trunk, working together with the infra team. I
only lack the time to build that CI for the moment, but it shouldn't be
very hard to do.
> But mainly the bigger push is
> that if we come up with a standard set of openstack packages - then we can
> build a standard set of tooling around dealing/installing those packages.
It'd be super nice if the standard were what's in Debian. I am
convinced that Debian is the correct place to make it happen, as it's
not bound to any company or agenda.
> The point of this is that we are trying to build that common effort and
> not leave it up to the distro's to solve all the problems?
How is what's happening in Debian not a "common effort"? It's all
community based. I'm largely not interested in anything RPM based, so
others could work on that part if they wish.
>> On 12/09/2014 11:23 AM, Fischer, Matt wrote:
>>> Not only is the sample
>>> config a useful reference, but having it in a git repo allows me to
>>> see when new options were added or defaults were changed.
>>
>> Yeah, though if you've been doing packages for a long time, you do have
>> this information. At least, I do have it for all the last stable releases.
>
> Great - some of us also are busy actually running clouds here...
I didn't mean that "I know" (hopefully, I'm not that pretentious), but
rather that "the Git knows" or "the Debian archive knows". In other
words, the distribution has a history of all the .conf.sample files
over time, which makes it possible to know.
>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>> [...] sample config and replace it with some tox script. Which may or
>>> may not run on your current operating system without going to pip to
>>> get newer versions of stuff.
>>
>> Excuse my words, but that's bullshit. If your operating system doesn't
>> have a high enough version of one of your build-dependencies, then this
>> should be fixed. It should be your work as a package maintainer. For
>> example, if we need a newer version of tox (which is *not* the case, as
>> tox isn't really needed for generating config files), then the tox
>> package should be updated. It's as simple as that. And if you want this
>> to happen, then I would strongly suggest you get involved in Debian
>> and/or Fedora packaging, because that's where things happen.
>
> I guess from my point of view - not providing a sample config file for
> your project is bullshit - unless said project can work without any config
> file at all (even then its still bullshit). Providing a script to do it -
> ok that’s live-able. Providing documentation to use tox that installs all
> the projects deps into a venv then generates a script from those deps
> which don't match what on the system. That’s bullshit as well.
I shouldn't have used "bullshit", I can see you took it badly. Sorry.
That wasn't meant to provoke a strong reaction.
What I wanted to say is that not being able to run a specific command
in your distribution because of an outdated Python module is wrong,
simply because it's the package maintainer's job to update that module
when needed.
And no, you don't need a venv. I use tox.ini files as a kind of
packaging documentation, so that I know what shell commands to use, and
everything else happens in debian/rules, with build-dependencies, not
venv stuff.
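To illustrate, a debian/rules override could look roughly like this. It
is only a sketch: the generator config file path below is illustrative
and varies per project, but the point is to call the same command that
tox.ini documents, against the packaged Python modules:

override_dh_auto_build:
	dh_auto_build
	# Generate the sample config against the system-installed
	# (packaged) Python modules, not a venv. The --config-file
	# path is illustrative only.
	oslo-config-generator \
	    --config-file etc/nova/nova-config-generator.conf \
	    --output-file etc/nova/nova.conf.sample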
>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>> Let me give you an example. In most (if not *all*) projects you will
>> find a [keystone_authtoken] section. Its options are directly derived
>> from what's in python-keystoneclient, so the output .conf.sample file
>> will depend on that... Now let's say the maintainers of
>> python-keystoneclient would like to update something (say, update a
>> comment on one of the directives, or add a *new* directive): obviously,
>> you want that change to propagate to all projects, but *only* once we
>> have a new version of python-keystoneclient.
>
> I guess I can get that for staying on trunk. But when openstack does a
> stable release... Shouldn't the client versions be pretty well defined?
> And not just >= 1.4 - but should be capped to some specific version?
No, clients should *never* be capped. In fact, over the years, I've
often had to express my strong opinion that no Python module should
ever receive an upper version cap. Indeed, upgrading a Python module in
a distribution may be motivated by something completely unrelated to
OpenStack.
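To put it concretely, here are two illustrative requirements lines (the
version numbers are made up):

    # Capped: forces the distribution to keep an old keystoneclient.
    python-keystoneclient>=1.1.0,<1.2.0
    # Uncapped: lets the distribution move to whatever version it ships.
    python-keystoneclient>=1.1.0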
For example, the Django maintainer decided to upgrade to 1.7 because of
upstream LTS support for that version. This broke many things in Horizon
which we had to deal with: I sent about a dozen patches for it,
including some for Python modules not maintained within OpenStack
itself. Maintaining OpenStack packages is *mainly* about dealing with
these sorts of issues. The packages for the core OpenStack projects
themselves are rather easy to deal with, and unfortunately, that's only
the fun 10% of the work.
>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>> The problem that I have with this change is
>>> that in order to package a sample configuration I need to basically do
>>> the following:
>>> 1.) get the current build/commit run
>>> 2.) run python build
>>
>> No, you *never* need to care about this in the context of package
>> building (at least in Debian, this may happen by itself with the dh
>> auto sequencer).
>
> So... You don't do something like:
> https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L393-L394
> That’s interesting....
Do you mean: do I run unit tests? Of course I do. But this doesn't
include "run python build". For nova, here's the debian/rules part that
does it:
override_dh_auto_test:
ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
	./run_tests.sh -N -P
endif
though maybe I should directly use testr these days...
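If I did, the override would presumably become something like this
(untested sketch, same nocheck handling):

override_dh_auto_test:
ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
	# Initialize the test repository and run the suite in parallel.
	testr init && testr run --parallel
endif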
>>> 3.) strip away the relevant built parts of the package
>>
>> I never did have to "strip away" anything. What are you talking about
>> exactly here?
>
> Should be irrelevant... Sorry...
> Something like:
> https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L407
If what you're thinking about is stripping out the part that builds the
Sphinx docs, I wouldn't agree with doing that.
Having had a look at it, this Anvil repository seems very much Red Hat
centric. I'm not really convinced that moving all the spec files into a
single repository helps (I took the opposite approach of having one git
repository per package, like everyone does in Debian anyway). It also
seems to consider only the core project packages, which is IMO a very
small part of the needed work. And it seems to mix deployment stuff and
packaging, which doesn't seem like the best approach either.
>>> 4.) install on the build machine all the python runtime deps that I
>>> am going to need for this package
>>
>> You don't need to do that. This is taken care of by your
>> build-dependencies (which btw are not trivial to get right). And your
>> build environment (cowbuilder in my case, but maybe sbuild on the buildd
>> network or many others) will install them automatically. If some
>> (build-)dependencies are missing, just fix that...
>
> Anvil figures out the build dependencies for me
As far as I know, there's no automated tool that can correctly figure
out (build-)dependencies. This has to be done by hand, because one
needs to check which version is currently available in the current
stable distribution, and sometimes simply avoid versioned dependencies
when possible.
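For example, before adding a versioned build-dependency, I would check
what the stable distribution already ships with something like this
(the package name here is just an example):

    apt-cache policy python-pbr   # version visible to the build machine
    rmadison python-pbr           # versions in the Debian archive suites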
> - it also figures out if
> those deps are satisfiable via yum, if not downloads the version from pip
> and packages that.
Automated packaging, while it sounds like a good idea at first, doesn't
work in practice. Every package has its specifics, like whether or not
it supports Python 3 (sometimes a few patches are needed to make it
work under Python 3, and I write lots of them), and different ways to
run the unit tests. Also, for some packages, you need to package their
dependencies as well (sometimes this is recursive work: for a single
Python module, you may end up packaging 10 other modules, sometimes
just to be able to run the unit tests). So basically: I don't think
this approach works.
> So for me requirements is pretty trivial.
It unfortunately isn't at all.
> But to this
> point and your previous one where you ran everything on a vm to figure out
> WTF it was doing. The point is now to get a sample config I need to as
> part of the package build process - build/install all the *RUNTIME* deps
> for that service on the box so I can generate a config file.
I'm not sure I'm following. But for every package, you need to install
absolutely all build AND runtime dependencies, and if they don't exist
in the distribution, you also need to package them.
> What I am getting from you is that you basically install all the runtime
> deps for the project on your build machine and then build the config using
> the bash script.
Runtime dependencies are needed to run the unit tests, so yes, they get
installed, since the unit tests are run at package build time. On top of
that, you have specific build-dependencies for running the unit tests,
and possibly for other things (like the config file generation we're
talking about right now, or the Sphinx docs, for example).
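In debian/control terms, that ends up looking roughly like this (a
trimmed, purely illustrative excerpt, not nova's real list):

Build-Depends: debhelper (>= 9),
 openstack-pkg-tools,
 python-all,
 python-pbr,
 python-setuptools
Build-Depends-Indep: python-keystoneclient,
 python-oslo.config,
 python-sphinx,
 python-subunit,
 python-testrepository,
 testrepository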
>>> 8.) add sample configuration generated in step 6 to the package.
>>
>> Why wouldn't the process of building your package include building a
>> sample configuration file? I don't get your reasoning here...
>
> Getting to the point that what tox installs and what are available on the
> system are different. So the only way to get a valid config is to either
> use the same package versions that tox does or to duplicate what "tox" is
> suppose to do for you.
I'm sorry, but I have to say that this is incorrect. What you need to
have installed is the exact same environment your package will run in,
not what tox wants. This is why you should avoid running pip install at
all costs: doing so, you may end up with a wrong .conf.sample file, one
which doesn't match the Python modules your package will actually run
against.
> Which sounds like you duplicate what tox does for
> you to avoid that mess.
That's not the only reason. The other is that, by policy, you should
*never ever* need an internet connection to build a package. This is a
strong Debian policy requirement, which I both agree with and enforce
at the technical level for all of the packages I produce. If there is a
package for which that's not the case, then it's a bug which should be
reported on the Debian bug tracker.
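In practice, the build environment itself can enforce this; for
instance, pbuilder/cowbuilder can be told to drop network access during
the build (an illustrative ~/.pbuilderrc line):

    # Build inside the chroot with no network access at all.
    USENETWORK=no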
>>> Then I need to make sure I also package all of the python-versions
>>> that was used in step 4, making sure that I don’t have conflicting
>>> system level dependencies from other openstack projects.
>>
>> Of course all build-dependencies and runtime dependencies need to be
>> packaged, and available in the system. That's the basics of packaging,
>> no? Making sure this happens is about 90% of my Debian packaging work.
>> So far, I haven't seen anyone in the community volunteering to help on
>> packaging Python modules. Why not focus on that rather than wasting your
>> time on non-issues such as generating sample config files? I'd
>> appreciate a lot some help you know...
>
> That is what this effort is for? Coming up with tooling to package
> openstack and its python modules and if we can't simply include a sample
> config like we have done for the past 3 years, then the tooling (which we
> are trying to consolidate) should help us here.
As I wrote before: you need runtime deps for unit tests. Again, I see no
issue with the current way config files are generated.
>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>> I don’t think its too much to ask for each project to include a
>>> script that will build a venv that includes tox and the other
>>> relevant deps to build the sample configuration.
>>
>> A venv, seriously?!?
>>
>> No, it's not that. What needs to happen is to have an easy,
>> *OpenStack unified* way of building the config files, preferably with
>> things integrated directly into a new oslo.config command line. Not
>> just a black-magic tox thing, but something documented. But I believe
>> that's already the direction being taken (did you notice
>> /usr/bin/oslo-config-generator?).
>
> Or projects could maintain a configuration file... But I guess if
> everyone uses the bash script for config generation then I could work with
> that...
Not everyone does. The Canonical people don't really care and don't
ship a full nova.conf.sample with their package. On my side, I only
provide it as documentation, and produce my own nova.conf (which gets
installed in /etc/nova) with defaults which I find convenient and close
to the install guide. I'm open to discussion about this though; maybe
it'd be best to install a full nova.conf.sample (modified with better
defaults) as /etc/nova/nova.conf directly.
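By "better defaults" I mean the handful of obvious settings from the
install guide, something along these lines (illustrative only, not my
actual file):

    [DEFAULT]
    auth_strategy = keystone
    rpc_backend = rabbit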
Cheers,
Thomas Goirand (zigo)