[Openstack-operators] Packaging sample config versions

Kris G. Lindgren klindgren at godaddy.com
Mon Dec 15 16:33:24 UTC 2014



On 12/13/14, 8:09 AM, "Thomas Goirand" <zigo at debian.org> wrote:

>On 12/12/2014 02:17 AM, Kris G. Lindgren wrote:
>>> Why do you think it's a good idea to restart doing the work of
>>> distributions by yourself? Why not joining a common effort?
>> 
>> Well for a lot of reasons.  Some of us carry our own patch sets
>
>My employer (i.e. Mirantis) also has its own patch set; however, we can
>only work together on "community packages", so I don't really see your
>point here.
>
>>- some of
>> us want to run certain services on trunk.
>
>I also would like to have packages following trunk. One of the reasons
>would be to do gating on trunk, working together with the infra team. I
>only lack time to build that CI for the moment, but it shouldn't be very
>hard to build.
>
>> But mainly the bigger push is
>> that if we come up with a standard set of openstack packages - then
>> we can build a standard set of tooling around dealing/installing
>> those packages.
>
>It'd be super nice if the standard were what's in Debian. I am
>convinced that Debian is the correct place to make it happen, as it's
>not bound to any company or agenda.
That's great.  But other people will want it to be based on RHEL -
others on Ubuntu, and some people will want it on SLES.  I understand
Debian makes your life easier, but in Paris it was already determined
that a one size fits all approach won't work.  Whatever is done needs
to handle both deb packages and rpms.
>
>> The point of this is that we are trying to build that common effort and
>> not leave it up to the distros to solve all the problems?
>
>How is what's happening in Debian not a "common effort"? It's all
>community based. I'm largely not interested in anything RPM based, so
>others could work on that part if they wish.

I never said it wasn't - it's just that, for *ME*, what you are doing
for Debian packages has very little carry-over to what I have to do
for rpm packages.
 
>
>>> On 12/09/2014 11:23 AM, Fischer, Matt wrote:
>>>> Not only is the sample
>>>> config a useful reference, but having it in a git repo allows me to
>>>> see when new options were added or defaults were changed.
>>>
>>> Yeah, though if you've been doing packages for a long time, you do have
>>> this information. At least, I do have it for all the last stable
>>> releases.
>> 
>> Great - some of us also are busy actually running clouds here...
>
>I wasn't meaning that "I know" (hopefully, I'm not that pretentious). But
>rather "the Git knows" or "the Debian archive knows". In other words,
>the distribution has a history of all .conf.sample over time, which
>makes it possible to know.
>
>>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>>> [...] sample config and replace it with some tox script. Which may or
>>>> may not run on your current operating system without going to pip to
>>>> get newer versions of stuff.
>>>
>>> Excuse my words, but that's bullshit. If your operating system doesn't
>>> have a high enough version of one of your build-dependencies, then
>>> this should be fixed. That's your job as a package maintainer. For
>>> example, if we need a newer version of tox (which is *not* the case, as
>>> tox isn't really needed for generating config files), then the tox
>>> package should be updated. It's as simple as that. And if you want this
>>> to happen, then I would strongly suggest you get involved in Debian
>>> and/or Fedora packaging, because that's where things happen.
>> 
>> I guess from my point of view - not providing a sample config file
>> for your project is bullshit - unless said project can work without
>> any config file at all (even then it's still bullshit).  Providing a
>> script to do it - ok, that's live-able.  Providing documentation to
>> use tox that installs all the project's deps into a venv, then
>> generates a config from those deps which don't match what's on the
>> system - that's bullshit as well.
>
>I shouldn't have used "bullshit"; I can see you took it badly. Sorry.
>It wasn't meant to provoke a strong reaction.
>
>What I wanted to say is that it's wrong if you can't run a specific
>command in your distribution because of an outdated Python module,
>simply because it's the package maintainer's job to update it when
>needed.
>
>And no, you don't need a venv. I use tox.ini files as a kind of
>packaging documentation, so I know what shell command to use, and
>everything else happens in debian/rules, with build-dependencies, not
>venv stuff.
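
(For readers following along, the pattern Thomas describes amounts to
lifting the command out of tox.ini into a debian/rules target - a
sketch only, not his actual packaging, and the namespace name here is
assumed:

override_dh_auto_build:
        dh_auto_build
        oslo-config-generator --namespace nova \
                --output-file etc/nova.conf.sample

with oslo-config-generator supplied by a python-oslo.config
build-dependency rather than a venv.)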
>
>>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>> Let me give you an example. In most (if not *all*) projects you
>>> will find some [keystone_authtoken] section. These are directly
>>> derived from what's in python-keystoneclient. Indeed, the output
>>> .conf.sample file will depend on that... Now let's say the
>>> maintainers of python-keystoneclient would like to update something
>>> (say, update a comment on one of the directives, or add a *new*
>>> directive); obviously, you want that change to propagate to all
>>> projects, but *only* once we have a new version of
>>> python-keystoneclient.
>> 
>> I guess I can get that for staying on trunk.  But when OpenStack does a
>> stable release... Shouldn't the client versions be pretty well defined?
>> And not just >= 1.4, but capped to some specific version?
>
>No, clients should *never* be capped. And in fact, over the years,
>I've often had to express my strong opinion that no Python module
>should ever receive an upper bound cap. Indeed, upgrading a Python
>module in a distribution could be motivated by something completely
>unrelated to OpenStack.

Which is one point of view: as a packager for Debian, you need to care
about that stuff.  Other people building OpenStack packages (ME) put
them onto machines whose basically only function is to house a
specific version of OpenStack, and since I am using RHEL, the
likelihood of the Python versions changing because of something else
is pretty small.  Either way, it looks like there were some recent
commits to try to add version capping to the gate:
https://github.com/openstack/requirements/commit/ad967b1cdb23181de818 - so
apparently I am not the only one thinking this way ;-).
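
To illustrate in requirements.txt terms what capping means (version
numbers here are made up):

# uncapped: whatever newer client is around gets pulled in
python-keystoneclient>=0.10.0
# capped: bounded to the versions the release was actually tested with
python-keystoneclient>=0.10.0,<0.12.0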
>
>For example the Django maintainer decided to upgrade to 1.7, because of
>upstream LTS support for that version. This broke many things in Horizon
>which we had to deal with: I sent about a dozen patches for it, including
>those for some Python modules not maintained within OpenStack itself.
>
>Maintaining OpenStack packages is *mainly* about dealing with these
>sorts of issues. The packages of the core OpenStack projects
>themselves are rather easy to deal with, and unfortunately that's only
>the fun 10% of the work.
>
>>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>>> The problem that I have with this change is
>>>> that in order to package a sample configuration I need to basically do
>>>> the following:
>>>> 1.) get the current build/commit run
>>>> 2.) run python build
>>>
>>> No, you *never* need to care about this in the context of a package
>>> build (at least in Debian, this may happen by itself with the dh
>>> auto sequencer).
>> 
>> So... You don't do something like:
>> 
>> https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L393-L394
>> That's interesting....
>
>Do you mean: do I run unit tests? Of course I do. But this doesn't
>include "run python build". For nova, here's the debian/rules part that
>does it:
>
>override_dh_auto_test:
>ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
>        ./run_tests.sh -N -P
>endif
>
>though maybe I should directly use testr these days...
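
(For what it's worth, the testr equivalent would presumably be
something along the lines of "testr run --parallel", though I haven't
checked that against nova's tree.)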
>
>>>> 3.) strip away the relevant built parts of the package
>>>
>>> I never did have to "strip away" anything. What are you talking about
>>> exactly here?
>> 
>> Should be irrelevant... Sorry...
>> Something like: 
>> 
>> https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L407
>
>If what you're thinking about is stripping out the part that builds the
>sphinx doc, I wouldn't agree with doing that.
>
>Then, having a look at it, this Anvil repository seems very much Red Hat
>centric. I'm not really convinced that moving all the spec files into a
>single repository helps (I took the opposite approach of having one git
>repository per package, like everyone is doing in Debian anyway). And
>also it seems to consider only core project packages, which is IMO a
>very small part of the needed work. Also, it seems to mix deployment
>stuff and packaging, which doesn't seem the best approach either.

That's because it figures out the Python dependencies and packages
them for me, which was already stated on another mailing list.  Only a
few Python modules have problems being packaged this way - the others
can be packaged using a pretty standard template.  I can tell you that
in 1+ year of deploying OpenStack from my own packages I have built
exactly 0 Python dependency packages by hand (and I am not using the
RDO repo to avoid building them either).  Then again, I don't have to
deal with distribution-level issues, since my servers that get
OpenStack packages are basically single purpose.  I believe this is
also why there is a desire to put OpenStack into venvs per service, or
to containerize each OpenStack service.  There are tools out there
that do exactly this (giftwrap, anvil) and I am sure some others that
I am forgetting at this point.
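
To sketch the per-module step that anvil and giftwrap automate across
the whole dependency tree (an illustration using fpm, not anvil's
actual internals):

# turn a module from pip into an rpm in one shot
fpm -s python -t rpm python-keystoneclient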
>
>>>> 4.) install on the build machine all the python runtime deps that I
>>>> am going to need for this package
>>>
>>> You don't need to do that. This is taken care of by your
>>> build-dependencies (which btw are not trivial to get right). And your
>>> build environment (cowbuilder in my case, but maybe sbuild on the
>>> buildd network or many others) will install them automatically. If some
>>> (build-)dependencies are missing, just fix that...
>> 
>> Anvil figures out the build dependencies for me
>
>As far as I know, there's no automated tool that can do the manual work
>of figuring out (build-)dependencies correctly. This has to be manual
>work, because one needs to check what version is currently available
>in the current stable distribution, and sometimes just avoid version
>dependencies when possible.

Again - for me, Anvil does this; other people have created giftwrap,
and I am sure there are some other projects that automatically figure
out dependencies, build them, and package them.  But with anvil over
the past year I have built exactly 0 Python dependency packages by
hand.  I have updated exactly 0 spec files for those updated
dependencies as well.  Anvil figures out the requirements across all
the OpenStack projects you are building.  It then, if need be, fixes
conflicting version requirements (I am sure you have run into this in
the past, where one project wanted a version that conflicted with
another project).  Then it builds and packages the dependency
packages, then the OpenStack-specific packages.  In the end I have two
repos: one with just the core packages in it, and a dependency package
repo.
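
As a concrete (made up) illustration: if nova wants oslo.config>=1.4.0
and glance wants oslo.config>=1.2.0,<1.5.0, the merged requirement
becomes oslo.config>=1.4.0,<1.5.0, and a single dependency package is
built that satisfies both projects.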
>
>> - it also figures out if those deps are satisfiable via yum, and if
>> not, downloads the version from pip and packages that.
>
>Doing automated packaging, while a good idea at first glance, doesn't
>work in practice. Every package has its specifics, like whether or not
>it supports Python 3 (sometimes, a few patches are needed to make it
>work in Python 3, and I do lots of them), and different ways to run
>the unit tests. Also, for some packages, you need to package their
>dependencies as well (sometimes it is recursive work, and for a single
>Python module, you may end up packaging 10 other modules, sometimes
>just to be able to run the unit tests). So basically: I don't think
>this approach works.

So far, 1+ year, 3 versions of OpenStack, and a distro change tell me
that you are wrong, because it has worked for *ME*.
>
>> So for me requirements is pretty trivial.
>
>It unfortunately isn't at all.
>
>> But to this point, and your previous one where you ran everything on
>> a vm to figure out WTF it was doing: the point is that now, to get a
>> sample config, I need - as part of the package build process - to
>> build/install all the *RUNTIME* deps for that service on the box so
>> I can generate a config file.
>
>I'm not sure I'm following. But for every package, you need to install
>absolutely all build AND runtime dependencies, and if they don't exist
>in the distribution, you also need to package them.
>
>> What I am getting from you is that you basically install all the runtime
>> deps for the project on your build machine and then build the config
>> using the bash script.
>
>Runtime dependencies are needed to run the unit tests, so yes, they get
>installed, since the unit tests are run at package build time. On top of
>that, you have specific build-dependencies for running the unit tests,
>and possibly other things (like the config file generation which we're
>talking about right now, or the sphinx docs, for example).

We (me?) do testing/unit testing completely separately from packaging,
as I would rather run tests against the system as I am deploying it
versus at build time.  Right/wrong/otherwise, that's how I am doing it.

>
>>>> 8.) add sample configuration generated in step 6 to the package.
>>>
>>> Why wouldn't the process of building your package include building a
>>> sample configuration file? I don't get your reasoning here...
>> 
>> Getting to the point that what tox installs and what is available on
>> the system are different.  So the only way to get a valid config is
>> to either use the same package versions that tox does, or to
>> duplicate what "tox" is supposed to do for you.
>
>I'm sorry, but I have to say that this is incorrect. What you need to
>have installed is the exact same environment where your package will
>run, not what tox wants. Which is why you should avoid at all costs
>running some pip install things, because doing so, you may end up having
>a wrong .conf.sample file (which will not match the Python modules that
>your package will run on).

Sounds to me like we agree.  You are installing all the dependent
Python modules on the build server, then generating the config.
That's also what tox does, except it does it in a venv and doesn't
care about what system-level versions you have.  So you are
"duplicating" what tox is doing for you.  I get that the end result is
different: you get a config file that exactly matches the versions of
the modules you are packaging, whereas with tox the end result may not
be usable.
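
Concretely, the "duplicated" step is just running the generator
against whatever is installed system-wide instead of against a venv -
with the oslo-incubator script that is roughly (flags from memory, so
double-check them):

./tools/config/generate_sample.sh -b . -p nova -o etc/nova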
>
>> Which sounds like you duplicate what tox does for
>> you to avoid that mess.
>
>That's not the only reason. The other is that, by policy, you should
>*never ever* need an internet connection to build a package. This is a
>strong Debian policy requirement, which I both agree on and support on
>the technical level for all of the packages I produce. If there is a
>package for which that's not the case, then this is a bug which shall be
>reported on the Debian bug tracker.

Again - I don't have this restriction, and I guess because of that,
building packages and dependent packages is apparently much easier for
me.
>
>>>> Then I need to make sure I also package all of the python versions
>>>> that were used in step 4, making sure that I don't have conflicting
>>>> system level dependencies from other openstack projects.
>>>
>>> Of course all build-dependencies and runtime dependencies need to be
>>> packaged, and available in the system. That's the basics of packaging,
>>> no? Making sure this happens is about 90% of my Debian packaging work.
>>> So far, I haven't seen anyone in the community volunteering to help
>>> on packaging Python modules. Why not focus on that rather than
>>> wasting your time on non-issues such as generating sample config
>>> files? I'd very much appreciate some help, you know...
>> 
>> That is what this effort is for: coming up with tooling to package
>> OpenStack and its Python modules.  And if we can't simply include a
>> sample config like we have done for the past 3 years, then the
>> tooling (which we are trying to consolidate) should help us here.
>
>As I wrote before: you need runtime deps for unit tests. Again, I see no
>issue with the current way config files are generated.
>
>>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>>> I don't think it's too much to ask for each project to include a
>>>> script that will build a venv that includes tox and the other
>>>> relevant deps to build the sample configuration.
>>>
>>> A venv, seriously?!?
>>>
>>> No, it's not that. What needs to happen is to have an easy and
>>> *OpenStack unified way* of building the config files, preferably
>>> with things integrated directly into a new oslo.config command
>>> line. Not just a black magic tox thing, but something documented.
>>> But I believe that's already the direction being taken (did you
>>> notice /usr/bin/oslo-config-generator?).
>> 
>> Or projects could maintain a configuration file...  But I guess if
>> everyone uses the bash script for config generation then I could
>> work with that...
>
>Not everyone does. Canonical people don't really care and don't ship a
>full nova.conf.sample with their package. On my side, I only provide it
>as documentation, and produce my own nova.conf (which gets installed
>in /etc/nova) with defaults which I find convenient and close to the
>install-guide. I'm open to discussion about this though, maybe it'd be
>best to install a (modified with better defaults) full nova.conf.sample
>as /etc/nova/nova.conf directly.

Honestly, I have no preference on this.  I am going to change the
defaults anyway to customize it to something that works for me.  So
either way I need to modify the config file, and the configuration
management stuff does this for me.  So if people want to put extra
work into making a config file that they think is going to work well
for everyone, please do so; to me, that's a dead end.  What I would
rather see is feedback from operators to devs to get some defaults
changed to more reasonable values.  I am specifically thinking about
certain periodic tasks that everyone after a certain size is either
changing or turning off.
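
For example (a real nova option, though the value here is made up),
the kind of default I mean is:

[DEFAULT]
# periodic task that larger deployments commonly slow down or turn off
heal_instance_info_cache_interval = 600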

>
>Cheers,
>
>Thomas Goirand (zigo)
>
>
>_______________________________________________
>OpenStack-operators mailing list
>OpenStack-operators at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



