[Openstack-operators] Packaging sample config versions

Thomas Goirand zigo at debian.org
Tue Dec 16 08:23:18 UTC 2014


On 12/16/2014 12:33 AM, Kris G. Lindgren wrote:
> That's great.  But other people will want it to be based on RHEL,
> others on Ubuntu, and some people will want it on SLES.  I understand
> Debian makes your life easier, but in Paris it was already determined
> that a one-size-fits-all approach won't work.  Whatever is done needs
> to handle both deb packages and rpms.

Sure! Do your RPMs if you like, though I'd be happy to have new
contributors to the Debian side.

>>> I guess I can get that for staying on trunk.  But when OpenStack does
>>> a stable release... shouldn't the client versions be pretty well
>>> defined?  And not just >= 1.4, but capped to some specific version?
>>
>> No, clients should *never* be capped. And in fact, over the years, I've
>> often had to express my strong opinion that no Python module should
>> ever receive an upper-bound cap. Indeed, upgrading a Python module in a
>> distribution could be motivated by something completely unrelated to
>> OpenStack.
> 
> Which is one point of view: as a packager for Debian you need to care
> about that stuff.  For other people building OpenStack packages (me),
> they go onto machines whose basically only function is to house a
> specific version of OpenStack, and since I am using RHEL, the
> likelihood of the Python versions changing because of something the
> distro did is pretty small.  Either way, it looks like there were some
> recent commits to try to add version capping to the gate:
> https://github.com/openstack/requirements/commit/ad967b1cdb23181de818 - so
> apparently I am not the only one thinking this way ;-).

This kind of capping is fine. What's not fine is to cap, let's say,
Django<1.7 or SQLAlchemy<=0.7.99 (to take real examples from the past
which I had to deal with) and believe that OpenStack is alone and that
the distribution will adapt. Even in the Red Hat world, that's not what
is happening.
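
To make the distinction concrete, here is roughly what the difference
looks like in a requirements.txt (the specifiers below are only
illustrative, not copied from any actual project):

    # Hard upper bounds like these force the distribution to hold back
    # a library that the rest of the archive may need to move forward:
    Django<1.7
    SQLAlchemy>=0.7.8,<=0.7.99

    # Plain lower bounds leave the distribution free to upgrade:
    Django>=1.6
    SQLAlchemy>=0.7.8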

>> Then, having a look at it, this Anvil repository seems very much Red
>> Hat-centric. I'm not really convinced that moving all the spec files
>> into a single repository helps (I took the opposite approach of having
>> one git repository per package, like everyone is doing in Debian
>> anyway). It also seems to consider only core project packages, which is
>> IMO a very small part of the needed work. Also, it seems to mix
>> deployment stuff and packaging, which doesn't seem the best approach
>> either.
> 
> That's because it figures out the Python dependencies and packages them
> for me, which was already stated on another mailing list.  Only a few
> Python modules have problems being packaged this way - the others can
> be packaged using a pretty standard template.  I can tell you that in
> 1+ year of deploying OpenStack from my own packages I have built
> exactly 0 Python dependency packages (and I am not using the RDO repo
> to avoid building them either).  Then again, I don't have to deal with
> distribution-level issues, since my servers that get OpenStack packages
> are basically single-purpose.

Sure, you may achieve some result the way you do, which may be OK for
what you do. Though it will always have a more limited scope done this
way. Also, having packages within a distribution like Debian ensures
better quality, thanks to the multiple tests that are done archive-wide,
a single place for bug reports for users (not only of OpenStack), and
more peer review.

> I believe this is why there is also a desire to put OpenStack into
> venvs for each service, or to do containerization around an OpenStack
> service.  There are tools out there that do exactly this (giftwrap,
> Anvil), and I am sure there are some others that I am forgetting at
> this point.

Ouch! :)
Feel free to do it this way if you like, but I won't be the one
recommending upgrading 15 venvs on a single machine when a security
issue is raised.
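
To make the concern concrete: with one shared system copy, a security
fix is a single package upgrade, while with per-service venvs it turns
into something like the loop below (the paths and the library name are
purely hypothetical):

    # One vulnerable library, one embedded copy per venv to track down,
    # upgrade and restart:
    for venv in /opt/openstack/*/venv; do
        "$venv/bin/pip" install --upgrade 'PyYAML>=3.11'
    done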

>> As far as I know, there's no automated tool that can do the manual work
>> of figuring out (build-)dependencies correctly. This has to be manual
>> work, because one needs to check which version is currently available
>> in the current stable distribution, and sometimes just avoid versioned
>> dependencies when possible.
> 
> Again - for me, Anvil does this; other people have created giftwrap,
> and I am sure there are some other projects that automatically figure
> out dependencies, build them, and package them.  But with Anvil, over
> the past year I have built exactly 0 Python dependency packages by
> hand.  I have updated exactly 0 spec files for those updated
> dependencies as well.  Anvil figures out the requirements across all
> the OpenStack projects you are building.  It then, if need be, fixes
> conflicting version requirements (I am sure that you know about this
> from the past, where one project wanted a version that conflicted with
> another project).  Then it builds and packages the dependency packages,
> then the OpenStack-specific packages.  In the end I have two repos: the
> repo with just the core packages in it, and then a dependency package
> repo.

Does Anvil understand that when we needed python-netaddr>=0.7.6, no
version dependency was needed in the package, but since we now need
>=0.7.11 (which isn't in Wheezy), the version must be there? Does
Anvil know that writing python-pbr (>= 0.6) is fine, and that it
shouldn't care about the "pbr>=0.6,!=0.7,<1.0" declaration that is in
the requirements.txt? Did Anvil understand that Debian transitioned
from libvirt-bin to libvirt-daemon-system? What about websockify >= 0.6
with an added Debian-specific patch to support that version so that we
don't have the zombie process issue? How does Anvil detect which unit
test system a given Python module is using? Does it fix
distribution-specific issues with these unit tests by itself? Or does
Anvil just skip any kind of unit tests in Python modules?
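
To show the kind of judgement call I mean, here is roughly how those two
requirements end up expressed in debian/control (illustrative lines, not
copied from an actual package):

    # requirements.txt says: netaddr>=0.7.11
    # Wheezy's python-netaddr is older than that, so the versioned
    # dependency has to be kept:
    Depends: python-netaddr (>= 0.7.11)

    # requirements.txt says: pbr>=0.6,!=0.7,<1.0
    # The != and < parts can be dropped, because the versions they
    # exclude never end up in the archive anyway:
    Depends: python-pbr (>= 0.6)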

>> Doing automated packaging, while being a good idea at first, doesn't
>> work in practice. Every package has specific things, like support (or
>> not) for Python 3 (sometimes a few patches are needed to make it work
>> in Python 3, and I do lots of them), and different ways to run the unit
>> tests. Also, for some packages, you also need to package their
>> dependencies (sometimes it is recursive work, and for a single Python
>> module, you may end up packaging 10 other modules, sometimes just to be
>> able to run the unit tests). So basically: I don't think this approach
>> works.
> 
> So far, 1+ year, 3 versions of OpenStack, and a distro change tell me
> that you are wrong, because it has worked for *ME*.

Ok, good for *YOU* then... :)

>> Runtime dependencies are needed to run the unit tests, so yes, they get
>> installed, since the unit tests are run at package build time. On top of
>> that, you have specific build-dependencies for running the unit tests,
>> and possibly other things (like the config file generation which we're
>> talking about right now, or the Sphinx docs, for example).
> 
> We (me?) do testing/unit testing completely separately from packaging,
> as I would rather run tests against the system as I am deploying it
> versus at build time.  Right/wrong/otherwise, that's how I am doing it.

Running unit tests at build time makes it possible to ensure that we
have a correct set of dependencies. When running them with other things
installed alongside (e.g. not in a throw-away chroot), some other stuff
may get pulled in automatically, and you may miss a dependency.
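
As a minimal sketch of what "at build time" means here (a hypothetical
debian/rules override, assuming the module uses testrepository; not
copied from any of my packages):

    #!/usr/bin/make -f

    %:
    	dh $@ --with python2

    override_dh_auto_test:
    	# This runs inside the clean build chroot (sbuild/pbuilder), so
    	# only the declared Build-Depends are available: a missing
    	# dependency makes the test suite fail instead of being silently
    	# pulled in from the network.
    	testr init && testr run --parallel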

>>> Getting to the point that what tox installs and what is available on
>>> the system are different.  So the only way to get a valid config is to
>>> either use the same package versions that tox does, or to duplicate
>>> what "tox" is supposed to do for you.
>>
>> I'm sorry, but I have to say that this is incorrect. What you need to
>> have installed is the exact same environment where your package will
>> run, not what tox wants. Which is why you should avoid at all costs
>> running some pip install things, because by doing so, you may end up
>> with a wrong .conf.sample file (which will not match the Python modules
>> that your package will run on).
> 
> Sounds to me like we agree.  You are installing all the dependent
> Python modules on the build server, then generating the config.  That's
> also what tox does, except it does it in a venv and doesn't care about
> what system-level versions you have.  So you are "duplicating" what tox
> is doing for you.  I get that the end result is different: you get a
> config file that exactly matches the versions of the modules you are
> packaging, and with tox the end result may not be usable.

From the distribution point of view, using something like "pip install"
at build time is simply not acceptable anyway. And no, I'm not
duplicating what tox does; this is also needed work to make sure that
dependencies (and their versions) are written correctly (see above).
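
For what it's worth, here is a rough sketch of how the sample config can
be generated at build time against the modules the package will actually
run with, assuming the project already supports oslo-config-generator
(the namespace and output path are only illustrative):

    # Run inside the build chroot, against the system-installed Python
    # modules declared as Build-Depends - never inside a tox venv:
    oslo-config-generator --namespace nova.conf \
        --output-file etc/nova/nova.conf.sample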

>> That's not the only reason. The other is that, by policy, you should
>> *never ever* need an Internet connection to build a package. This is a
>> strong Debian policy requirement, which I both agree with and support
>> at the technical level for all of the packages I produce. If there is a
>> package for which that's not the case, then this is a bug which shall
>> be reported on the Debian bug tracker.
> 
> Again - I don't have this restriction, and I guess because of it,
> building packages and dependency packages is apparently much easier for
> me.

You're also producing something of a lower quality this way. If you
don't mind, then OK... (and I won't go into the details of why it is of
lower quality, I already wrote about it, and I just hope you understand
why I'm saying it: it seems you do, but you're satisfied with the
result, so it's OK...)

> Honestly - I have no preference on this.  I am going to change the
> defaults anyway to customize it to something that works for me.  So
> either way I need to modify the config file, and the configuration
> management stuff does this for me.  So if people want to put extra work
> into making a config file that they think is going to work well for
> everyone, please do so; to me, that's a dead end.  What I would rather
> see is feedback from operators to devs to get some defaults changed to
> more reasonable values.  I am specifically thinking about certain
> periodic tasks that everyone above a certain size is either changing or
> turning off.

I'd love to see this kind of change pushed upstream, so that everyone
has the benefits of it.

Cheers,

Thomas Goirand (zigo)



