[Openstack] Issues with Packaging and a Proposal

Thomas Goirand thomas at goirand.fr
Thu Aug 25 15:21:53 UTC 2011


On 08/25/2011 06:38 PM, Soren Hansen wrote:
> That doesn't mean that using a linux distro (not a PPA, an actual
> distro) as a distribution channel is the wrong thing to do. It just means
> that we don't trust our own code quality very much. When we released
> Cactus, we were reasonably happy with it. After working on Diablo for a
> little bit, we realised how many bugs we'd fixed, so if people came and
> asked for help, it was entirely likely that their problems were solved
> in Diablo, so we asked them to try that.

Why weren't fixes backported to Cactus, then? Sorry, I don't know Ubuntu
well enough. But in Debian, we have point releases for that (and, in the
meantime, stable-proposed-updates); isn't there something like that in
Ubuntu for the current stable? (I honestly don't know.)

>> (don't get me started on the broken 2.6.6-3~ thing)
> 
> I'll probably regret this, but what's the problem?

I don't even understand what this 2.6.6-3~ thing is. Please explain!
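(The tilde sorting itself is straightforward: in Debian version ordering,
`~` sorts before anything, even the end of the string, so `2.6.6-3~`
precedes `2.6.6-3`, which lets a final upload supersede the pre-release.
So I assume the breakage is elsewhere. A rough sketch of just that rule,
ignoring dpkg's separate handling of digit runs:)

```python
from itertools import zip_longest

def deb_cmp(a: str, b: str) -> int:
    """Simplified Debian version comparison.

    Models only the '~' rule: '~' sorts before everything, even the
    empty string.  Letters sort before other characters, as in real
    dpkg, but digit runs are compared character-wise here, so this is
    only correct when the versions differ in their non-digit parts.
    """
    def order(c: str) -> int:
        if c == '~':
            return -1            # tilde: earlier than anything
        if c == '':
            return 0             # end of string
        if c.isalpha():
            return ord(c)        # letters in ASCII order
        return ord(c) + 256      # other characters sort after letters
    for ca, cb in zip_longest(a, b, fillvalue=''):
        oa, ob = order(ca), order(cb)
        if oa != ob:
            return -1 if oa < ob else 1
    return 0

print(deb_cmp('2.6.6-3~', '2.6.6-3'))   # prints -1: the ~ version is older
```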

>> This is not unique to PPAs. Our repo can be just as trustworthy. In
>> fact, we're shipping software which runs large portions of your data
>> center, so if you don't trust us to make it, well, you're sort of
>> screwed.
> 
> Just because we're good at writing OpenStack doesn't mean we're good at
> maintaining Debian-style build infrastructure. They're very different
> things. I trust Canonical's buildd admins and maintainers much more than
> you or me to run these things. Sorry if you find that offensive, but
> that's the way I feel.

I believe that if we need help from the Debian community to maintain a
buildd for OpenStack, it can be found; I can ask the DSA about it. The
only thing that might then be needed would be sponsorship of the
hardware. If Rackspace / Dell / Intel provides it, I don't think it will
be an issue to find people to help maintain the buildd. At least it
doesn't hurt to ask.

>> BTW - the "can't reach the internet from the builders" is only really a
>> concern in a project like Debian and Ubuntu where you have random people
>> uploading random un-reviewed things.
> 
> How do you figure that? As you pointed out yourself, the Sphinx build
> tried to connect to the Internet. I don't know exactly what it was
> trying to fetch, but it seems obvious that whatever it does fetch
> probably factors into the build in one way or another. If what it gets
> back is broken (or worse: compromised) that might break our stuff as
> well.

I agree with Soren here. Having the buildd cut off from the internet
ensures that a given package isn't trying to access the outside. It
*must* not, and we have to make sure of it, since it is quite a common
mistake. But is that a hard thing to do? (Sorry, I've never set up a
buildd, so I don't know what it involves...)
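If it helps, for a local pbuilder-based build at least, denying network
access to the chroot is just a configuration knob (illustrative; a real
buildd setup naturally involves more than this):

```
# ~/.pbuilderrc -- illustrative: forbid network access inside the
# build chroot
USENETWORK=no

# Alternatively, a one-off build can be run in its own empty network
# namespace (requires root):
#   unshare --net dpkg-buildpackage -us -uc
```

Either way, a package that tries to fetch things during the build fails
loudly instead of silently depending on the outside world.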

>> No. After Cactus was released, it didn't build because of issues in
>> python-eventlet related to the OpenSSL transition.
> Yes. *Because of issues in python-eventlet*. Forking *the packaging*
> because of lack of Debian support when the real problem is with the
> state of OpenStack's dependencies isn't going to help.

The issue with python-eventlet was due to the removal of some functions
in OpenSSL, if I remember correctly, which made python-eventlet fail to
build during the transition. Controlling python-eventlet wouldn't have
changed the issue with OpenSSL, which was due to licensing issues.
Ubuntu was just not affected because things are migrated from Debian
SID all at once, but it's not a question of controlling packages here.

>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=634406
>
> Because of python-webob being out of date.

I've missed the reply from Jay Pipes... :(

Does that mean that Cactus cannot build with webob 1.0.8+?

> Using backports (especially locally maintained ones) means that you're
> all alone in dealing with security or any sort of other issues. That's
> the sort of stuff that makes it hard for me to sleep. Clearly, I care
> too much.

Right, that's an issue. But it depends how many things are in the
backports. To some degree, it can be tolerable. What kind of libs are
you thinking of here?

Also, if running from backports.debian.org, you are not alone: there's
the Debian security team to help, and most of the time the original
maintainer. That's of course not the case for a *private* backports repo.
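For what it's worth, tracking the official backports archive is a
one-line sources.list entry (the release name here is just the 2011
example):

```
# /etc/apt/sources.list.d/backports.list (squeeze was stable in 2011)
deb http://backports.debian.org/debian-backports squeeze-backports main
```

Packages from there are pinned low by default (the archive is marked
NotAutomatic), so nothing is upgraded to a backport unless explicitly
requested, e.g. `apt-get install -t squeeze-backports python-webob`.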

>> I don't think that c) is valid (see above), and I don't think PPAs
>> are more trustworthy than a private repository with a well
>> maintained GPG key and a ${whatever}-archive-keyring package
>> published.  As long as the release file is signed, you have the same
>> security, IMHO. Please let me know if/why I'm wrong here.
>
> It's not just about gatekeeping and signing of Packages and Release
> files. It's also about trusting its reproducibility.

Sorry, I don't understand here. Maintaining a private Debian repo is by
itself extremely simple; I don't see where the issue is. The *only*
advantage of a PPA is having automated buildds for multiple
architectures. But we only care about i386 and amd64, right?
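To illustrate how little is involved, this is roughly all the
configuration a signed reprepro-managed repository needs (codename and
key id are placeholders):

```
# repo/conf/distributions -- minimal reprepro configuration
# (placeholder codename and signing key id)
Codename: squeeze
Architectures: i386 amd64 source
Components: main
SignWith: DEADBEEF
```

After that, `reprepro -b repo includedeb squeeze foo_1.0-1_all.deb`
adds a package and regenerates GPG-signed Packages/Release files;
clients only need the corresponding archive-keyring package.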

>> Theoretically, you are right, at the exception that if there is a
>> strong technical disagreement with the maintainer of a given package,
>> then you can ask the technical committee to decide. But in reality,
>> most package maintainers will be very happy to have patches sent
>> against their package, and will accept them.
>
> I'm not saying whether any given maintainer will or will not accept
> whatever random patches we throw at them. I'm just saying they *can*,
> and I truly believe that they absolutely should be free to do so. I
> wouldn't dream of complaining to the tech board because a maintainer
> doesn't want to accept maintenance of a patch I wrote. This just makes
> it really difficult to rely on being able to get a patch accepted.

Please, can you focus on more concrete issues we might face? Which
packages? BTW, going to the technical committee really is the very last
resort, and I really hope it will never happen.

> And that's great! I'm part of the Debian Python Module Team as well.
> These efforts are excellent and I applaud them, but just like I can't
> rely on any given maintainer to accept my patches, I also can't rely
> on all our dependencies (current and future) to be maintained in such
> an open way.

Hang on here, are you talking about patches inside *upstream* code? If
so, see below.

> Let's take the Eventlet example. Had it been maintained by someone
> else, I'm far from convinced the most recent patch we added to it
> would be accepted, even if the maintainer was a very reasonable
> person. Why? Because upstream has expressed rather lukewarm feelings
> about the patch. In such a situation, it's common (and perfectly
> reasonable!) for Debian maintainers to say "no, thanks".

Here we have a real issue if upstream doesn't like our patch, because
we'll have to constantly rebase it onto each new upstream version. The
issue really is either upstream not accepting the patch, or the patch
itself not being good enough; it's *not* a packaging issue.

Saying that Ubuntu is a better choice because you/ttx/$anyone can force
the patch into Ubuntu directly is a dangerous argument, IMHO, because
we'll have the same issue in absolutely all the other distributions.
How will we ensure, for example, that the patch will be in RHEL/Fedora?
And in Debian? Will we get in touch with all the maintainers in every
distribution, in the hope that they apply the very patch that upstream
refused? Or maybe we simply don't care about other distributions? That
goes in the opposite direction to what we want to achieve here, which is
multi-distribution support (if I understand correctly).

The solution that would work on any $distribution might well be to fork
the package entirely, and make it Conflicts:/Breaks: with the upstream
one (that is, if we have enough manpower to maintain it). Of course,
talking more with upstream, so that you reach an agreement on what an
acceptable patch could be, would be even better. That would be a
solution for every distribution at once.
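Concretely, the fork's debian/control stanza would look something like
this (the package name is hypothetical; Replaces:/Provides: let it
stand in for the original):

```
Package: python-eventlet-openstack
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}
Conflicts: python-eventlet
Replaces: python-eventlet
Provides: python-eventlet
Description: OpenStack fork of Eventlet, carrying our local patches
```

Conflicts: plus Replaces: ensures apt swaps one for the other cleanly,
while Provides: keeps reverse dependencies installable.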

I hope the goal isn't just to go faster, then forget about the issue.
I can understand it on the developer's side, moving forward with a
temporary solution while doing development, but that's really a
quick-and-dirty fix that isn't maintainable in the long run in a
distribution release, and even less a long-term solution for OpenStack
itself.

Cheers,

Thomas



