[Openstack] Issues with Packaging and a Proposal

Soren Hansen soren at openstack.org
Thu Aug 25 10:38:25 UTC 2011


2011/8/24 Monty Taylor <mordred at inaugust.com>:
> On 08/24/2011 01:11 PM, Soren Hansen wrote:
>> 2011/8/24 Monty Taylor <mordred at inaugust.com>:
> Also, I think that our definitions of some things are different here.
> I'm not talking about reference platform for inclusion upstream. I'm
> not talking about "I'm on Ubuntu Oneiric and I want to install
> OpenStack, so I'm going to apt-get install openstack from the main
> Ubuntu repository". I imagine it will be literally YEARS before anyone
> does that.

What makes you say that?

> I expect that people running OpenStack will, as step one, add the
> OpenStack repo, or the OpenStack PPA, or wherever we publish our
> latest and greatest packages. We released Cactus 6 months ago, and we
> did it in such a way as to get it included into Ubuntu - and less than
> a month later the general consensus is "cactus is unusable"

That doesn't mean that using a Linux distro (not a PPA, an actual
distro) as a distribution channel is the wrong thing to do. It just means
that we don't trust our own code quality very much. When we released
Cactus, we were reasonably happy with it. After working on Diablo for a
little bit, we realised how many bugs we'd fixed, so if people came and
asked for help, it was entirely likely that their problems were solved
in Diablo, so we asked them to try that.

> So I'm talking about repository lifecycle from an upstream perspective
> with the assumption that I will be maintaining a repository from which
> my consumers can run OpenStack.

As Mark points out, this really is the job of distributions. We
shouldn't worry too much about it. Sure, if we want to provide bleeding
edge packages for people who want to help the testing efforts, that's
great, but setting ourselves up as a distribution channel for releases
is a major can of worms.

> Further, I am assuming that some of them will want to run current
> OpenStack on Lucid. (hi Rackspace Cloud Files) or on Debian (hi
> upcoming Rackspace Cloud Servers) or RHEL6. As upstreams, it is
> incumbent on us to provide a complete experience for that.

I disagree. We write the upstream software. If someone has an interest
in a particular version of a particular distro, they should do the work
of providing the complete experience for that.

>>> - Lack of collaboration. Related to the above, but slightly different.
>>> Since the Rackspace internal deployment team have to maintain their own
>>> packages, it means they are not using the published packages and we are
>>> not benefiting from their real-world deployment experience.
>>> Additionally, the Cloudbuilders consulting team doesn't use our
>>> published packages, and similarly we aren't really benefiting from them
>>> directly working with us on packaging.
>>
>> Rackspace isn't doing their own packaging because of (lack of) Debian
>> support. If they did, they'd have realised almost immediately that the
>> packages actually build on Debian. They're doing it because there'll
>> supposedly be differences in the behaviour Ubuntu (or any other distro
>> for that matter) will want and what Rackspace will want from the
>> packages. I've decried this divergence countless times, but to no
>> avail.
> AFAIK, the packages do _not_ build on debian, nor have they for a while.

Because of dependencies! Our packages (i.e. the stuff we actually control)
have Debian support. Debian just does not have OpenStack support, so to
speak.

>>> - PPAs are Async. This makes integration testing harder.
>>
>> PPAs were chosen because
>> a) they are almost exactly identical to the builders for Ubuntu
>> proper, so the on-commit-to-trunk builds serve as a way to know
>> immediately if we introduce a change that will break when uploaded to
>> Ubuntu proper. So, they're the closest we'll get to integration tests
>> for real Ubuntu builds,
>> b) we don't have to maintain them, and
> This was a concern when they were chosen. Now? Not a big deal. We
> maintain lots of things at this point.

Indeed we do. I don't know about you, but I think we're plenty busy as
it is. Maintaining build daemons properly is a non-trivial amount of
work.

> (don't get me started on the broken 2.6.6-3~ thing)

I'll probably regret this, but what's the problem?

>>> If it's building them locally, and that's what's being tested, why
>>> wouldn't that be what's published?
>> Because of c) above and because PPAs are trustworthy. There's no way
>> you can sneak extra packages into the chroot or otherwise compromise
>> its integrity, there's no way you can reach the internet from the
>> builders (and vice versa), and they're built in a trusted environment
>> (run by people who are used to managing this exact sort of
>> infrastructure).
> This is not unique to PPAs. Our repo can be just as trustworthy. In
> fact, we're shipping software which runs large portions of your data
> center, so if you don't trust us to make it, well, you're sort of
> screwed.

Just because we're good at writing OpenStack doesn't mean we're good at
maintaining Debian-style build infrastructure. They're very different
things. I trust Canonical's buildd admins and maintainers much more than
you or me to run these things. Sorry if you find that offensive, but
that's the way I feel.

> Same as our current unit-test runners ... the system is locked down,
> the dev can't sneak stuff in.

No, but I can. Do you trust me? :) Two factors in trustworthiness
that we (at least currently) don't offer:

a) Reproducibility, thanks to being cut off from the outside. The PPA
builds are completely isolated from the internet. This ensures there are
no side effects caused by builds connecting out (expectedly or not) or
by something listening for connections that someone then connects to. We
could offer this reasonably easily, but I don't believe we currently do.

b) If anyone has doubts about whether some side effect could come into
play, or whether a supposed attack vector is realistic, they can test it
on the PPA builders: just throw a build at them and see if it does what
you expect. So it's reasonably simple for third parties to do some
black-box auditing.

> BTW - the "can't reach the internet from the builders" is only really a
> concern in a project like Debian and Ubuntu where you have random people
> uploading random un-reviewed things.

How do you figure that? As you pointed out yourself, the Sphinx build
tried to connect to the Internet. I don't know exactly what it was
trying to fetch, but it seems obvious that whatever it does fetch
probably factors into the build in one way or another. If what it gets
back is broken (or worse: compromised) that might break our stuff as
well.
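To make the reproducibility point concrete, here's a minimal sketch (hypothetical function names, not Sphinx's actual behaviour) of a build step with a silent network fallback. The same source tree produces different artifacts depending on network state:

```python
# Hypothetical sketch: a build step with a silent network fallback.
# If the fetch fails (as on a network-isolated PPA builder), the build
# still "succeeds" -- but with different content than an online build.

def fetch_remote_inventory(online):
    """Stand-in for e.g. a doc build fetching a cross-reference inventory."""
    if not online:
        raise ConnectionError("no route to host")
    return {"webob": "http://example.invalid/webob-docs"}

def build_docs(online):
    try:
        inventory = fetch_remote_inventory(online)
    except ConnectionError:
        inventory = {}  # silent fallback: build continues with less data
    return {"pages": 10, "cross_references": len(inventory)}

# Two builds of identical source, differing only in network state,
# yield different output -- exactly the side effect an isolated
# builder makes visible instead of papering over.
online_build = build_docs(online=True)
offline_build = build_docs(online=False)
```

An isolated builder turns that divergence into a hard failure (or at least a visible one) rather than a quiet difference nobody notices.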

> As I mentioned earlier (and this comes from someone with a great deal
> of love for Ubuntu) Ubuntu's goal and needs are not necessarily ours.
> Ubuntu devs are primarily focused on release+1 (because that's their
> job) WE are primarily focused on release, release-1 and last-LTS.

Are we?

> This makes things like 2.6.6-3~ thing have very different emphasis and
> meaning depending on which side of the fence you stand.

Ok, you really have to elaborate on this now :) I presume you're
referring to something about dh_python2?

>> I don't understand.  a) Building a chroot and setting up sbuild is a
>> matter of installing one package and running one command,
> Still doesn't get you the exact same as a PPA builder. The PPA builder has
> networking turned off, which confused the sphinx build a while back
> until we figured out what was going on. The PPA build also has not
> only the packages inside of itself, but also other build-dep repos it
> can use. Making sure you get a chroot with the right repos takes a
> decent amount of knowledge about launchpad and ppas, actually.

And setting up another sort of build chroot requires other bits of
knowledge, surely?

>> b) further up you argued that building packages ourselves is pretty
>> much identical to building in a PPA and further down you say you have
>> puppet magic to create a build environment for devs. Those two things
>> put together sure sound to me like they can get infinitesimally close
>> to the same sort of environment as you see in Launchpad,
> Yes. Same sort of env.

...yet you say "[...]those build errors are not reproducible by devs
locally, [...]"?

> More importantly, though, is the fact that all of the build output in
> the Launchpad PPA is hidden.
>
> Didja know that Glance hasn't built for Maverick or Lucid in a bit?

Yes. I get an e-mail about it every time. I probably know within a
second of the build completing.

> That python-novaclient failed all of its latest PPA builds?

Yup. I have the e-mails. I even mark them as important, so I see them
instantly.

>>> - Murky backported deps. Some of the backported deps in the PPAs
>>> came from a dir on my laptop. Some are from Thierry's laptop. Some
>>> are from Soren's laptop. There is neither documentation on how they
>>> got there, how we decided to put them there, how we built them or
>>> how to continue to collaborate on them.
>> Sorry, what? They're all built in PPAs. That's about as clean-room as
>> it gets. Everyone is free to go and see and inspect them. The diff
>> between that version upon which the backport is based and what got
>> uploaded is trivially easy to extract. Documenting how it's done
>> sounds pointless. Anyone who's familiar with Ubuntu/Debian packaging
>> will find no surprises. If people want to look at OpenStack code,
>> it's not our job to document what Python is all about. If people want
>> to see how I backported something, it's not my job to teach them
>> packaging first.
> nono. Not how they're built - where the source tree lives that you
> (or I) built the source package from, which then gets built on builders
> that aren't controlled by the Jenkins that controls everything else we do.
>
> If you (or I) do a one-off backport, we then upload a source package and
> then, often, don't shove the code anywhere. That's because a lot of the
> time it's easiest just to do dget, hack version, debuild -S, dput.

I can fix this right now:

Any backport I've done is not in version control. I could have put it in
version control, but it would contain no information that isn't
otherwise readily available.

I don't intend to keep working on any of them until such time that we
need an even newer version of whatever it is.

> Part of the problem here is that we have an institutional process
> which depends on a special quality of one of the OpenStack devs
> (namely, the ability of you or ttx to upload directly in to Ubuntu)
> That's FANTASTIC, but it can't really be the basis of ongoing
> OpenStack process, as it is not a process that someone else can just
> pick up.

We also have a healthy relationship with the Ubuntu server team and with
Canonical. They've committed to supporting OpenStack. Should I get hit
by a bus, I fully believe they'd pick up right where I left off. This is
not a special ability only Rick, ttx and myself have. We just got the
ball rolling and built the relationship. None of us are required for
this to continue to work.

> The point of a good CI system is to head that off at the pass. Nothing
> about our current process should allow a bug in Swift to bring DEV to
> a screeching halt, but if it causes integration tests to bork, well,
> then people probably want to know about it, no?

Have we moved away from the intent to have integration tests be able to
block code from landing?

>> b) If someone wants to help test the bleeding edge of Swift, but
>> isn't really interested in whatever breakage we introduce in Nova,
>> having separate PPA's makes it straightforward to just test one of
>> them.
> I've heard this as a theoretical several times, but I haven't heard
> the actual call for "I REALLY want to install bleeding edge swift,
> with stable nova" on the same machine. In fact, the chances of running
> both swift and nova on the same box in a real environment are pretty
> low.

Do you think the chances of running either swift or nova *from trunk*
in a *real environment* are very high?

>> For final releases, we put everything in the same PPA. I've written
>> tools to keep track of divergence between the PPA's and to bring them
>> in sync so the final consolidation doesn't hold any surprises.
>> Putting everything in the same PPA is a trivial change, though.
> Where are these tools?

lp:~openstack-release/ubuntu-archive-tools/openstack/

I blogged about them a couple of months ago:

   http://blog.linux2go.dk/2011/06/17/ppa-management-tools/


> Where do they run? When?

You run them manually. They require you to make decisions and verify
stuff, so they can't run automatically.

> From what source do they bring PPAs in sync?

Each other. One of the tools in there can look at a number of PPA's at
the same time and tell you if there's version skew (if, say,
nova-core/trunk has webob 0.9.8, but swift-core/trunk only has webob
0.9.7).

> How do they decide which version of eventlet needs to live in the nova
> and the swift PPA?

They pick the newest.
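To make that concrete, here's a rough sketch of the skew check and the pick-the-newest resolution. This is hypothetical code, not the actual tools in the branch above (which talk to Launchpad), and it compares versions as plain dotted integers, whereas real Debian version comparison also handles epochs, tildes, and non-numeric parts:

```python
# Hypothetical sketch of PPA version-skew detection. PPA contents are
# modelled as dicts of package -> version; the real tools query
# Launchpad for this data.

def version_key(version):
    # Simplified: treat versions as dotted integers. Real Debian
    # comparison (dpkg --compare-versions) is considerably richer.
    return tuple(int(part) for part in version.split("."))

def find_skew(ppas):
    """Report packages whose versions differ across PPAs."""
    skew = {}
    all_packages = set().union(*(p.keys() for p in ppas.values()))
    for pkg in sorted(all_packages):
        versions = {name: p[pkg] for name, p in ppas.items() if pkg in p}
        if len(set(versions.values())) > 1:
            skew[pkg] = versions
    return skew

def pick_newest(skew):
    """Resolve each skewed package to its newest version."""
    return {pkg: max(versions.values(), key=version_key)
            for pkg, versions in skew.items()}

# The webob example from above: nova-core/trunk has 0.9.8 while
# swift-core/trunk only has 0.9.7.
ppas = {
    "nova-core/trunk":  {"webob": "0.9.8", "eventlet": "0.9.16"},
    "swift-core/trunk": {"webob": "0.9.7", "eventlet": "0.9.16"},
}
skew = find_skew(ppas)        # flags webob, not eventlet
resolved = pick_newest(skew)  # webob resolves to 0.9.8
```

The "require you to make decisions" part is the step this sketch leaves out: a human still confirms that taking the newest version is actually safe before anything gets uploaded.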


-- 
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/
