[openstack-dev] [midonet] Split up python-midonetclient
sandro at midokura.com
Thu Dec 10 03:35:49 UTC 2015
On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro <galo at midokura.com> wrote:
>> I think the goal of this split is well explained by Sandro in the first
>> mails of the chain:
>> 1. Downstream packaging
>> 2. Tagging the delivery properly as a library
>> 3. Adding as a project on pypi
> Not really, because (1) and (2) are *a consequence* of the repo split. Not a
> cause. Please correct me if I'm reading wrong but he's saying:
> - I want tarballs
> - To produce tarballs, I want a separate repo, and separate repos have (1),
> (2) as requirements.
No, they're all goals, not consequences. Sorry, I didn't notice that it
could be interpreted differently.
> So this is where I'm going: producing a tarball of pyc does *not* require a
> separate repo. If we don't need a new repo, we don't need to do all the
> things that a separate repo requires.
>> OpenStack provides us a tarballs web page for each branch of each
>> of the infrastructure projects.
>> Then, projects like Delorean can allow us to download these tarball
>> branches, create the packages and host them in a target repository
>> for each of the rpm-like distributions. I am pretty sure that there
>> is something similar for Ubuntu.
> This looks more accurate: you're actually not asking for a tarball. You're
> asking for being compatible with a system that produces tarballs off a repo.
> This is very different :)
> So questions: we have a standalone mirror of the repo, that could be used
> for this purpose. Say we move the mirror to OSt infra, would things work?
Good point. Actually, no. The mirror can't go into OSt infra as they
don't allow direct pushes to repos - they need to go through reviews.
Of course, we could still have a mirror on GitHub in midonet/ but that
might cause us a lot of trouble.
>> Everything is done in a very straightforward and standardized way,
>> because every repo has its own deliverable. You can look at how they
>> are packaged and you won't see too many differences between them.
>> Packaging python-midonetclient will be trivial if it is in a single
>> repo. It will be
> But it would create a lot of other problems in development. With a very
> important difference: the pain created by the mirror solution is solved
> cheaply with software (e.g.: as you know, with a script). OTOH, the pain
> created by splitting the repo is paid in very costly human resources.
Adding the PMC as a submodule should reduce these costs significantly,
no? Of course, when working on the PMC, sometimes (or often, even)
there will be the need for two review requests instead of one, but the
content and discussion of those should be nearly identical, so the
actual overhead is fairly small. I figure I'm missing a few things here
- what other pains would this add?
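If it helps to picture the workflow, here's a minimal sketch of the
submodule idea. The repo paths are local stand-ins for illustration, not
the real midonet/python-midonetclient locations:

```shell
# Rough sketch: add python-midonetclient (PMC) as a submodule of the
# main midonet repo. All paths/identities here are demo stand-ins.
set -e
tmp=$(mktemp -d)

# Stand-in for the standalone python-midonetclient repo
git init -q "$tmp/pmc"
git -C "$tmp/pmc" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "PMC initial commit"

# Stand-in for the main midonet repo
git init -q "$tmp/midonet"
cd "$tmp/midonet"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "midonet initial commit"

# Add PMC as a submodule; midonet then pins a specific PMC commit.
# (protocol.file.allow is only needed because this demo clones from a
# local path; it's not part of the real workflow.)
git -c protocol.file.allow=always submodule add "$tmp/pmc" python-midonetclient
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add python-midonetclient as a submodule"

git submodule status
```

The main repo only records a pinned PMC commit in `.gitmodules` plus a
gitlink, which is why the second review request is mostly mechanical.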
>> complicated and we'll have to do tricky things if it is a directory inside
>> the midonet repo. And I am not
>> sure if Ubuntu and RDO community will allow us to have weird packaging
>> metadata repos.
> I do get this point and it's a major concern, IMO we should split to a
> different conversation as it's not related to where PYC lives, but to a more
> general question: do we really need a repo per package?
No, we don't. Not per package as you outlined them earlier: agent, cluster, etc.
Like Jaume, I know the RPM side much better than the DEB side. So for
RPM, one source package (srpm) can create several binary packages
(rpms). Therefore, one repo/tarball (there's an expected 1:1 relation
between these two) can be used for several packages.
But there are different policies for services and clients, e.g. the
services are packaged only for servers, while the clients are packaged
for both servers and workstations. Therefore, they are kept in separate
srpms. Additionally, it's much easier to maintain Java and Python code
in separate srpms/rpms - mostly due to (build) dependencies.
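To make the "one srpm, several rpms" point concrete, here's a rough spec
file fragment (all names, versions and descriptions are made up for
illustration, not taken from the actual midonet packaging):

```spec
# Hypothetical spec fragment: a single source package (srpm) that
# produces two binary packages (rpms) via a subpackage declaration.
Name:           midonet
Version:        5.0.0
Release:        1%{?dist}
Summary:        MidoNet network virtualization platform
License:        ASL 2.0
Source0:        midonet-%{version}.tar.gz

%description
Main package (the services, packaged only for servers).

# Subpackage: built from the same srpm, but shipped as its own rpm,
# so it can follow its own install policy (servers and workstations).
%package -n python-midonetclient
Summary:        Python client for the MidoNet API
%description -n python-midonetclient
Client package, installable independently of the services.
```

Both rpms come out of one `rpmbuild` run on one tarball, which is why
the 1:1 repo-to-srpm relation doesn't force a 1:1 repo-to-rpm relation.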
> Like Guillermo and myself said before, the midonet repo generates 4 packages,
> and this will grow. If having a package per repo is really a strong
> requirement, there is *a lot* of work ahead, so we need to start talking
> about this now. But like I said, it's orthogonal to the PYC points above.
It really shouldn't be necessary to split up agent, cluster, etc.
Unless maybe they are _very_ loosely coupled and there's a case where
it makes _a lot_ of sense to operate different versions of each
component together over an extended period of time (e.g. not just to
upgrade one at a time), I guess. I added some emphasis to that sentence
because the mere possibility won't justify this - there must be a real
need for it.