[openstack-dev] [midonet] Split up python-midonetclient
Galo Navarro
galo at midokura.com
Thu Dec 10 07:46:33 UTC 2015
On 10 December 2015 at 04:35, Sandro Mathys <sandro at midokura.com> wrote:
> On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro <galo at midokura.com> wrote:
> > Hi,
> >
> >> I think the goal of this split is well explained by Sandro in the first
> >> emails of the thread:
> >>
> >> 1. Downstream packaging
> >> 2. Tagging the delivery properly as a library
> >> 3. Adding as a project on pypi
> >
> > Not really, because (1) and (2) are *a consequence* of the repo split.
> > Not a cause. Please correct me if I'm reading wrong but he's saying:
> >
> > - I want tarballs
> > - To produce tarballs, I want a separate repo, and separate repos have
> > (1), (2) as requirements.
>
> No, they're all goals, not consequences. Sorry, I didn't notice it
> could be interpreted differently.
>
I beg to disagree. The location of code is not a goal in itself. Producing
artifacts such as tarballs is.
> > This looks more accurate: you're actually not asking for a tarball. You're
> > asking to be compatible with a system that produces tarballs off a repo.
> > This is very different :)
> >
> > So, questions: we have a standalone mirror of the repo that could be used
> > for this purpose. Say we move the mirror to OSt infra, would things work?
>
> Good point. Actually, no. The mirror can't go into OSt infra as they
> don't allow direct pushes to repos - they need to go through reviews.
> Of course, we could still have a mirror on GitHub in midonet/ but that
> might cause us a lot of trouble.
>
I don't follow. Where a repo is hosted is orthogonal to how commits are
added. If commits to the mirror must go via gerrit, this is perfectly
doable.
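Just to illustrate the kind of script I have in mind (a rough sketch only:
the URLs and paths below are made up, and it assumes the local working copy
is a `git clone --mirror` of the gerrit-managed repo, with origin pointing
at it):

    #!/usr/bin/env python
    # Rough sketch: keep a read-only GitHub mirror in sync with the
    # gerrit-managed repo. URLs and paths are placeholders.
    import subprocess

    MIRROR_REPO = "git@github.com:midonet/python-midonetclient.git"  # placeholder
    WORKDIR = "/var/lib/mirrors/python-midonetclient.git"  # `git clone --mirror` of the gerrit repo

    def run(*cmd):
        # Fail loudly so a broken sync doesn't go unnoticed.
        subprocess.check_call(cmd)

    def sync_mirror():
        # Pull whatever has passed review in gerrit...
        run("git", "-C", WORKDIR, "fetch", "--prune", "origin")
        # ...and push all refs verbatim to the read-only mirror.
        run("git", "-C", WORKDIR, "push", "--mirror", MIRROR_REPO)

    if __name__ == "__main__":
        sync_mirror()

Run something like that from cron and the GitHub copy stays a read-only
mirror downstream of gerrit.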
> > But create a lot of other problems in development. With a very important
> > difference: the pain created by the mirror solution is solved cheaply with
> > software (e.g.: as you know, with a script). OTOH, the pain created by
> > splitting the repo is paid in very costly human resources.
>
> Adding the PMC as a submodule should reduce these costs significantly,
> no? Of course, when working on the PMC, sometimes (or often, even)
> there will be the need for two review requests instead of one, but the
> content and discussion of those should be nearly identical, so the
> actual overhead is fairly small. I figure I'm missing a few things here
> - what other pains would this add?
>
No, it doesn't make things easier. We already tried.
Guillermo explained a few reasons already in his email.
> > I do get this point and it's a major concern. IMO we should split it into
> > a different conversation, as it's not related to where the PMC lives but
> > to a more general question: do we really need a repo per package?
>
> No, we don't. Not per package as you outlined them earlier: agent,
> cluster, etc.
>
> Like Jaume, I know the RPM side much better than the DEB side. So for
> RPM, one source package (srpm) can create several binary packages
> (rpm). Therefore, one repo/tarball (there's an expected 1:1 relation
> between these two) can be used for several packages.
>
> But there are different policies for services and clients, e.g. the
> services are only packaged for servers but the clients both for
> servers and workstations. Therefore, they are kept in separate srpms.
>
> Additionally, it's much easier to maintain java and python code in
> separate srpms/rpms - mostly due to (build) dependencies.
>
What's your rationale for saying this? Could you point to specific
maintenance tasks that are made easier by having different languages in
separate repos?
g