[openstack-dev] [tc][python-clients] More freedom for all python clients

Ben Nemec openstack at nemebean.com
Mon Jan 26 20:49:53 UTC 2015


On 01/26/2015 02:29 PM, Robert Collins wrote:
> TripleO has done per-service venvs for a couple of years now, and it
> doesn't solve the fragility issue that our unbounded deps cause. It
> avoids most but not all conflicting deps within OpenStack, and none of
> the 'upstream broke us' cases.

Note that a lot of us (our CI included) are no longer running with
separate venvs due to the time it takes to build them.

See common-venv from
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test.sh#n21
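For illustration, the tradeoff looks roughly like the sketch below. The
service names and requirements paths are made up, and the real gate logic is
in toci_gate_test.sh; this is just a rough "one shared venv" versus "one venv
per service" comparison, not what our scripts actually do:

# Illustration only: one shared venv for every service vs. one venv per
# service. The service names and requirements paths below are made up;
# the real gate logic lives in toci_gate_test.sh.
import subprocess
import venv

SERVICES = ["nova", "neutron", "glance"]  # illustrative list, not the real set

def build_common_venv(path="/opt/stack/venv"):
    """One venv shared by all services: much faster to build, but every
    service has to agree on a single non-conflicting dependency set."""
    venv.create(path, with_pip=True)
    for svc in SERVICES:
        subprocess.check_call(
            [f"{path}/bin/pip", "install", "-r", f"{svc}/requirements.txt"])

def build_per_service_venvs(base="/opt/stack/venvs"):
    """One venv per service: isolates conflicting deps, but each venv is
    built and updated separately, which is what made it too slow for CI."""
    for svc in SERVICES:
        path = f"{base}/{svc}"
        venv.create(path, with_pip=True)
        subprocess.check_call(
            [f"{path}/bin/pip", "install", "-r", f"{svc}/requirements.txt"])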

Also, I don't see how this could work with distro packages (I'm not
saying there isn't a way, but I don't know what it would be), so I think
that's something I would definitely want an answer on before we decide
this is the way forward.

-Ben

> 
> -Rob
> 
> On 27 January 2015 at 09:01, Joe Gordon <joe.gordon0 at gmail.com> wrote:
>>
>>
>> On Wed, Jan 21, 2015 at 5:03 AM, Sean Dague <sean at dague.net> wrote:
>>>
>>> On 01/20/2015 08:15 PM, Robert Collins wrote:
>>>> On 21 January 2015 at 10:21, Clark Boylan <cboylan at sapwetik.org> wrote:
>>>> ...
>>>>> This ML thread came up in the TC meeting today and I am responding here
>>>>> to catch the thread up with the meeting. The soft update option is the
>>>>> suggested fix for non-OpenStack projects that want to have most of
>>>>> their requirements managed by global requirements.
>>>>>
>>>>> For the project structure reform opening things up, we should consider
>>>>> loosening the criteria to get on the list and make it primarily based
>>>>> on technical criteria such as py3k support, license compatibility,
>>>>> upstream support/activity, and so on (basically the current criteria
>>>>> with less of a focus on where the project comes from, if it is
>>>>> otherwise healthy). Then individual projects would choose the subset
>>>>> they need to depend on. This model should be viable with different
>>>>> domains as well if we go that route.
>>>>>
>>>>> The following is not from the TC meeting but addressing other portions
>>>>> of this conversation:
>>>>>
>>>>> At least one concern with this option is that as the total number of
>>>>> requirements goes up, debugging installation conflicts becomes more
>>>>> difficult too. I have suggested that we could write tools to help with
>>>>> this, such as install bisection based on pip logs, but these tools are
>>>>> still theoretical so I may be overestimating their usefulness.
>>>>>
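A very rough sketch of what such install bisection could look like, written
as a binary search over the requirements list rather than over pip logs.
Nothing below is an existing tool, and the helper names are made up:

# Hypothetical sketch of "install bisection": binary-search a requirements
# list for the first entry whose addition makes pip fail to install.
import subprocess
import sys
import tempfile

def installs_cleanly(requirements):
    """Return True if pip can install the given requirement lines in a
    throwaway venv."""
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.check_call([sys.executable, "-m", "venv", tmp])
        reqs_path = f"{tmp}/reqs.txt"
        with open(reqs_path, "w") as f:
            f.write("\n".join(requirements) + "\n")
        result = subprocess.run(
            [f"{tmp}/bin/pip", "install", "-r", reqs_path],
            capture_output=True,
        )
        return result.returncode == 0

def first_breaking_requirement(requirements):
    """Assuming the full list fails and the empty list succeeds, return the
    requirement whose addition first breaks the install."""
    good, bad = 0, len(requirements)   # good: prefix length known to work
    while bad - good > 1:              # bad: prefix length known to fail
        mid = (good + bad) // 2
        if installs_cleanly(requirements[:mid]):
            good = mid
        else:
            bad = mid
    return requirements[bad - 1]

Note this assumes failures are monotonic in the prefix, which real conflicts
are not always, so a practical tool would need to be smarter than this.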
>>>>> To address the community scaling aspect, I think you push a lot of work
>>>>> back on deployers/users if we don't curate requirements for anything
>>>>> that ends up tagged as "production ready" (or whatever the equivalent
>>>>> tag becomes). Essentially we are saying "this doesn't scale for us, so
>>>>> now you deal with the fallout. Have fun", which isn't very friendly to
>>>>> people consuming the software. We already have an absurd number of
>>>>> requirements, and managing them has appeared to scale so far. I don't
>>>>> foresee my workload going up if we open up the list as suggested.
>>>>
>>>> Perhaps I missed something, but the initial request wasn't about
>>>> random packages, it was about other stackforge clients - these are
>>>> things in the ecosystem! I'm glad we have technical solutions, but it
>>>> just seems odd to me that adding them would ever have been
>>>> controversial.
>>>
>>> Well, I think Clark and I have different opinions on how much of a pain
>>> unwinding the requirements is, and how long these issues tend to leave
>>> the gate broken. I am happy to also put it in a "somebody else's
>>> problem" field for resolving the issues. :)
>>>
>>> Honestly, I think we're actually at a different point, where we need to
>>> stop assuming that the sane way to deal with Python is to install
>>> everything into the system Python, and instead just put every service in
>>> a venv and get rid of global requirements entirely. Global requirements
>>> was a scaling fix for getting to 10 coexisting projects. I don't think
>>> it actually works well with 50 ecosystem projects, which is why I
>>> proposed the domains solution instead.
>>>
>>
>> ++ Using per-service virtual environments would help us avoid a whole
>> class of nasty issues. On the flip side, doing this makes it harder for
>> distros to find a set of non-conflicting dependencies, etc.
>>
>>>
>>>> On the pip solver side, Joe Gordon was working on a thing to install a
>>>> fixed set of packages by bypassing the pip resolver... not sure how
>>>> that's progressing.
>>>
>>> I think if we are talking seriously about bypassing the pip resolver, we
>>> should step back and think about that fact. Because now we're producing
>>> a custom installation process that will produce an answer for us, which
>>> is completely different from any answer that anyone else is getting for
>>> how to get a coherent system.
>>
>>
>> Fully agreed. I am looking into avoiding pip's dependency solver for
>> stable branches only right now, but using per-service venvs would be even
>> better.
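One way such a resolver bypass could look for stable branches: resolve and
pin everything once, then install the pinned list with --no-deps so pip makes
no decisions of its own. A minimal sketch, with a made-up file name:

# Sketch only: installing a pre-resolved, fully pinned requirements list
# without letting pip's resolver make any decisions. Every transitive
# dependency is assumed to already be pinned in the file.
import subprocess
import sys

def install_pinned(pinned_file="pinned-requirements.txt"):
    """Install exact pins with --no-deps so pip does no dependency
    resolution of its own."""
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--no-deps",        # skip pip's resolver entirely
        "-r", pinned_file,
    ])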
>>
>>>
>>>
>>>         -Sean
>>>
>>> --
>>> Sean Dague
>>> http://dague.net
>>>