[openstack-dev] Jenkins and patch approval

James E. Blair jeblair at openstack.org
Wed Feb 20 00:10:19 UTC 2013


Monty Taylor <mordred at inaugust.com> writes:

> On 02/19/2013 08:22 AM, Daniel P. Berrange wrote:
>> On Tue, Feb 19, 2013 at 05:15:31PM +0100, Thierry Carrez wrote:
>>> Daniel P. Berrange wrote:
>>>> On Tue, Feb 19, 2013 at 04:59:30PM +0100, Thierry Carrez wrote:
>>>>> Gary Kotton wrote:
>>>>>> On 02/19/2013 04:14 PM, Thierry Carrez wrote:
>>>>>>> Gary Kotton wrote:
>>>>>>>> A number of patches approved today seem to be in a state of limbo.
>>>>>>>> Anyone have any idea?
>>>>>>> Which ones? In limbo waiting for what? Tests? Merges?
>>>>>>> I've kept an eye on the queue and it seems to be pretty active...
>>>>>>
>>>>>> Humble apologies. There were a number from the quantum client:
>>>>>>     https://review.openstack.org/#/c/21064/
>>>>>>
>>>>>> Rest have gone through - took about 5 hours...
>>>>>
>>>>> FWIW, most tests are failing right now due to some PyPI-related issue.
>>>>> Hopefully we can get that sorted out before it breaks our feature freeze
>>>>> day completely :)
>>>>
>>>> It would be nice to take 3rd party servers out of the loop entirely
>>>> for our build/test processes. Is there any existing RFE to set up a
>>>> mirror of the pypi.python.org content that OpenStack uses, and point
>>>> gerrit at that instead, so we're isolated from all potential python
>>>> infrastructure problems & have everything under our direct control?
>>>
>>> That's what we do AFAICT. Apparently it's just that some component
>>> insisted on getting something directly from PyPI... the issue flew
>>> under the radar until exposed by PyPI failures.
>
> Yup. In fact, the amount of fail the project would see if we _did_
> regularly hit external machines would be massive. It's the same reason
> we get touchy anytime anyone tries to put github fetches into something. :)
>
>> If anyone has any info on how this is done, I'd be interested to know
>> what approach OpenStack CI is using for this. I'd like to figure out
>> how I can get something similar for personal developer infrastructure
>> environments, so they avoid hitting pypi too.
>
> We have plenty of info on how it's done! We have a couple of scripts in
> the jeepyb project (which is pip installable, but can also be found at
> https://github.com/openstack-infra/jeepyb). One of them clones all of the
> projects that are listed in a file, looks in all of their branches for
> requirements.txt or tools/pip-requires or tools/test-requires, and then
> runs a pip download on those to populate a pip download cache.
>
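As a rough sketch, that first script boils down to something like the
following (the project list format, the paths, and the use of the newer
"pip download" subcommand are illustrative assumptions on my part, not
the actual jeepyb code):

    import os
    import subprocess

    PROJECTS_FILE = "projects.txt"         # one git URL per line (assumed format)
    WORKDIR = "/var/lib/pypi-mirror/src"   # where the projects get cloned
    CACHE = "/var/lib/pypi-mirror/cache"   # pip download cache to populate

    REQUIREMENT_FILES = ("requirements.txt",
                         "tools/pip-requires",
                         "tools/test-requires")

    def clone_or_update(url):
        name = url.rstrip("/").split("/")[-1].replace(".git", "")
        path = os.path.join(WORKDIR, name)
        if os.path.isdir(path):
            subprocess.check_call(["git", "-C", path, "fetch", "--all"])
        else:
            subprocess.check_call(["git", "clone", url, path])
        return path

    def remote_branches(path):
        out = subprocess.check_output(["git", "-C", path, "branch", "-r"])
        return [line.strip() for line in out.decode().splitlines()
                if "->" not in line]

    def download_requirements(path):
        for branch in remote_branches(path):
            subprocess.check_call(["git", "-C", path, "checkout", branch])
            for req in REQUIREMENT_FILES:
                req_path = os.path.join(path, req)
                if os.path.exists(req_path):
                    subprocess.check_call(
                        ["pip", "download", "-d", CACHE, "-r", req_path])

    with open(PROJECTS_FILE) as f:
        for url in f.read().split():
            download_requirements(clone_or_update(url))
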
> Then we have a different script that can turn a pip download cache into
> a pypi structure, which we serve as static files via apache.
>
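Again just a sketch: the second script essentially re-lays the flat
download cache out in the /simple/ directory structure that pip expects,
so the files can be served statically. The name parsing and
normalization below are my own simplification, not the real
openstack-infra code:

    import os
    import re
    import shutil

    CACHE = "/var/lib/pypi-mirror/cache"   # flat pip download cache
    WEBROOT = "/srv/static/pypi"           # served by apache as static files

    def normalize(name):
        # lower-case, collapse separators to dashes (assumed normalization)
        return re.sub(r"[-_.]+", "-", name).lower()

    def package_name(filename):
        # crude: sdists look like name-1.2.3.tar.gz, wheels like name-1.2.3-*.whl
        base = filename
        for ext in (".tar.gz", ".tar.bz2", ".zip", ".whl"):
            if base.endswith(ext):
                base = base[:-len(ext)]
        return re.split(r"-\d", base, maxsplit=1)[0]

    simple_root = os.path.join(WEBROOT, "simple")
    os.makedirs(simple_root, exist_ok=True)

    # copy each downloaded file into its package's /simple/<name>/ directory
    for fname in os.listdir(CACHE):
        pkg_dir = os.path.join(simple_root, normalize(package_name(fname)))
        os.makedirs(pkg_dir, exist_ok=True)
        shutil.copy(os.path.join(CACHE, fname), pkg_dir)

    # rebuild a bare-bones index.html per package so pip can find the files
    for pkg in os.listdir(simple_root):
        pkg_dir = os.path.join(simple_root, pkg)
        files = sorted(f for f in os.listdir(pkg_dir) if f != "index.html")
        links = "\n".join('<a href="%s">%s</a><br/>' % (f, f) for f in files)
        with open(os.path.join(pkg_dir, "index.html"), "w") as index:
            index.write("<html><body>\n%s\n</body></html>\n" % links)
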
> Once that's there, we configure ~/.pip/pip.conf to point to the static
> partial pypi.
>
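For example (the hostname below is a placeholder, not our actual mirror
address):

    [global]
    index-url = http://pypi.example.org/simple/

With only index-url set, pip never falls back to pypi.python.org.
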
> The above (and everything else we do) is all in:
>
> https://github.com/openstack-infra/config/
>
> and a decent amount of it is documented at http://ci.openstack.org
>
> Specifically, for this, you'd be interested in:
>
> https://github.com/openstack-infra/config/tree/master/modules/pypimirror
>
> We're still working on a good fix for the underlying problem, which is
> that the design of pypi index lookups is TERRIBLE.

In addition to what Monty wrote above, we're waiting for the
openstack/requirements project (a centralized list of OpenStack
requirements) to be up and running before we can really cut the cord.
Once new requirements go through that repo, we can build our mirror
strictly from it, which means that every job _other_ than the one adding
a new requirement there can be configured not to touch pypi.python.org
at all.
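
To make that concrete, once the centralized list exists, building the
cache reduces to roughly this (the file name and paths are placeholders;
the real job will live in openstack-infra/config):

    import subprocess

    # Populate the mirror cache from the centralized requirements list only;
    # every other job installs with its index-url pointed at the mirror.
    subprocess.check_call([
        "pip", "download", "-d", "/var/lib/pypi-mirror/cache",
        "-r", "global-requirements.txt",  # from openstack/requirements (name assumed)
    ])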

If pypi fails again like it did this morning, then we will implement
that immediately (because the disruption from switching will be smaller
than the disruption from pypi). Otherwise, we wouldn't make a change
like that this week (and hopefully we won't have to).

-Jim


