<!DOCTYPE html>
<html>
<head>
<title></title>
</head>
<body><div> </div>
<div>On Sun, Jul 13, 2014, at 02:36 PM, James Polley wrote:<br></div>
<blockquote type="cite"><div dir="ltr"><div><div><blockquote style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div> </div>
</div>
<div>We're also thinking about how we continue to offer the pre-built wheels<br></div>
<div>
for each of our build platforms. For infra, what I'm thinking is:<br></div>
<div> </div>
<div>
On each mirror slave (We have one for each OS combo we use), do<br></div>
<div>
something similar to:<br></div>
<div> </div>
<div>
pip wheel -r global-requirements.txt<br></div>
<div>
rsync $wheelhouse <a href="http://pypi.openstack.org/$(lsb_release)" target="_blank">pypi.openstack.org/$(lsb_release)</a><br></div>
<div> </div>
<div>
This may require keeping pypi-mirror and using an option to only do<br></div>
<div>
wheel building so that we can get the directory publication split. Ok. I<br></div>
<div>
got bored and wrote that:<br></div>
<div> </div>
<div><a href="https://review.openstack.org/106638" target="_blank">https://review.openstack.org/106638</a><br></div>
<div> </div>
<div>
So if we land that, you can do:<br></div>
<div> </div>
<div>
pip wheel -r global-requirements.txt<br></div>
<div>
run-mirror --wheels-only --wheelhouse=wheelhouse --wheeldest=mirror<br></div>
<div>
rsync -avz mirror pypi.openstack.org:/srv/mirror<br></div>
<div> </div>
<div>
If we went the devpi route, we could do:<br></div>
<div> </div>
<div>
pip wheel -r global-requirements.txt<br></div>
<div>
for pkg in $wheelhouse; do<br></div>
<div>
devpi upload $pkg<br></div>
<div>
done<br></div>
<div> </div>
<div>
And put that into a cron.<br></div>
</blockquote><div> </div>
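The quoted build-and-upload loop could be wrapped in one small function that a cron-invoked script calls; a minimal sketch, assuming pip and devpi-client are installed and `devpi use`/`devpi login` have already been run (the function name and paths are hypothetical):

```shell
# Sketch of the quoted wheel-build-and-upload steps. The name
# build_and_upload_wheels is hypothetical; assumes devpi-client is
# installed and already pointed at (and logged in to) an index.
build_and_upload_wheels() {
    local reqs="$1" wheelhouse="$2"
    # Build wheels for everything in the requirements file
    pip wheel -r "$reqs" -w "$wheelhouse"
    # Upload each built wheel to the devpi index
    for pkg in "$wheelhouse"/*.whl; do
        devpi upload "$pkg"
    done
}
```

A cron entry would then just invoke a script containing this, e.g. nightly with `global-requirements.txt` and a per-platform wheelhouse directory.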
<div>Obviously "keeping pypi-mirror" would require the least amount of change to how we suggest developers set up their systems.<br></div>
<div> </div>
<div>I think the devpi option seems fairly reasonable too. It looks like it's easier (and faster, and less bandwidth-consuming) than setting up bandersnatch or apt-mirror, which we currently suggest people consider. It doesn't look any more heavyweight than having a squid proxy for caching, which we currently suggest as a bare minimum.<br></div>
<div> </div>
<div>For an individual dev testing their own setup, I think we need a slightly different approach from the infra approach listed above though. I'm assuming that it's possible to probe the package index to determine if a wheel is available for a particular version of a package yet. If that's the case, we should be able to tweak tools like os-svc-install to notice when no wheel is available, and build and upload the wheel.<br></div>
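Probing the index for a wheel can be approximated with pip itself, by asking for binary-only downloads and checking the exit status; a rough sketch, assuming a pip new enough to support `pip download --only-binary` (the function name is hypothetical):

```shell
# Hypothetical probe: returns 0 if a wheel (with no sdist fallback) can
# be fetched from the configured index for the given requirement, e.g.
#   wheel_available 'lxml==3.3.5'
# Assumes a pip version that supports `pip download --only-binary`.
wheel_available() {
    local tmp rc
    tmp=$(mktemp -d)
    pip download --only-binary=:all: --no-deps --dest "$tmp" "$1" >/dev/null 2>&1
    rc=$?
    rm -rf "$tmp"
    return $rc
}
```

A tool like os-svc-install could call this before installing, and fall back to `pip wheel` plus an upload when it returns non-zero.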
<div> </div>
<div>I think this should give us a good balance: each build (except the first) uses wheels to save time, builds still get the latest packages (at least as of the last time the system was online), and the user doesn't need to remember to manually update the wheels when they're online.<br></div>
</div>
</div>
</div>
</blockquote><div> </div>
<div>This gave me an idea:<br></div>
<div>There was talk about pip being able to use a wheel cache (wheelhouse). Can we bind-mount an arch-specific wheelhouse from the hypervisor into our chroots as we build? This would let people get most of the wheel speedup while doing almost no special configuration.<br></div>
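On the hypervisor side, the bind-mount could look something like this; a rough sketch only, with all paths hypothetical and root privileges assumed for the mount:

```shell
# Hypothetical helper: expose the host's arch-specific wheelhouse inside
# a chroot so pip running in the chroot can resolve wheels from it.
# All paths are examples; `mount --bind` requires root.
share_wheelhouse() {
    local host_cache="$1" chroot_dir="$2"
    local target="$chroot_dir/var/cache/wheelhouse"
    mkdir -p "$host_cache" "$target"
    # Undo later with: umount "$target"
    mount --bind "$host_cache" "$target"
}

# Inside the chroot, pip would then be pointed at the shared wheels:
#   pip install --find-links=/var/cache/wheelhouse -r requirements.txt
```

The mount is read-write here, so wheels built inside the chroot would also land back in the host cache for the next build.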
<div> </div>
<div>-Greg<br></div>
</body>
</html>