<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Tue, May 15, 2018 at 11:42 AM Jeremy Stanley <<a href="mailto:fungi@yuggoth.org">fungi@yuggoth.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:<br>
[...]<br>
> * upload into a swift container, with an automatic expiration set, the<br>
> de-duplicated and compressed tarball created with something like:<br>
> # docker save $(docker images -q) | gzip -1 > all.tar.gz<br>
> (I expect it will be something like a 2G file)<br>
> * something similar for DLRN repos, probably; I'm not an expert on this part.<br>
> <br>
> Then those stored artifacts would be picked up by the next step in the graph,<br>
> deploying undercloud and overcloud in a single step, like:<br>
> * fetch the swift containers with repos and container images<br>
[...]<br>
<br>
I do worry a little about network fragility here, as well as<br>
extremely variable performance. Randomly-selected job nodes could be<br>
shuffling those files halfway across the globe so either upload or<br>
download (or both) will experience high round-trip latency as well<br>
as potentially constrained throughput, packet loss,<br>
disconnects/interruptions and so on... all the things we deal with<br>
when trying to rely on the Internet, except magnified by the<br>
quantity of data being transferred about.<br>
<br>
Ultimately still worth trying, I think, but just keep in mind it may<br>
introduce more issues than it solves.<br>
-- <br>
Jeremy Stanley<br></blockquote><div><br></div><div>Question... Suppose we build or update the containers that need an update (and, I'm assuming, the overcloud images as well) in a parent job.</div><div><br></div><div>Would the content then sync to a Swift server at a central point for ALL the OpenStack providers, or would it be sync'd to each cloud?</div><div><br></div><div>Not to throw too much cold water on the idea, but...</div><div>I wonder if the time to upload and download the containers and images would significantly reduce any advantage this process has.</div><div><br></div><div>Although centralizing the container updates and images on a per-check-job basis sounds attractive, I get the sense we need to be very careful and fully vet the idea. At the moment it's also (maybe) an optimization, so I don't see it as a very high priority.</div><div><br></div><div>Let's bring the discussion to the TripleO meeting next week. Thanks all!</div>
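<div><br></div><div>For reference, here is a rough sketch of what the upload/fetch steps could look like, assuming the python-swiftclient CLI and a hypothetical "tripleo-ci-artifacts" container; Swift's X-Delete-After header would give the automatic expiration Bogdan mentioned:</div><div><br></div><div># save and gzip-compress every local image into a single tarball<br>docker save $(docker images -q) | gzip -1 > all.tar.gz<br># upload to Swift with a 24-hour auto-expiry on the object<br>swift upload --header "X-Delete-After: 86400" tripleo-ci-artifacts all.tar.gz<br><br># a consuming job would then fetch and load it<br>swift download tripleo-ci-artifacts all.tar.gz<br>gunzip -c all.tar.gz | docker load</div><div><br></div><div>As a back-of-the-envelope check on the transfer cost (the throughput figure is just an assumption): a ~2G tarball at a sustained 10MB/s across providers is roughly 3-4 minutes each way, which is the sort of number we'd want to weigh against whatever time the child jobs currently spend rebuilding.</div>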
</div></div>