<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Nov 17, 2017 at 4:43 AM, Steven Hardy <span dir="ltr"><<a href="mailto:shardy@redhat.com" target="_blank">shardy@redhat.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
In the ansible/kubernetes model, it could work like this:<br>
<br>
1. Ansible role makes a k8s API call creating a pod with multiple containers<br>
2. Pod starts a temporary container that runs puppet; config files are<br>
written out to a shared volume<br>
3. Service container starts, consuming config from the shared volume<br>
4. Optionally, run a temporary bootstrapping container inside the pod<br>
<br>
This sort of pattern is documented here:<br>
<br>
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="noreferrer" target="_blank">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a><br>
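<div><br></div>
<div>For illustration, a minimal sketch of this pod/shared-volume pattern using the kubernetes Python client, with the temporary config step modelled as an init container; the image names, commands, and mount paths are assumptions for the example, not an actual TripleO implementation:</div>
<pre>
from kubernetes import client, config

# Load credentials from ~/.kube/config (use load_incluster_config() when
# running inside the cluster, e.g. from an Ansible role's Python task).
config.load_kube_config()
core = client.CoreV1Api()

# Shared emptyDir volume: the init container writes config into it,
# the service container reads config from it.
config_volume = client.V1Volume(
    name="config-data",
    empty_dir=client.V1EmptyDirVolumeSource(),
)

# Step 2: temporary (init) container that runs puppet and writes config
# files to the shared volume. Image and manifest path are hypothetical.
puppet_init = client.V1Container(
    name="puppet-config",
    image="example/puppet-config:latest",
    command=["puppet", "apply", "/etc/puppet/manifests/service.pp"],
    volume_mounts=[client.V1VolumeMount(name="config-data",
                                        mount_path="/var/lib/config-data")],
)

# Step 3: long-running service container consuming the generated config.
service = client.V1Container(
    name="example-service",
    image="example/service:latest",
    volume_mounts=[client.V1VolumeMount(name="config-data",
                                        mount_path="/etc/example-service",
                                        read_only=True)],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-service"),
    spec=client.V1PodSpec(
        volumes=[config_volume],
        init_containers=[puppet_init],
        containers=[service],
        restart_policy="Always",
    ),
)

# Step 1: the API call an Ansible role (or any other client) would make.
# Step 4 (an optional bootstrapping container) could be added as another
# init container or a separate short-lived pod.
core.create_namespaced_pod(namespace="default", body=pod)
</pre>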
<br><div class="HOEnZb"><div class="h5"><br></div></div></blockquote><div><br></div><div>Regarding the use of the shared volume, I agree this is a nice iteration. We considered using it within Pike as well, but due to the hybrid nature of the deployment, and the desire to keep config files easily debuggable on the host itself, we ended up not going there.</div><div><br></div><div>In Queens, however, we are aiming for more or less full containerization, so we could consider the merits of this approach again. I'm just pointing out that I don't think Kubernetes is a requirement in order to proceed with some of this improvement.</div><div><br></div><div>Dan</div></div><br></div></div>