On Mon, 2019-01-28 at 11:18 -0500, Jay Pipes wrote:
On 01/28/2019 11:00 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
> Perhaps we should come up with an initial step of providing
> a common way of building images (so a user can clone a repo and do
> 'docker build .'), which will eliminate the obligation of having to
> deal with binaries, and then afterwards reconsider the ideal way of
> shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI that much, but I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build .
I have to admit I'm not super familiar with LOCI, but as far as I know, that's indeed what it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \
    --build-arg PROJECT=keystone \
    --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
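For anyone who wants to see the general shape of that pattern, here's a minimal multi-stage Dockerfile sketch of the idea. To be clear, this is not LOCI's actual Dockerfile; the base image, paths, and the keystone example are just assumptions for illustration:

# --- builder stage: compile wheels for the project and its dependencies ---
FROM ubuntu:18.04 AS builder
RUN apt-get update && apt-get install -y git python3-pip python3-dev python3-venv build-essential
RUN pip3 install wheel
RUN git clone https://git.openstack.org/openstack/keystone /tmp/keystone
RUN pip3 wheel --wheel-dir /tmp/wheels /tmp/keystone

# --- target stage: install only the pre-built wheels into a virtualenv ---
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python3-venv
COPY --from=builder /tmp/wheels /tmp/wheels
RUN python3 -m venv /var/lib/openstack \
    && /var/lib/openstack/bin/pip install --no-index --find-links=/tmp/wheels keystone \
    && rm -rf /tmp/wheels

The nice property of this kind of split is that the final image carries only the virtualenv and the installed wheels, not the compilers, headers, and git checkouts from the builder stage.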
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this.
Thanks Jay.
No problem. Also a good thing to keep in mind is that kolla-ansible is able to deploy LOCI images, AFAIK, instead of the "normal" Kolla images. I have not tried this myself, however, so perhaps someone with experience in this might chime in.
The LOCI images would have to conform to the Kolla API, which requires a few files like kolla_start to exist, but in principle it could work if that requirement was fulfilled.
On Mon, 2019-01-28 at 16:31 +0000, Sean Mooney wrote:
This is the kolla image api for reference:
https://docs.openstack.org/kolla/latest/admin/kolla_api.html
https://github.com/openstack/kolla/blob/master/doc/source/admin/kolla_api.rs...

All Kolla images share that external-facing API, so if you use LOCI to build an image and then inject the required API shim as a layer, it would work. You can also use the image manually the same way by defining the relevant env variables or mounting configs:

docker run -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    -e KOLLA_CONFIG_FILE=/config.json \
    -v /path/to/config.json:/config.json kolla-image

Of course you can bypass it and execute commands directly in the container too, e.g. just start nova-compute. The point was to define a common way to inject configuration, including what command to run, externally after the image was built, so that the images could be reused by different deployment tools like kolla-k8s, tripleo, or just a bunch of bash commands.

The workflow is the same: prepare a directory with a bunch of config files for the service, spawn the container with that directory bind-mounted into it, and set an env var pointing at the Kolla config.json, which specifies where the config should be copied, with what ownership/permissions, and what command to run.

I'm not sure if this is a good or a bad thing, but any tool that supports the Kolla image API should be able to use LOCI-built images if those images support it too.
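To make that workflow a bit more concrete, the config.json the env var points at looks roughly like the snippet below. The service name, paths, owner, and permissions are just illustrative values I picked; the kolla_api doc linked above is the authoritative reference for the exact fields:

{
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600"
        }
    ]
}

As Sean describes, the shim then copies each listed file into place with the given ownership/permissions and runs the declared command, which is what lets the same image be driven by kolla-ansible, tripleo, or a plain docker run.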
Best, -jay