[openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes
inc007 at gmail.com
Wed Apr 27 13:04:02 UTC 2016
On 26 April 2016 at 13:51, Tomasz Pa <ss7pro at gmail.com> wrote:
> Hey Steven,
> answers inline.
> On Mon, Apr 25, 2016 at 9:27 AM, Steven Dake (stdake) <stdake at cisco.com> wrote:
>> I disagree with your assertion. You are gaming your data to provide the
>> worst possible container (nova) because the RPMs pull in libvirt. Kolla
>> has no control over how Red Hat chooses to package RDO, and at this time
>> they choose to package libvirt as a dependency thereof. Obviously it
>> would be more optimal in a proper container system not to include libvirt
>> in the dependencies installed with Nova. If you really want that, use
>> from-source installs. Then you could shave 1 minute off your upgrade time
>> of a 64 node cluster.
> Look here: http://paste.openstack.org/show/495459/ . As you can see,
> there are no libvirt dependencies there; it's only the python-nova deps.
>> A DSL does not solve this problem unless the DSL contains every dependency
>> to install (from binary). I don't see this as maintainable.
> Agreed, being too detailed within the DSL can make maintenance a
> nightmare. I was thinking about some build automation which can resolve
> dependencies (e.g. repoquery --requires python-nova) and put each one
> into a separate layer. We would just need a basic DSL with the same
> complexity as we have now in the Dockerfiles, which would build
> Dockerfiles dynamically.
> Another approach could be building a dedicated image for each
> dependency and binding them together into a single image during build.
> I also have Alpine Linux in mind; together with Bazel this can
> make images really small.
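The "one dependency per layer" idea above could be sketched roughly as follows. This is a hypothetical illustration, not Kolla code: it assumes the dependency list has already been resolved (e.g. with `repoquery --requires --resolve python-nova` on a CentOS host) and just emits a Dockerfile with one `RUN` instruction, and therefore one layer, per package.

```python
# Hypothetical sketch of dynamically generating a Dockerfile with one
# image layer per resolved package dependency. The package names below
# are illustrative, not an actual resolved dependency set.
def generate_dockerfile(base_image, packages):
    """Emit Dockerfile text installing each package in its own layer."""
    lines = [f"FROM {base_image}"]
    for pkg in packages:
        # Each RUN instruction produces a separate, independently
        # cacheable image layer.
        lines.append(f"RUN yum install -y {pkg} && yum clean all")
    return "\n".join(lines) + "\n"

deps = ["python-oslo-config", "python-eventlet", "python-pbr"]
print(generate_dockerfile("centos:7", deps))
```

The appeal is cache sharing between images that install overlapping dependencies; the cost, as discussed below, is the sheer number of layers this produces.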
>> Just as a conclusion, deploying each dependency in a separate layer is an
>> absolutely terrible idea. Docker performance, at least, is negatively
>> affected by large layer counts, and aufs has a limit of 42 layers, so your
>> idea is not viable as presented.
> This limit was 127 back in 2013.
That's the hard limit, but it stops working properly before that, around
40-50 layers. We need to be careful about our layering. Even 127 layers
is not enough if you want to install every single dependency as a new
layer. RKT, with its image squashing, might help, but we don't run RKT.
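To make the arithmetic concrete: a hypothetical pre-build check (not part of Kolla) could estimate how many layers a generated Dockerfile would produce before building it, since a service like nova can easily pull in well over a hundred packages.

```python
# Hypothetical pre-build sanity check: estimate how many image layers a
# Dockerfile will produce, and flag when it exceeds a storage-driver
# limit (aufs historically capped around 127 layers, and misbehaved
# well before that). Treating only these instructions as layer-creating
# is a simplification.
LAYER_INSTRUCTIONS = {"FROM", "RUN", "COPY", "ADD"}

def estimate_layers(dockerfile_text, limit=127):
    """Return (layer_count, fits_within_limit) for a Dockerfile string."""
    count = 0
    for line in dockerfile_text.splitlines():
        words = line.split(None, 1)
        if words and words[0].upper() in LAYER_INSTRUCTIONS:
            count += 1
    return count, count <= limit

# One layer per package dependency blows the budget immediately:
df = "FROM centos:7\n" + "\n".join(
    f"RUN yum install -y pkg{i}" for i in range(130)
)
count, ok = estimate_layers(df)
print(count, ok)  # 131 False: over the old aufs limit
```

Which is exactly why Kolla groups package installs into a handful of `RUN` instructions instead of one per dependency.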
As far as Alpine Linux goes, how exactly would you install things like
Galera (we don't really need Galera for k8s, but we do for Ansible, and
we will not kill Ansible)? RabbitMQ? Ceph? Stuff that isn't packaged upstream?
We need reliable repos that are well maintained and publicly
available; can you provide that for Alpine? The CentOS community and
Canonical have dedicated people to package stuff, and they still
struggle (and do an amazing job, thanks guys!). Can the Alpine community
provide us the same level of maintenance?
Galera https://bugs.alpinelinux.org/issues/4646 is not available
Ceph https://bugs.alpinelinux.org/issues/4646 is not available (not even librbd)
That was 5 minutes of research.
Bottom line: we can't deploy Alpine because we don't have packages. We
can't create our own repo for it because we have neither the hardware
nor the human resources to support it; this is a big job.
Having thin images is a noble goal and we all want it. We
appreciate any input/help, but as you can see there are good (I think)
reasons why we did what we did.
> Tomasz Paszkowski
> SS7, Asterisk, SAN, Datacenter, Cloud Computing