[openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

Bogdan Dobrelya bdobreli at redhat.com
Wed Nov 28 17:29:41 UTC 2018


On 11/28/18 6:02 PM, Jiří Stránský wrote:
> <snip>
> 
>>
>> Reiterating again on previous points:
>>
>> - I'd be fine removing systemd. But let's do it properly and not via
>> 'rpm -ev --nodeps'.
>> - Puppet and Ruby *are* required for configuration. We can certainly
>> put them in a separate container outside of the runtime service
>> containers, but doing so would actually cost you much more
>> space/bandwidth for each service container. As both of these have to
>> get downloaded to each node anyway in order to generate config files
>> with our current mechanisms, I'm not sure this buys you anything.
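
(To illustrate the 'rpm -ev --nodeps' point above -- a rough sketch;
dnf's exact behaviour here is an assumption, it may refuse outright or
list the dependents it would drag out:)

    # improper: rips systemd out and leaves the rpm dependency
    # graph inconsistent
    rpm -ev --nodeps systemd

    # proper: let the resolver decide; dnf treats systemd as a
    # protected package and will show everything that depends on it
    dnf remove systemd
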
> 
> +1. I was actually under the impression that we concluded yesterday on 
> IRC that this is the only thing that makes sense to seriously consider. 
> But even then it's not a win-win -- we'd gain some security by leaner 
> production images, but pay for it with space+bandwidth by duplicating 
> image content (IOW we can help achieve one of the goals we had in mind 
> by worsening the situation w/r/t the other).
> 
> Personally I'm not sold yet, but it's something that I'd consider if we 
> got measurements of how much more space/bandwidth usage this would 
> consume, and if we got some further details/examples about how serious 
> the security concerns are if we leave config mgmt tools in runtime images.
> 
> IIRC the other options (that were brought forward so far) were already 
> dismissed in yesterday's IRC discussion and on the reviews: bin/lib bind 
> mounting as too hacky and fragile, and nsenter as not really solving the 
> problem (it allows us to switch to a different set of bins/libs, but it 
> does not allow merging the bins/libs of two containers into a single 
> context).
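
(On the nsenter point, a minimal sketch, with a hypothetical
config-tools container name -- entering its mount namespace *switches*
the filesystem view rather than merging the two:)

    # PID of the hypothetical config-tools container
    CONFIG_PID=$(podman inspect --format '{{.State.Pid}}' config-tools)

    # this switches into its mount/pid namespaces; the bins/libs of
    # the service container we came from are no longer visible there
    # (site.pp is a placeholder manifest)
    nsenter --target "$CONFIG_PID" --mount --pid -- puppet apply site.pp
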
> 
>>
>> We are going in circles here I think....
> 
> +1. I think too much of the discussion focuses on "why it's bad to have 
> config tools in runtime images", but IMO we all sorta agree that it 
> would be better not to have them there, if it came at no cost.
> 
> I think to move forward, it would be interesting to know: if we do this 
> (I'll borrow Dan's drawing):
> 
> |base container| --> |service container| --> |service container w/
> Puppet installed|
> 
> How much more space and bandwidth would this consume per node (e.g. 
> separately per controller, per compute). This could help with decision 
> making.

As I've already evaluated in the related bug, the sizes are:

puppet-* modules and manifests: ~16MB
puppet with dependencies: ~61MB
the seemingly largest dependency, systemd, with its dependencies: ~190MB

That would be the extra layer size for each of the container images to 
be downloaded/fetched into registries.
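
(A way to verify those numbers per image, with hypothetical image
tags; docker history prints the size of each layer:)

    # compare layer sizes of a runtime-only vs. puppet-carrying image
    docker history nova-compute:runtime
    docker history nova-compute:with-puppet

    # or just compare the totals
    docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}'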

Given that we should decouple systemd from all/some of those 
dependencies (an example topic for RDO [0]), that could save ~190MB. 
But it seems we cannot break the tight coupling of puppet and systemd: 
the former heavily relies on the latter, and changing the packaging 
like that would very likely affect baremetal deployments where puppet 
and systemd co-operate.
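
(The coupling is easy to confirm with stock tooling; the actual output
depends on the configured repos:)

    # direct requirements of the puppet package
    rpm -qR puppet | sort -u

    # packages resolved from those requirements that involve systemd
    dnf repoquery --requires --resolve puppet | grep -i systemd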

Long story short, we cannot shoot both rabbits with a single shot, not 
with puppet :) Maybe we could with ansible replacing puppet fully...
So splitting config and runtime images is the only choice yet to address 
the raised security concerns. And let's forget about edge cases for now.
Tossing around a couple of extra bytes over 40,000 WAN-distributed 
computes isn't going to be our biggest problem for sure.
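
To make the split concrete, a rough sketch of the layering with
buildah (image and package names are illustrative only):

    # start from an existing runtime-only service image (no puppet)
    ctr=$(buildah from nova-compute:runtime)

    # add the ~77MB puppet layer on top to produce the config image
    buildah run "$ctr" -- dnf install -y puppet puppet-nova
    buildah commit "$ctr" nova-compute:config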

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction

> 
>>
>> Dan
>>
> 
> Thanks
> 
> Jirka
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


