[openstack-dev] Disk space requirement - any way to lower it a little?
cjeanner at redhat.com
Fri Jul 20 11:48:53 UTC 2018
On 07/20/2018 09:49 AM, Cédric Jeanneret wrote:
> On 07/19/2018 06:55 PM, Paul Belanger wrote:
>> On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:
>>> While trying to get a new validation¹ into the undercloud preflight
>>> checks, I hit a (not so) unexpected issue with the CI:
>>> it doesn't provide flavors with the minimal requirements, at least
>>> regarding the disk space.
>>> A quick-fix is to disable the validations in the CI - Wes has already
>>> pushed a patch for that in the upstream CI:
>>> We can consider this as a quick'n'temporary fix².
>>> The issue is on the RDO CI: apparently, they provide instances with
>>> "only" 55G of free space, making the checks fail:
>>> So, the question is: would it be possible to lower the requirement to,
>>> let's say, 50G? Where does that 60G³ come from?
>>> Thanks for your help/feedback.
>>> ¹ https://review.openstack.org/#/c/582917/
>>> ² as you might know, there's a BP for a unified validation framework,
>>> and it will allow injecting configuration into CI envs in order to
>>> lower the requirements if necessary:
>> Keep in mind, upstream we don't really have control over the partitioning of
>> nodes; in some cases it is a single partition, in others multiple. I'd suggest looking more at:
> After some checks on my locally deployed containerized undercloud (hence,
> Rocky) without real activity, here's what I could get:
> - most data is located in /var - this explains the current check.
> Going a bit deeper, here are the "actually used" directories in /var/lib:
> 20K alternatives
> 36K certmonger
> 4.0K chrony
> 1.2G config-data
> 4.0K dhclient
> 6.0G docker
> 28K docker-config-scripts
> 92K docker-container-startup-configs.json
> 44K docker-puppet
> 592K heat-config
> 832K ironic
> 4.0K ironic-inspector
> 236K kolla
> 4.0K logrotate
> 286M mysql
> 48K neutron
> 4.0K ntp
> 4.0K postfix
> 872K puppet
> 3.8M rabbitmq
> 59M rpm
> 4.0K rsyslog
> 64K systemd
> 20K tripleo
> 236K tripleo-config
> 9.8M yum
> 7.5G total
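A listing like the one above can be reproduced with `du` - the exact invocation below is an assumption on my side, not necessarily the one used:

```shell
# Per-entry sizes under a directory, human-readable, smallest first.
# The human-readable '-h' modes of du and sort are GNU coreutils extensions.
dir=/var/lib
du -sh "$dir"/* 2>/dev/null | sort -h   # per-entry breakdown
du -sh "$dir" 2>/dev/null || true       # grand total (ignore unreadable subdirs)
```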
> Most "default installer" partition schemas don't go further than
> putting /var, /tmp, /home and /usr on dedicated volumes - of course,
> end users can choose to ignore that and provide a custom schema.
> That said, we can get the "used" paths. In addition to /var/lib, there's
> obviously /usr.
> We might want to:
> - loop over known locations
> - check if they are on dedicated mount points
> - check the available disk space on those mount points.
> An interesting thing in bash:
> df /var/lib/docker
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sda1 104846316 10188828 94657488 10% /
> This allows us to get:
> - the actual volume
> - the free space on that volume.
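The loop described above could be sketched along these lines - the path list is illustrative only, and the parsing relies on the POSIX output format guaranteed by `df -P`:

```shell
#!/bin/sh
# For each known location, report the mount point backing it and the
# free space left there. The path list is illustrative, not exhaustive.
for path in /var/lib/docker /var/lib/config-data /usr /var/log; do
    [ -e "$path" ] || continue
    # 'df -P' guarantees POSIX single-line-per-filesystem output;
    # field 4 is available 1K-blocks, field 6 is the mount point.
    set -- $(df -P "$path" | tail -n 1)
    echo "$path -> mount point $6, ${4}K available"
done
```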
> More than that, we might also try to figure out some patterns. For
> instance, "docker" seems to be a pretty good candidate for space, as it
> will hold the images and container data. This is probably even the
> biggest consumer, at least on the undercloud - along with the logs (/var/log).
> We might do a check ensuring we can, at least, DEPLOY the app. This
> would require far less than the current 60G, and with proper documentation
> announcing that, we get a functional test aimed at its actual purpose:
> ensuring we can deploy (asking for, let's say, 10G in /var/lib/docker, 5G
> in /var/lib/config-data, 5G in /usr, 1G in /var/log) and, later, upgrade
> (requiring the same amount of *free* space).
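A minimal sketch of such a per-location check, assuming the figures above (which are illustrative, not agreed-upon requirements):

```shell
#!/bin/sh
# Validate free space per location; the thresholds are the illustrative
# figures from this mail, not official requirements.
rc=0
for entry in /var/lib/docker:10 /var/lib/config-data:5 /usr:5 /var/log:1; do
    path=${entry%:*}       # everything before the last ':'
    need_gb=${entry#*:}    # everything after the first ':'
    [ -e "$path" ] || { echo "skip: $path not present"; continue; }
    avail_kb=$(df -P "$path" | tail -n 1 | awk '{print $4}')
    need_kb=$(( need_gb * 1024 * 1024 ))
    if [ "$avail_kb" -lt "$need_kb" ]; then
        echo "FAIL: $path has less than ${need_gb}G free" >&2
        rc=1
    fi
done
# A real preflight validation would 'exit $rc' here to fail the deploy.
echo "disk space check rc=$rc"
```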
> That would require some changes in the validation check, of course. But
> at least we might get pretty good coverage, while allowing it to run
> smoothly in the CI.
> But, as said: proper documentation should be provided, and the "60G
> minimum required" should be rephrased in order to point at the locations
> needing space (with the appropriate warning about non-exhaustiveness).
> Would that better suit the actual needs, and allow a proper disk
> space check/validation?
Following those thoughts, here's a proposal, to be discussed and augmented.
This should allow a really nice space check and, in addition,
let ops create a layout suited for the undercloud if they want -
having dedicated volumes for specific uses, enabling smart
monitoring of disk usage per resource, is always good.
It also somewhat sorts out the CI issue, provided we
update the doc to reflect the "new" reality of this validation and
expose the "real" needs of the undercloud regarding disk space. An
operator will also be more willing to give space if they know why.
What do you think?
>> As for downstream RDO, the same is going to apply once we start adding more
>> cloud providers. I would check whether you actually need that much space for
>> deployments, and maybe try to mock the testing of that logic.
>> - Paul