[zun] Disk quota feature

Hongbin Lu kira034 at 163.com
Wed Jul 21 08:29:55 UTC 2021


Zun relies on Docker to provide the disk quota feature. In particular, it depends on the storage driver and the backing filesystem you configured. Below is a quote from the Docker documentation [1]:


"This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For the devicemapper, btrfs, windowsfilter and zfs graph drivers, user cannot pass a size less than the Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing fs is xfs and mounted with the pquota mount option. Under these conditions, user can pass any size less than the backing fs size."


You can configure Docker as suggested above, then restart the zun-compute process on the compute host. Let me know if it still doesn't work for you.
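From the df output below, /var is an LVM volume that is not xfs, so the error is expected. A rough sketch of the reconfiguration, assuming the device name /dev/mapper/vg0-lv--var from that output and that the volume can be backed up and recreated (this is destructive, not a definitive procedure):

```shell
# WARNING: reformatting destroys data on the volume; back up /var first.
systemctl stop zun-compute docker

# Recreate the backing volume as xfs (device name taken from the df output below).
mkfs.xfs -f /dev/mapper/vg0-lv--var

# Remount with project quotas enabled; persist it in /etc/fstab, e.g.:
#   /dev/mapper/vg0-lv--var  /var  xfs  defaults,pquota  0 0
mount -o pquota /dev/mapper/vg0-lv--var /var

systemctl start docker zun-compute
```

After that, per-container size limits (Docker's `--storage-opt size=...`, which Zun's disk parameter maps onto) should be accepted.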



[1] https://docs.docker.com/engine/reference/commandline/run/#set-storage-driver-options-per-container




Best regards,
Hongbin

At 2021-07-17 02:34:48, "Cristina Mayo Sarmiento" <admin at gsic.uva.es> wrote:

Hi,


I've installed OpenStack Wallaby on Ubuntu 20.04.2. I'm trying to specify quotas for Zun containers, but the zun-compute logs show this issue:
      > zun.common.exception.Invalid: Your host does not support disk quota feature.


Is there any way to solve this?


Currently, Zun containers can "see" the compute node's device mounted on /var in its entirety:
node at compute:~$df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      126G     0  126G   0% /dev
tmpfs                      26G  2.4M   26G   1% /run
/dev/mapper/vg0-lv--root   98G   12G   82G  13% /
tmpfs                     126G     0  126G   0% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     126G     0  126G   0% /sys/fs/cgroup
/dev/sda3                 976M  203M  707M  23% /boot
/dev/sda1                 511M  7.9M  504M   2% /boot/efi
/dev/mapper/vg0-lv--var   2.9T  965M  2.8T   1% /var
/dev/loop0                 56M   56M     0 100% /snap/core18/2074
/dev/loop2                 68M   68M     0 100% /snap/lxd/20326
/dev/loop1                 56M   56M     0 100% /snap/core18/1944
/dev/loop3                 32M   32M     0 100% /snap/snapd/10707
/dev/loop5                 33M   33M     0 100% /snap/snapd/12398
/dev/loop4                 70M   70M     0 100% /snap/lxd/19188
tmpfs                      26G     0   26G   0% /run/user/1000
overlay                   2.9T  965M  2.8T   1% /var/lib/docker/overlay2/0f88afac44d7a661c31d6b7eea85240a2ed8d84aec057f7f761be4a544d1e089/merged
shm                        64M     0   64M   0% /var/lib/docker/containers/a4d60834a7ad753bc2971a446dd65c8acabf4d6df074eec846af0ddb2efc84c9/mounts/shm
test at container:~$df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                   2.9T    964.3M      2.7T   0% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                   125.9G         0    125.9G   0% /sys/fs/cgroup
/dev/mapper/vg0-lv--var
                          2.9T    964.3M      2.7T   0% /etc/resolv.conf
/dev/mapper/vg0-lv--var
                          2.9T    964.3M      2.7T   0% /etc/hostname
/dev/mapper/vg0-lv--var
                          2.9T    964.3M      2.7T   0% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                   125.9G         0    125.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                   125.9G         0    125.9G   0% /proc/scsi
tmpfs                   125.9G         0    125.9G   0% /sys/firmware


Thanks in advance!