[all][gate] ceph jobs failing with NoValidHost
melanie witt
melwittt at gmail.com
Fri Jul 24 20:17:39 UTC 2020
On 7/24/20 12:51, melanie witt wrote:
> Hey all,
>
> The nova-ceph-multistore job (devstack-plugin-ceph-tempest-py3 + tweaks
> to make it run with multiple glance stores) is failing at around an 80%
> rate as of today. We are tracking the work in this bug:
>
> https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1888895
>
> The TL;DR on this is that the ceph bluestore backend when backed by a
> file will create the file if it doesn't already exist and will create it
> with a default size. Prior to today, we were pulling ceph version
> 14.2.10 which defaults the file size to 100G. Then today, we started
> pulling ceph version 14.2.2 which defaults the file size to 10G. That
> isn't enough space, so placement returns no allocation candidates and
> scheduling fails with NoValidHost.
>
> We don't know yet what caused us to start pulling an older version tag
> for ceph.
>
> We are currently trying out a WIP fix in the devstack-plugin-ceph repo
> to configure bluestore_block_size to a reasonable value instead
> of relying on the default:
>
> https://review.opendev.org/742961
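
For reference, that fix amounts to pinning the option in ceph.conf. A minimal sketch (the 100G value mirrors the old 14.2.10 default and is an assumption here, not necessarily the exact value in the patch under review):

```ini
# Sketch: pin the bluestore backing-file size explicitly instead of
# relying on the version-dependent default.
# bluestore_block_size takes a size in bytes; 107374182400 = 100 GiB,
# the default we previously got from ceph 14.2.10.
[global]
bluestore_block_size = 107374182400
```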
>
> We'll keep you updated on the progress as we work on it.
Updating the subject to [all][gate] because it's not only the
nova-ceph-multistore job that's affected: all
devstack-plugin-ceph-tempest-py3 jobs are. We found NoValidHost failures
on patches proposed to openstack/glance and openstack/tempest as well.
The fix should be the same for all of them (a patch in
devstack-plugin-ceph), so once we get that working well, the ceph jobs
should be fixed across the gate.
-melanie
More information about the openstack-discuss mailing list