[cinder] Cinder & Ceph Integration Error: No Valid Backend

SSelf at performair.com
Thu Jan 7 23:28:10 UTC 2021


The overall issue has been resolved.

There were two major causes:

The Ceph keyring(s) were misplaced (they were not under /etc/ceph/, where librados looks by default)
The 'openstack-cinder-volume' service was not started/enabled
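For anyone hitting the same error, the two fixes above can be sketched roughly as follows. This assumes the default cluster name "ceph", a Cinder Ceph user named "cinder", and the RDO/systemd unit name; all of those are site-specific, so adjust for your deployment:

```shell
# 1. librados searches /etc/ceph/ for keyrings by default, so the cinder
#    keyring must live there (filename follows the client.<user> convention).
KEYRING=ceph.client.cinder.keyring
if [ ! -f "/etc/ceph/$KEYRING" ]; then
    # Copy it over from a Ceph admin/monitor node, e.g.:
    #   scp ceph-mon:/etc/ceph/$KEYRING /etc/ceph/
    echo "missing /etc/ceph/$KEYRING"
fi

# 2. Make sure the volume service is running and comes back after reboot.
if command -v systemctl >/dev/null 2>&1; then
    systemctl enable --now openstack-cinder-volume.service || echo "enable failed (need root?)"
fi
```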

Thank you,

Stephen Self 
IT Manager 

sself at performair.com
463 South Hamilton Court 
Gilbert, Arizona 85233 
Phone: (480) 610-3500 
Fax: (480) 610-3501 


-----Original Message-----
From: SSelf at performair.com [mailto:SSelf at performair.com] 
Sent: Thursday, January 7, 2021 2:21 PM
To: ceph-users at ceph.io; openstack-discuss at lists.openstack.org
Subject: [ceph-users] [cinder] Cinder & Ceph Integration Error: No Valid Backend


We're having problems with our OpenStack/Ceph integration. The versions we're using are Ussuri (OpenStack) and Nautilus (Ceph).

When trying to create a volume, the volume record is created, but its status immediately goes to 'ERROR' and stays there.

This appears to be the most relevant line from the Cinder scheduler.log:

2021-01-07 14:00:38.473 140686 ERROR cinder.scheduler.flows.create_volume [req-f86556b5-cb2e-4b2d-b556-ed07e632289d 824c26c133b34d8b8e84a7acabbe6f91 a983323b5ffc47e18660794cd9344869 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
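"No weighed backends available" from the scheduler generally means no cinder-volume service has reported in as up for any enabled backend. Assuming the openstack CLI is installed on the controller (an assumption, not stated above), a quick way to check:

```shell
# Each backend should appear as a cinder-volume row with State "up".
# If the ceph backend row is missing or "down", the scheduler has nothing
# to weigh, which produces exactly this error.
if command -v openstack >/dev/null 2>&1; then
    openstack volume service list
else
    echo "openstack CLI not installed here"
fi
```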

Here is the 'cinder.conf' from our Controller Node:

# define own IP address
my_ip =
log_dir = /var/log/cinder
state_path = /var/lib/cinder
auth_strategy = keystone
enabled_backends = ceph
glance_api_version = 2
debug = true

# RabbitMQ connection info
transport_url = rabbit://openstack:<password>@
enable_v3_api = True

# MariaDB connection info
connection = mysql+pymysql://cinder:<password>@

# Keystone auth info
www_authenticate_uri =
auth_url =
memcached_servers =
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = <password>

lock_path = $state_path/tmp

volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = rbd_os_volumes
rbd_ceph_conf = /etc/ceph/463/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_exclusive_cinder_pool = true

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/300/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = rbd_os_backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
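Given the rbd_* settings above, one way to narrow this down is to talk to the cluster by hand with the same conf file and user that Cinder uses; if these commands fail, cinder-volume cannot initialize the backend either. This sketch assumes the ceph/rbd CLIs are installed on the node running cinder-volume:

```shell
# Connect as the "cinder" user with the non-standard conf path from
# cinder.conf; a keyring that librados cannot find will make this fail
# with an authentication error.
CONF=/etc/ceph/463/ceph.conf
if command -v ceph >/dev/null 2>&1; then
    ceph --conf "$CONF" --id cinder -s                 # cluster status
    rbd --conf "$CONF" --id cinder ls rbd_os_volumes   # list the volumes pool
else
    echo "ceph CLI not installed here"
fi
```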

Does anyone have any ideas as to what is going wrong?

Thank you,

Stephen Self 
IT Manager 
Perform Air International
sself at performair.com
