Hi Gorka, let me show you the Cinder config:

[ceph-rbd]
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
backend_host = rbd:volumes
rbd_pool = cinder.volumes
volume_backend_name = ceph-rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver

So, setting rbd_exclusive_cinder_pool=True tells Cinder the pool is used just for its volumes? But the log is still saying there is no connection to the backend_host.

Regards. 


On 12 May 2021, at 11:49, Gorka Eguileor <geguileo@redhat.com> wrote:

On 12/05, ManuParra wrote:
Thanks, I have restarted the service, but after a few minutes cinder-volume goes down again when I check it with the command "openstack volume service list".
The host/service that contains the cinder volumes is rbd:volumes@ceph-rbd, which is RBD in Ceph, so the problem does not seem to come from Cinder but rather from Ceph or from the RBD pools that store the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings.
The error I have is that Cinder can't connect to rbd:volumes@ceph-rbd. Any further suggestions? Thanks in advance.
Kind regards.


Hi,

You are most likely using an older release, have a high number of cinder
RBD volumes, and have not changed configuration option
"rbd_exclusive_cinder_pool" from its default "false" value.

Please add to your driver's section in cinder.conf the following:

rbd_exclusive_cinder_pool = true


And restart the service.
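Put together, the change and the verification could look like the sketch below. The section name "[ceph-rbd]" and the systemd unit name are taken from earlier in this thread; crudini is just one way to edit the ini file (editing by hand works equally well), and the unit name may differ per distribution:

```shell
# Enable the option in the driver's section of cinder.conf ...
crudini --set /etc/cinder/cinder.conf ceph-rbd rbd_exclusive_cinder_pool true

# ... restart the volume service ...
systemctl restart openstack-cinder-volume.service

# ... and confirm cinder-volume reports "up" after a minute or two.
openstack volume service list --service cinder-volume
```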

Cheers,
Gorka.

On 11 May 2021, at 22:30, Eugen Block <eblock@nde.ag> wrote:

Hi,

so restart the volume service ;-)

systemctl restart openstack-cinder-volume.service


Zitat von ManuParra <mparra@iaa.es>:

Dear OpenStack community,

I encountered a problem a few days ago: when creating new volumes with

"openstack volume create --size 20 testmv"

the volume creation status shows an error. The error log detail indicates:

"Schedule allocate volume: Could not find any available weighted backend".

Then in the Cinder log I find:

"volume service is down - host: rbd:volumes@ceph-rbd”.

I check the state of the services with:

"openstack volume service list"

and indeed I see:


| cinder-volume | rbd:volumes@ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 |

It has been down since 2021-04-29!

I have checked Ceph (monitors, managers, OSDs, etc.) and there are no problems with the Ceph backend; everything is apparently working.

This happened after an uncontrolled outage. So my question is: how do I restart only cinder-volume? (I also have cinder-backup and cinder-scheduler, but they are OK.)

Thank you very much in advance. Regards.