Restart cinder-volume with Ceph RBD
ManuParra
mparra at iaa.es
Tue May 11 22:00:19 UTC 2021
Thanks, I have restarted the service, but after a few minutes the cinder-volume service goes down again when I check it with the command openstack volume service list.
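In case it helps, these are the checks I am running (service name taken from your reply; it may differ on other deployments):

openstack volume service list --service cinder-volume
systemctl status openstack-cinder-volume.service
journalctl -u openstack-cinder-volume.service --since "10 minutes ago"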
The host/service that backs the volumes is rbd:volumes at ceph-rbd, that is, RBD in Ceph, so the problem does not seem to come from Cinder itself but rather from Ceph or from the RBD pools that store the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings.
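For completeness, these are the Ceph-side checks I ran (assuming the backing pool is literally named volumes, as the backend name suggests; the pool name in my cinder.conf may differ):

ceph -s
ceph health detail
ceph osd pool stats volumes
ceph df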
The error I have is that Cinder cannot connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance.
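One more thing I plan to verify is whether the host running cinder-volume can still reach the pool with the same credentials Cinder uses, roughly along these lines (assuming rbd_user = cinder and rbd_pool = volumes in the [ceph-rbd] backend section; the actual names in my cinder.conf may differ):

ceph --id cinder -s
rbd --id cinder -p volumes ls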
Kind regards.
> On 11 May 2021, at 22:30, Eugen Block <eblock at nde.ag> wrote:
>
> Hi,
>
> so restart the volume service;-)
>
> systemctl restart openstack-cinder-volume.service
>
>
> Zitat von ManuParra <mparra at iaa.es>:
>
>> Dear OpenStack community,
>>
>> I ran into a problem a few days ago. When creating new volumes with:
>>
>> "openstack volume create --size 20 testmv"
>>
>> the volume creation ends in an error state. If I look at the error detail, it indicates:
>>
>> "Schedule allocate volume: Could not find any available weighted backend".
>>
>> Then I check the Cinder log, and indeed it indicates:
>>
>> "volume service is down - host: rbd:volumes at ceph-rbd”.
>>
>> I check with:
>>
>> "openstack volume service list” in which state are the services and I see that indeed this happens:
>>
>>
>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 |
>>
>> And it has been down since 2021-04-29!
>>
>> I have checked Ceph (monitors, managers, OSDs, etc.) and there are no problems with the Ceph backend; everything is apparently working.
>>
>> This happened after an uncontrolled outage. So my question is: how do I restart only cinder-volume? (I also have cinder-backup and cinder-scheduler, but they are fine.)
>>
>> Thank you very much in advance. Regards.
>