Restart cinder-volume with Ceph RBD

ManuParra mparra at iaa.es
Thu May 13 06:30:38 UTC 2021


Hi Gorka again. Yes, the first step is to find out why Cinder cannot connect to that host (Ceph itself is set up for HA), so that is the way to proceed. I mention this because the deployment has used that hostname since it was first set up, and there was never a problem before.

As for the errors, the strangest thing is that in Monasca I have not found any ERROR-level log entries, only a warning, "volume service is down (host: rbd:volumes@ceph-rbd)", and INFO messages, which is even stranger.

Regards.
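Since Monasca only surfaces a warning, one quick way to watch the service state directly is to filter the CLI output. A minimal sketch follows; the sample data is inlined (mimicking `openstack volume service list -f value -c Binary -c Host -c State`) so the filter itself can be run anywhere, and in real use you would pipe the live command output into the same awk filter:

```shell
# Minimal sketch: flag any Cinder service reported "down".
# The sample mimics: openstack volume service list -f value -c Binary -c Host -c State
sample='cinder-scheduler controller1 up
cinder-volume rbd:volumes@ceph-rbd down
cinder-backup controller1 up'

# Print a short alert line for every service whose State column is "down"
printf '%s\n' "$sample" | awk '$3 == "down" { print $1, "on", $2, "is down" }'
```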

> On 12 May 2021, at 23:34, Gorka Eguileor <geguileo at redhat.com> wrote:
> 
> On 12/05, ManuParra wrote:
>> Hi Gorka, let me show the cinder config:
>> 
>> [ceph-rbd]
>> rbd_ceph_conf = /etc/ceph/ceph.conf
>> rbd_user = cinder
>> backend_host = rbd:volumes
>> rbd_pool = cinder.volumes
>> volume_backend_name = ceph-rbd
>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>> 
>> So with rbd_exclusive_cinder_pool=True the pool is treated as used exclusively by Cinder volumes? But the log says it cannot connect to the backend_host.
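For reference, a hedged sketch of how that [ceph-rbd] section might look with a resolvable hostname and the exclusive-pool optimization; "cinder-host-1" is a placeholder, not a value from this thread:

```ini
[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_pool = cinder.volumes
# A real, resolvable hostname instead of the "rbd:volumes" pseudo-name
# (placeholder value, adjust to your deployment)
backend_host = cinder-host-1
# Skip per-volume stats queries; the pool is used only by Cinder
rbd_exclusive_cinder_pool = true
```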
> 
> Hi,
> 
> Your backend_host is not a valid hostname; please set a proper
> hostname in that configuration option.
> 
> Then the next thing you need to have is the cinder-volume service
> running correctly before making any requests.
> 
> I would try adding rbd_exclusive_cinder_pool=true then tailing the
> volume logs, and restarting the service.
> 
> See if the logs show any ERROR level entries.
> 
> I would also check the service-list output right after the service is
> restarted; if it's up, I would check it again after 2 minutes.
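The check-then-recheck step above can be sketched as a small polling loop. Here `check_state` is a hypothetical stand-in (not from this thread) for the real query, which in practice might be `openstack volume service list -f value -c Binary -c State | awk '$1=="cinder-volume"{print $2}'`:

```shell
# Hedged sketch of "check right after restart, then check again later".
check_state() {
  # Placeholder result so the sketch is self-contained; replace with the
  # real openstack CLI query in actual use.
  echo "up"
}

wait_for_up() {
  local tries=$1 i
  for ((i = 0; i < tries; i++)); do
    if [ "$(check_state)" = "up" ]; then
      echo "cinder-volume is up"
      return 0
    fi
    sleep 1   # in real use a longer interval, e.g. 120 seconds
  done
  echo "cinder-volume still down after $tries checks"
  return 1
}

wait_for_up 3
```

If the service reports up immediately after a restart but drops again within minutes, that pattern points at the driver's periodic stats reporting timing out, which is exactly what rbd_exclusive_cinder_pool is meant to relieve.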
> 
> Cheers,
> Gorka.
> 
> 
>> 
>> Regards.
>> 
>> 
>>> On 12 May 2021, at 11:49, Gorka Eguileor <geguileo at redhat.com> wrote:
>>> 
>>> On 12/05, ManuParra wrote:
>>>> Thanks. I have restarted the service, but after a few minutes the cinder-volume service goes down again when I check it with the command openstack volume service list.
>>>> The host/service that contains the cinder volumes is rbd:volumes@ceph-rbd, which is RBD in Ceph, so the problem does not seem to come from Cinder but rather from Ceph or from the RBD (Ceph) pools that store the volumes. I have checked Ceph and the status of everything is correct, with no errors or warnings.
>>>> The error I have is that Cinder can't connect to rbd:volumes@ceph-rbd. Any further suggestions? Thanks in advance.
>>>> Kind regards.
>>>> 
>>> 
>>> Hi,
>>> 
>>> You are most likely using an older release, have a high number of cinder
>>> RBD volumes, and have not changed configuration option
>>> "rbd_exclusive_cinder_pool" from its default "false" value.
>>> 
>>> Please add to your driver's section in cinder.conf the following:
>>> 
>>> rbd_exclusive_cinder_pool = true
>>> 
>>> 
>>> And restart the service.
>>> 
>>> Cheers,
>>> Gorka.
>>> 
>>>>> On 11 May 2021, at 22:30, Eugen Block <eblock at nde.ag> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> so restart the volume service ;-)
>>>>> 
>>>>> systemctl restart openstack-cinder-volume.service
>>>>> 
>>>>> 
>>>>> Zitat von ManuParra <mparra at iaa.es>:
>>>>> 
>>>>>> Dear OpenStack community,
>>>>>> 
>>>>>> I ran into a problem a few days ago: when creating new volumes with
>>>>>> 
>>>>>> "openstack volume create --size 20 testmv"
>>>>>> 
>>>>>> the volume creation status shows an error. If I go to the error log detail, it indicates:
>>>>>> 
>>>>>> "Schedule allocate volume: Could not find any available weighted backend".
>>>>>> 
>>>>>> Indeed then I go to the cinder log and it indicates:
>>>>>> 
>>>>>> "volume service is down - host: rbd:volumes@ceph-rbd".
>>>>>> 
>>>>>> I check with:
>>>>>> 
>>>>>> "openstack volume service list" to see the state of the services, and indeed this is what I find:
>>>>>> 
>>>>>> 
>>>>>> | cinder-volume | rbd:volumes@ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 |
>>>>>> 
>>>>>> And it has been down since 2021-04-29!
>>>>>> 
>>>>>> I have checked Ceph (monitors, managers, OSDs, etc.) and there are no problems with the Ceph backend; everything is apparently working.
>>>>>> 
>>>>>> This happened after an uncontrolled outage. So my question is: how do I restart only cinder-volume? (I also have cinder-backup and cinder-scheduler, but they are OK.)
>>>>>> 
>>>>>> Thank you very much in advance. Regards.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
> 
