Restart cinder-volume with Ceph RBD
ManuParra
mparra at iaa.es
Wed May 12 21:12:29 UTC 2021
Hi Laurent, I enabled Debug=True for cinder-volume and cinder-scheduler, and I now see the following in the debug log:
DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431
DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431
DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431
Every time a new volume is requested, cinder-volume connects to the backend, which is a Ceph RBD pool.
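For reference, enabling debug is just this in cinder.conf (the path /etc/cinder/cinder.conf is the usual default, an assumption on my side), followed by restarting the affected services:

[DEFAULT]
debug = True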
I have restarted all Cinder services on my three controller/monitor nodes and also restarted all Ceph daemons, but I still see the following when running:
openstack volume service list
+------------------+----------------------+------+---------+-------+----------------------------+
| Binary           | Host                 | Zone | Status  | State | Updated At                 |
+------------------+----------------------+------+---------+-------+----------------------------+
| cinder-scheduler | spsrc-contr-1        | nova | enabled | up    | 2021-05-11T10:06:39.000000 |
| cinder-scheduler | spsrc-contr-2        | nova | enabled | up    | 2021-05-11T10:06:47.000000 |
| cinder-scheduler | spsrc-contr-3        | nova | enabled | up    | 2021-05-11T10:06:39.000000 |
| cinder-volume    | rbd:volumes@ceph-rbd | nova | enabled | down  | 2021-05-11T10:48:42.000000 |
| cinder-backup    | spsrc-mon-2          | nova | enabled | up    | 2021-05-11T10:06:47.000000 |
| cinder-backup    | spsrc-mon-1          | nova | enabled | up    | 2021-05-11T10:06:44.000000 |
| cinder-backup    | spsrc-mon-3          | nova | enabled | up    | 2021-05-11T10:06:47.000000 |
+------------------+----------------------+------+---------+-------+----------------------------+
So cinder-volume is down, and I cannot create new volumes to attach to a VM.
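The debug output above comes from the cinder-volume log, which I am following like this (a sketch; the path is the non-containerized default and may differ on other deployments, e.g. under /var/log/kolla/cinder/ with kolla):

tail -f /var/log/cinder/cinder-volume.log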
Kind regards.
> On 12 May 2021, at 03:43, DHilsbos at performair.com wrote:
>
> Is this a new cluster, or one that has been running for a while?
>
> Did you just setup integration with Ceph?
>
> This part: "rbd:volumes@ceph-rbd" doesn't look right to me. For me (Victoria / Nautilus) this looks like: <cinder-volume-host>@<name>.
>
> <name> is configured in cinder.conf with a [<name>] section, and enabled_backends=<name> in the [DEFAULT] section.
> <cinder-volume-host> is something that resolves to the host running openstack-cinder-volume.service.
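> For illustration, a minimal sketch of that layout; the backend name "ceph-rbd" and the RBD options are assumptions on my part, not necessarily your actual config:
>
> [DEFAULT]
> enabled_backends = ceph-rbd
>
> [ceph-rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = ceph-rbd
> rbd_pool = volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_user = cinder
> # backend_host = rbd:volumes  # if set, the service host shows as rbd:volumes@ceph-rbd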
>
> What version of OpenStack, and what version of Ceph are you running?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President – Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
>
> -----Original Message-----
> From: ManuParra [mailto:mparra at iaa.es]
> Sent: Tuesday, May 11, 2021 3:00 PM
> To: Eugen Block
> Cc: openstack-discuss at lists.openstack.org
> Subject: Re: Restart cinder-volume with Ceph RBD
>
> Thanks, I have restarted the service, but after a few minutes the cinder-volume service goes down again when I check it with the command openstack volume service list.
> The host/service that contains the cinder volumes is rbd:volumes@ceph-rbd, i.e. RBD in Ceph, so the problem may not come from Cinder itself but rather from Ceph or from the RBD (Ceph) pool that stores the volumes. I have checked Ceph, and the status of everything is correct: no errors or warnings.
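> One check I can still run is listing the pool directly with the Cinder keyring (the client name "cinder" and the pool "volumes" are the usual defaults, an assumption on my side):
>
> rbd --id cinder --conf /etc/ceph/ceph.conf -p volumes ls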
> The error I have is that Cinder can't connect to rbd:volumes@ceph-rbd. Any further suggestions? Thanks in advance.
> Kind regards.
>
>> On 11 May 2021, at 22:30, Eugen Block <eblock at nde.ag> wrote:
>>
>> Hi,
>>
>> so restart the volume service ;-)
>>
>> systemctl restart openstack-cinder-volume.service
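>>
>> If cinder-volume runs in a container instead (e.g. a kolla-ansible deployment), restart the container; the name below is the usual kolla default, just a guess about your setup:
>>
>> docker restart cinder_volume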
>>
>>
>> Zitat von ManuParra <mparra at iaa.es>:
>>
>>> Dear OpenStack community,
>>>
>>> I encountered a problem a few days ago: when creating new volumes with:
>>>
>>> "openstack volume create --size 20 testmv"
>>>
>>> the volume creation status shows an error. If I go to the error detail, it indicates:
>>>
>>> "Schedule allocate volume: Could not find any available weighted backend".
>>>
>>> Indeed, I then go to the Cinder log, and it indicates:
>>>
>>> "volume service is down - host: rbd:volumes at ceph-rbd”.
>>>
>>> I check with:
>>>
>>> "openstack volume service list” in which state are the services and I see that indeed this happens:
>>>
>>>
>>> | cinder-volume | rbd:volumes@ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 |
>>>
>>> And it has been down since 2021-04-29!
>>>
>>> I have checked Ceph (monitors, managers, OSDs, etc.) and there are no problems with the Ceph backend; everything is apparently working.
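>>> For example, with the standard Ceph CLI:
>>>
>>> ceph -s
>>> ceph health detail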
>>>
>>> This happened after an uncontrolled outage. So my question is: how do I restart only cinder-volume? (I also have cinder-backup and cinder-scheduler, but they are OK.)
>>>
>>> Thank you very much in advance. Regards.