Restart cinder-volume with Ceph RBD
Laurent Dumont
laurentfdumont at gmail.com
Tue May 11 23:18:24 UTC 2021
The default error messages for cinder-volume can be pretty vague. I would
suggest enabling debug for Cinder, restarting the service, and watching the
error logs when the service goes up --> down. That should be in the
cinder-volume logs.
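A minimal sketch of what that could look like (the config path, service name
and log location below are the usual ones but vary per distribution and
deployment, so treat them as assumptions):

  # /etc/cinder/cinder.conf on the host running cinder-volume
  [DEFAULT]
  debug = True

  # restart the service and follow its log (file name/location may differ)
  systemctl restart openstack-cinder-volume.service
  tail -f /var/log/cinder/volume.log
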
On Tue, May 11, 2021 at 6:05 PM ManuParra <mparra at iaa.es> wrote:
> Thanks, I have restarted the service, but I see that after a few minutes the
> cinder-volume service goes down again when I check it with the command
> openstack volume service list.
> The host/service that contains the cinder volumes is rbd:volumes@ceph-rbd,
> which is RBD in Ceph, so the problem does not come from Cinder but rather
> from Ceph or from the RBD pools that store the volumes. I have checked
> Ceph and the status of everything is correct, no errors or warnings.
> The error I have is that cinder can’t connect to rbd:volumes@ceph-rbd.
> Any further suggestions? Thanks in advance.
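> Is something like the following the right way to test the connection from
> the cinder-volume host? (I'm assuming the pool and client names from
> rbd_pool and rbd_user in cinder.conf and the usual keyring path, so adjust
> as needed.)
>
> # check cluster reachability and list the volumes pool with cinder's keyring
> ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
> rbd ls -p volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring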
> Kind regards.
>
> > On 11 May 2021, at 22:30, Eugen Block <eblock at nde.ag> wrote:
> >
> > Hi,
> >
> > so restart the volume service ;-)
> >
> > systemctl restart openstack-cinder-volume.service
> >
> >
> > Quoting ManuParra <mparra at iaa.es>:
> >
> >> Dear OpenStack community,
> >>
> >> I encountered a problem a few days ago: when creating new volumes with:
> >>
> >> "openstack volume create --size 20 testmv"
> >>
> >> the volume creation status shows an error. If I go to the error log
> detail it indicates:
> >>
> >> "Schedule allocate volume: Could not find any available weighted
> backend".
> >>
> >> I then go to the cinder log and it indicates:
> >>
> >> "volume service is down - host: rbd:volumes at ceph-rbd”.
> >>
> >> I check with:
> >>
> >> "openstack volume service list" to see which state the services are in,
> >> and indeed this is what I see:
> >>
> >>
> >> | cinder-volume | rbd:volumes@ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 |
> >>
> >> And it has been down since 2021-04-29!
> >>
> >> I have checked Ceph (monitors, managers, OSDs, etc.) and there are no
> >> problems with the Ceph backend; everything is apparently working.
> >>
> >> This happened after an uncontrolled outage. So my question is: how do I
> >> restart only cinder-volume? (I also have cinder-backup and cinder-scheduler,
> >> but they are OK.)
> >>
> >> Thank you very much in advance. Regards.
> >
> >
> >
> >
>
>
>