[cinder] cinder-backup volume stuck in creating

Nguyễn Hữu Khôi nguyenhuukhoinw at gmail.com
Sat Jul 15 01:02:48 UTC 2023


Hello guys,
I had the same problem, and there were no further error logs.

I resolved it by turning off cinder-api, cinder-scheduler, and
cinder-backup, then removing their queues (cinder, cinder-scheduler,
and cinder-backup), roughly as sketched below.
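
(A hypothetical sketch; service management and exact queue names vary
by deployment, and "rabbitmqctl delete_queue" requires a reasonably
recent RabbitMQ. Listing the queues first shows the exact names to
remove:)

  systemctl stop cinder-api cinder-scheduler cinder-backup
  rabbitmqctl list_queues name | grep cinder
  rabbitmqctl delete_queue cinder
  rabbitmqctl delete_queue cinder-scheduler
  rabbitmqctl delete_queue cinder-backup
  systemctl start cinder-api cinder-scheduler cinder-backup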

I enabled debug logging on rabbitmq and cinder; however, I cannot see
any related logs beyond the excerpt below.
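
(For reference, debug logging on the cinder side is just the standard
oslo.log option in cinder.conf, picked up by all cinder services:)

  [DEFAULT]
  debug = True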


2023-07-13 20:57:22.903 666 INFO cinder.backup.manager
[req-019d2459-6f43-4839-8ad6-71c805dbd708 6371ebfe0fce499983b1f07e7c34bf6d
183e47a6dd14489991db5c3cf4132a2a - - -] Create backup started, backup:
2cebd275-6fc3-47ae-8c8d-3337f9227057 volume:
d8446fb4-032a-44a1-b47d-f922c4a0020e.
2023-07-13 20:57:22.919 666 INFO cinder.backup.manager
[req-019d2459-6f43-4839-8ad6-71c805dbd708 6371ebfe0fce499983b1f07e7c34bf6d
183e47a6dd14489991db5c3cf4132a2a - - -] Call Volume Manager to
get_backup_device for
Backup(availability_zone=None,container=None,created_at=2023-07-13T13:57:22Z,data_timestamp=2023-07-13T13:57:22Z,deleted=False,deleted_at=None,display_description='',display_name='ijoo',encryption_key_id=None,fail_reason=None,host='controller01',id=2cebd275-6fc3-47ae-8c8d-3337f9227057,metadata={},num_dependent_backups=0,object_count=0,parent=None,parent_id=None,project_id='183e47a6dd14489991db5c3cf4132a2a',restore_volume_id=None,service='cinder.backup.drivers.nfs.NFSBackupDriver',service_metadata=None,size=1,snapshot_id=None,status='creating',temp_snapshot_id=None,temp_volume_id=None,updated_at=None,user_id='6371ebfe0fce499983b1f07e7c34bf6d',volume_id=d8446fb4-032a-44a1-b47d-f922c4a0020e)


It is hard to reproduce this problem.


Nguyen Huu Khoi


On Wed, Feb 1, 2023 at 7:08 PM Gorka Eguileor <geguileo at redhat.com> wrote:

> On 27/01, Satish Patel wrote:
> > Thank you Jon/Sofia,
> >
> > The biggest issue is that even if I turn on debugging, it's not
> > producing enough logs to see what is going on. See the following output.
> >
> > https://paste.opendev.org/show/bh9OF9l2OrozrNMglv2Y/
>
> Hi,
>
> I don't see the cinder-volume logs, which are necessary to debug this
> issue, because we can see in the backup logs that it is doing an RPC
> call to the cinder-volume service "Call Volume Manager to
> get_backup_device".
>
> What is the value of the "host" field of the source volume?
>
> Because if it's anything other than "kolla-infra-1.example.com@rbd-1",
> then the problem is that the cinder-volume service for that backend is
> currently down.
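>
> (For example, with python-openstackclient; <volume-id> is a
> placeholder for the source volume's ID:)
>
>   openstack volume show <volume-id> -c os-vol-host-attr:host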
>
> Cheers,
> Gorka.
>
> >
> > On Fri, Jan 27, 2023 at 10:50 AM Jon Bernard <jobernar at redhat.com> wrote:
> >
> > > Without the logs themselves it's really hard to say.  One way to
> > > proceed would be to file a bug [1] and the team can work with you
> > > there.  You could also enable debugging (debug = True), reproduce
> > > the failure, and upload the relevant logs there as well.
> > >
> > > [1]: https://bugs.launchpad.net/cinder/+filebug
> > >
> > > --
> > > Jon
> > >
> > > On Thu, Jan 26, 2023 at 2:20 PM Satish Patel <satish.txt at gmail.com> wrote:
> > >
> > >> Folks,
> > >>
> > >> I have configured nova and cinder with ceph storage. VMs are running
> > >> on ceph storage, but when I try to create a backup of a cinder
> > >> volume, it gets stuck in "creating" and does nothing. The logs also
> > >> give no indication of anything wrong.
> > >>
> > >> My cinder.conf
> > >>
> > >> [DEFAULT]
> > >>
> > >> enabled_backends = rbd-1
> > >> backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
> > >> backup_ceph_conf = /etc/ceph/ceph.conf
> > >> backup_ceph_user = cinder-backup
> > >> backup_ceph_chunk_size = 134217728
> > >> backup_ceph_pool = backups
> > >> backup_ceph_stripe_unit = 0
> > >> backup_ceph_stripe_count = 0
> > >> restore_discard_excess_bytes = true
> > >> osapi_volume_listen = 10.73.0.181
> > >> osapi_volume_listen_port = 8776
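> > >>
> > >> (The matching backend section is not shown above; a hypothetical
> > >> [rbd-1] section for a Ceph RBD backend, using the standard cinder
> > >> RBD driver options with placeholder values, would look like:)
> > >>
> > >> [rbd-1]
> > >> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> > >> volume_backend_name = rbd-1
> > >> rbd_pool = volumes
> > >> rbd_ceph_conf = /etc/ceph/ceph.conf
> > >> rbd_user = cinder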
> > >>
> > >>
> > >> Output of "openstack volume service list" shows the cinder-backup
> > >> service is up, but when I create a backup it gets stuck in this
> > >> stage with no activity. I am also not seeing anything transferred
> > >> to the ceph backups pool. Any clue, or a method to debug?
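> > >>
> > >> (One hypothetical sanity check, assuming the cinder-backup ceph
> > >> keyring is readable on the node: list the backups pool directly to
> > >> see whether any backup objects appear:)
> > >>
> > >>   rbd --id cinder-backup -p backups ls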
> > >>
> > >> # openstack volume backup list --all
> > >>
> > >> +--------------------------------------+------+-------------+----------+------+
> > >> | ID                                   | Name | Description | Status   | Size |
> > >> +--------------------------------------+------+-------------+----------+------+
> > >> | bc844d55-8c5a-4bd3-b0e9-7c4c780c95ad | foo1 |             | creating |   20 |
> > >> +--------------------------------------+------+-------------+----------+------+
> > >>
> > >
>
>
>