[ops][cinder][kolla-ansible] cinder-backup fails if source disk not in nova az
Adam Zheng
adam.zheng at colorado.edu
Mon Feb 8 10:26:52 UTC 2021
Hello Alan,
Thank you for the clarification and the pointers.
I also found previously that only the cinder client has the az option, and it did appear to work for backing up a volume from another az.
However, is there a way to get this working from Horizon? While I can certainly write up pages for my users explaining that they will need to use the CLI to do backups, I feel it is not very friendly on my part to require that.
For now, if there is no such way, I may leave the cinder az on nova so that backups/restores work properly in Horizon. I can control where the data lands in Ceph; I was mainly just hoping to set this in OpenStack for aesthetic/clarity reasons (i.e., which datacenter users are saving their volumes in) for users of the Horizon volumes interface.
Thanks,
--
Adam
From: Alan Bishop <abishop at redhat.com>
Date: Friday, February 5, 2021 at 1:49 PM
To: Adam Zheng <adam.zheng at colorado.edu>
Cc: "openstack-discuss at lists.openstack.org" <openstack-discuss at lists.openstack.org>
Subject: Re: [ops][cinder][kolla-ansible] cinder-backup fails if source disk not in nova az
On Fri, Feb 5, 2021 at 10:00 AM Adam Zheng <adam.zheng at colorado.edu<mailto:adam.zheng at colorado.edu>> wrote:
Hello,
I’ve been trying to get availability zones defined for volumes.
Everything works fine if I leave the zone at “nova”: all volume types work, and backups/snapshots also work.
ie:
+------------------+----------------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | cs-os-ctl-001 | nova | enabled | up | 2021-02-05T17:22:51.000000 |
| cinder-scheduler | cs-os-ctl-003 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-scheduler | cs-os-ctl-002 | nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-volume | cs-os-ctl-001 at rbd-ceph-gp2 | nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-volume | cs-os-ctl-001 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-volume | cs-os-ctl-002 at rbd-ceph-gp2 | nova | enabled | up | 2021-02-05T17:22:50.000000 |
| cinder-volume | cs-os-ctl-003 at rbd-ceph-gp2 | nova | enabled | up | 2021-02-05T17:22:55.000000 |
| cinder-volume | cs-os-ctl-002 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:57.000000 |
| cinder-volume | cs-os-ctl-003 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-backup | cs-os-ctl-002 | nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-backup | cs-os-ctl-001 | nova | enabled | up | 2021-02-05T17:22:53.000000 |
| cinder-backup | cs-os-ctl-003 | nova | enabled | up | 2021-02-05T17:22:58.000000 |
+------------------+----------------------------+------+---------+-------+----------------------------+
However, if I apply the following changes:
cinder-api.conf
[DEFAULT]
default_availability_zone = not-nova
default_volume_type = ceph-gp2
allow_availability_zone_fallback = True
cinder-volume.conf
[rbd-ceph-gp2]
<…>
backend_availability_zone = not-nova
<…>
I’ll get the following:
+------------------+----------------------------+----------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------------------+----------+---------+-------+----------------------------+
| cinder-scheduler | cs-os-ctl-001 | nova | enabled | up | 2021-02-05T17:22:51.000000 |
| cinder-scheduler | cs-os-ctl-003 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-scheduler | cs-os-ctl-002 | nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-volume | cs-os-ctl-001 at rbd-ceph-gp2 | not-nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-volume | cs-os-ctl-001 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-volume | cs-os-ctl-002 at rbd-ceph-gp2 | not-nova | enabled | up | 2021-02-05T17:22:50.000000 |
| cinder-volume | cs-os-ctl-003 at rbd-ceph-gp2 | not-nova | enabled | up | 2021-02-05T17:22:55.000000 |
| cinder-volume | cs-os-ctl-002 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:57.000000 |
| cinder-volume | cs-os-ctl-003 at rbd-ceph-st1 | nova | enabled | up | 2021-02-05T17:22:54.000000 |
| cinder-backup | cs-os-ctl-002 | nova | enabled | up | 2021-02-05T17:22:56.000000 |
| cinder-backup | cs-os-ctl-001 | nova | enabled | up | 2021-02-05T17:22:53.000000 |
| cinder-backup | cs-os-ctl-003 | nova | enabled | up | 2021-02-05T17:22:58.000000 |
+------------------+----------------------------+----------+---------+-------+----------------------------+
At this point, creating new volumes still works, and the volumes go into the expected Ceph pools.
However, backups no longer work for the cinder-volume backends that are not in “nova”.
In the above example, backups still work fine for volumes that were created with type “ceph-gp2” in az “nova”.
They do not work for volumes that were created with type “ceph-st1” in az “not-nova”; the backup fails immediately and goes into an error state with reason “Service not found for creating backup.”
Hi Adam,
Cinder's backup service has the ability to create backups of volumes in another AZ. The 'cinder' CLI supports this feature as of microversion 3.51. (Bear in mind the 'openstack' client doesn't support microversions for the cinder (volume) service, so you'll need to use the 'cinder' command.)
Rather than repeat what I've written previously, I refer you to [1] for additional details.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1649845#c4
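For illustration, a cross-AZ backup from the CLI would look roughly like the following. This is a sketch, not a verified recipe: the volume name is hypothetical, and the key point is requesting API microversion 3.51 or later so the request is not rejected when the backup host's AZ differs from the volume's AZ.

```shell
# Request microversion 3.51+ so cinder-backup can back up a volume
# that lives in a different AZ than the backup service itself.
# "my-st1-volume" is a placeholder for a volume in az "not-nova".
cinder --os-volume-api-version 3.51 backup-create \
    --name my-st1-backup \
    my-st1-volume
```
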
One other thing to note is the corresponding "cinder backup-restore" command currently does not support restoring to a volume in another AZ, but there is a workaround. You can pre-create a new volume in the destination AZ, and use the ability to restore a backup to a specific volume (which just happens to be in your desired AZ).
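Sketched as CLI commands, the restore workaround above might look like this (names, the size, and the AZ are hypothetical placeholders; the backup ID comes from `cinder backup-list`):

```shell
# 1. Pre-create an empty volume in the destination AZ,
#    sized at least as large as the backed-up volume.
cinder create --availability-zone not-nova --name restored-vol 100

# 2. Restore the backup into that specific volume, which
#    happens to live in the desired AZ.
cinder backup-restore --volume restored-vol <backup-id>
```
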
There's also a patch [2] under review to enhance the cinder shell so that both backup and restore shell commands work the same way.
[2] https://review.opendev.org/c/openstack/python-cinderclient/+/762020
Alan
I suspect I need to get another set of “cinder-backup” services running in zone “not-nova”, but cannot seem to figure out how.
I’ve scoured the docs on cinder.conf, and no matter which default-zone option I set for cinder-backup (I’ve tried backend_availability_zone, default_availability_zone, and storage_availability_zone), I cannot get backups working if the disk being backed up is not in az “nova”. The cinder-backup service in “volume service list” always shows “nova” regardless of what I put there.
Any advice would be appreciated.
OpenStack Victoria deployed via kolla-ansible
Thanks!