[cinder-backup][ceph] cinder-backup support of incremental backup with ceph backend

Eugen Block eblock at nde.ag
Tue May 23 13:58:19 UTC 2023


I see the same for Wallaby, object_count is always 0.
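If it helps, the deltas should at least be visible on the ceph side: as far as I understand the ceph backup driver, the incrementals end up as snapshots on a backup image in the backup pool. Something like this should show them (assuming the default pool name "backups"; the image name below is just a placeholder, the actual names depend on the driver):

controller02:~ # rbd ls backups
controller02:~ # rbd snap ls backups/<one-of-the-backup-images>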

Zitat von Eugen Block <eblock at nde.ag>:

> Hi,
>
> I don't see an object_count > 0 for any of the incremental backups or
> for the full backup. I tried it both with a "full" volume (created from
> an image) and with an empty volume that I put a filesystem on and
> copied tiny files onto. This is the result:
>
> controller02:~ # openstack volume backup list
> +--------------------------------------+--------------+-------------+-----------+------+
> | ID                                   | Name         | Description | Status    | Size |
> +--------------------------------------+--------------+-------------+-----------+------+
> | a8a448e7-8bfd-46e3-81bf-3b1d607893e7 | inc-backup2  | None        | available |    4 |
> | 3d11faa0-d67c-432d-afb1-ff44f6a3b4a7 | inc-backup1  | None        | available |    4 |
> | 125c23cd-a5e8-4a7a-b59a-015d0bc5902c | full-backup1 | None        | available |    4 |
> +--------------------------------------+--------------+-------------+-----------+------+
>
> controller02:~ # for i in `openstack volume backup list -c ID -f value`; do openstack volume backup show $i -c id -c is_incremental -c object_count -f value; done
> a8a448e7-8bfd-46e3-81bf-3b1d607893e7
> True
>
> 3d11faa0-d67c-432d-afb1-ff44f6a3b4a7
> True
>
> 125c23cd-a5e8-4a7a-b59a-015d0bc5902c
> False
>
>
> This is still Victoria, though; I think I have a Wallaby test
> installation, so I'll try that as well. In which case should
> object_count be > 0? All my installations use ceph as the storage
> backend.
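>
> (In case it matters, a quick way to confirm the backup driver in use,
> assuming the usual /etc/cinder/cinder.conf location, would be something
> like this -- on a ceph-backed installation I'd expect:
>
> controller02:~ # grep backup_driver /etc/cinder/cinder.conf
> backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
>
> and my guess would be that object_count only gets filled by the chunked
> drivers like swift, not by the ceph driver.)
>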
>
> Thanks,
> Eugen
>
> Zitat von Masayuki Igawa <masayuki.igawa at gmail.com>:
>
>> Hi Satish,
>>
>>> Whenever I take an incremental backup it shows a similar size to the
>>> original volume. Technically it should be smaller. The question is:
>>> does ceph support incremental backup with cinder?
>>
>> IIUC, it would be the expected behavior. According to the API doc[1],
>> "size" is "The size of the volume, in gibibytes (GiB)."
>> So it's not the actual size of the backup itself.
>>
>> What about the "object_count" in the "openstack volume backup show" output?
>> The incremental backup's count should be zero, or at least smaller than the full backup's, shouldn't it?
>>
>> [1]  
>> https://docs.openstack.org/api-ref/block-storage/v3/?expanded=show-backup-detail-detail,list-backups-with-detail-detail#id428
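>>
>> For example, something like this should show it (the backup ID is just
>> a placeholder):
>>
>> $ openstack volume backup show <backup-id> -c size -c object_count -c is_incremental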
>>
>> -- Masayuki Igawa
>>
>> On Wed, May 17, 2023, at 03:51, Satish Patel wrote:
>>> Folks,
>>>
>>> I have ceph storage for my openstack and have configured the
>>> cinder-volume and cinder-backup services for my disaster recovery
>>> solution. I am trying to use the cinder-backup incremental option to
>>> save storage space, but somehow it doesn't work the way it should.
>>>
>>> Whenever I take an incremental backup it shows a similar size to the
>>> original volume. Technically it should be smaller. The question is:
>>> does ceph support incremental backup with cinder?
>>>
>>> I am running the Yoga release.
>>>
>>> $ openstack volume list
>>> +--------------------------------------+------------+------------+------+-------------------------------------+
>>> | ID                                   | Name       | Status     | Size | Attached to                         |
>>> +--------------------------------------+------------+------------+------+-------------------------------------+
>>> | 285a49a6-0e03-49e5-abf1-1c1efbfeb5f2 | spatel-vol | backing-up |   10 | Attached to spatel-foo on /dev/sdc  |
>>> +--------------------------------------+------------+------------+------+-------------------------------------+
>>>
>>> ### Create full backup
>>> $ openstack volume backup create --name spatel-vol-backup spatel-vol --force
>>> +-------+--------------------------------------+
>>> | Field | Value                                |
>>> +-------+--------------------------------------+
>>> | id    | 4351d9d3-85fa-4cd5-b21d-619b3385aefc |
>>> | name  | spatel-vol-backup                    |
>>> +-------+--------------------------------------+
>>>
>>> ### Create incremental
>>> $ openstack volume backup create --name spatel-vol-backup-1 --incremental --force spatel-vol
>>> +-------+--------------------------------------+
>>> | Field | Value                                |
>>> +-------+--------------------------------------+
>>> | id    | 294b58af-771b-4a9f-bb7b-c37a4f84d678 |
>>> | name  | spatel-vol-backup-1                  |
>>> +-------+--------------------------------------+
>>>
>>> $ openstack volume backup list
>>> +--------------------------------------+---------------------+-------------+-----------+------+
>>> | ID                                   | Name                | Description | Status    | Size |
>>> +--------------------------------------+---------------------+-------------+-----------+------+
>>> | 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
>>> | 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
>>> +--------------------------------------+---------------------+-------------+-----------+------+
>>> My incremental backup still shows a size of 10G, which should be
>>> lower compared to the first backup.
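>>>
>>> Is there a way to confirm on the ceph side how much space the
>>> incremental really consumes? I guess something like this, assuming the
>>> backups end up in a pool named "backups", might show it:
>>>
>>> $ rbd du -p backups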
