[cinder-backup][ceph] cinder-backup support of incremental backup with ceph backend
Folks,

I have Ceph storage for my OpenStack and have configured the cinder-volume and cinder-backup services for my disaster recovery solution. I am trying to use the cinder-backup incremental option to save storage space, but somehow it doesn't work the way I expect. Whenever I take an incremental backup it shows a size similar to the original volume; technically it should be smaller. Does Ceph support incremental backups with Cinder?

I am running the Yoga release.

$ openstack volume list
+--------------------------------------+------------+------------+------+-------------------------------------+
| ID                                   | Name       | Status     | Size | Attached to                         |
+--------------------------------------+------------+------------+------+-------------------------------------+
| 285a49a6-0e03-49e5-abf1-1c1efbfeb5f2 | spatel-vol | backing-up |   10 | Attached to spatel-foo on /dev/sdc  |
+--------------------------------------+------------+------------+------+-------------------------------------+

### Create full backup
$ openstack volume backup create --name spatel-vol-backup spatel-vol --force
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 4351d9d3-85fa-4cd5-b21d-619b3385aefc |
| name  | spatel-vol-backup                    |
+-------+--------------------------------------+

### Create incremental
$ openstack volume backup create --name spatel-vol-backup-1 --incremental --force spatel-vol
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 294b58af-771b-4a9f-bb7b-c37a4f84d678 |
| name  | spatel-vol-backup-1                  |
+-------+--------------------------------------+

$ openstack volume backup list
+--------------------------------------+---------------------+-------------+-----------+------+
| ID                                   | Name                | Description | Status    | Size |
+--------------------------------------+---------------------+-------------+-----------+------+
| 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
| 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
+--------------------------------------+---------------------+-------------+-----------+------+

My incremental backup still shows a size of 10G, which should be lower than the first backup.
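(A quick sanity check, a sketch using the backup ID from the listing above: confirm that Cinder actually recorded the second backup as incremental before digging into sizes.)

# Both fields appear in the backup record; "size" is the volume size, not the delta
$ openstack volume backup show 294b58af-771b-4a9f-bb7b-c37a4f84d678 -c is_incremental -c object_count -c size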
Hi Satish,

Did you check the size of the actual backup file in Ceph storage? It should be created in the *backups* pool [1]. Cinder reports an incremental backup with the same size as a normal backup, but the file size on disk will differ from the size shown in the Cinder DB records, and the file size of an incremental backup should not be the same as the file size of a full backup.

[1] https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4...

Thanks
Rajat Dhasmana
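(To illustrate Rajat's suggestion, a sketch of checking the actual backup size on the Ceph side; "backups" is the default pool name, adjust it to whatever backup_ceph_pool is set to for cinder-backup in your deployment.)

# On a Ceph node: show provisioned vs. actually used space for all backup
# images and their snapshots in the backup pool
$ rbd -p backups du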
Hi,

I am not sure whether somebody has tested this solution before, but coming from a backup background, I strongly believe that for software to distinguish between incremental and full backups it needs an agent on the client side doing journalling of the backup objects. I do not see that feature in Ceph.

Regards
Adivya Singh
Hi,

Just to visualize Rajat's response: Ceph creates copy-on-write snapshots, so the incremental backup doesn't really use much space. On a Victoria cloud I created one full backup of an almost empty volume (made an ext4 filesystem, mounted it, created one tiny file), then created a second tiny file and made an incremental backup. This is what Ceph sees:

ceph01:~ # rbd du <backup_pool>/volume-6662f50a-a74c-47a4-8abd-a49069f3614c
NAME                                                                                                              PROVISIONED  USED
volume-6662f50a-a74c-47a4-8abd-a49069f3614c@backup.650a4f8f-7b61-447e-9eb9-767c74b15342.snap.1684329174.8419683  5 GiB        192 MiB
volume-6662f50a-a74c-47a4-8abd-a49069f3614c@backup.1d358548-5d1d-4e03-9728-bb863c717910.snap.1684329450.9599462  5 GiB        16 MiB
volume-6662f50a-a74c-47a4-8abd-a49069f3614c                                                                      5 GiB        0 B
<TOTAL>

backup.650a4f8f-7b61-447e-9eb9-767c74b15342 (using 192 MiB) is the full backup, backup.1d358548-5d1d-4e03-9728-bb863c717910 is the incremental backup (using 16 MiB).
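(For the curious, the delta between two backup snapshots can also be estimated directly at the RBD level. This is only an illustrative sketch using the image and snapshot names from the rbd du output above; the diff stream contains some metadata, so the byte count is approximate, not exact.)

# Export the diff between the full-backup snapshot and the incremental-backup
# snapshot to stdout and count the bytes
rbd export-diff \
    --from-snap backup.650a4f8f-7b61-447e-9eb9-767c74b15342.snap.1684329174.8419683 \
    <backup_pool>/volume-6662f50a-a74c-47a4-8abd-a49069f3614c@backup.1d358548-5d1d-4e03-9728-bb863c717910.snap.1684329450.9599462 \
    - | wc -c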
Thank you Eugen,

I am noticing the same thing you noticed. That means Cinder is either lying to us or doesn't know how to account for the size of copy-on-write snapshots.

One more question: I created a 1G file using dd and took an incremental backup, and Ceph only shows a 28 MiB backup. Does that sound right?

root@ceph1:~# rbd -p backups du
NAME PROVISIONED USED
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc.snap.1684260707.1682937  10 GiB  68 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.294b58af-771b-4a9f-bb7b-c37a4f84d678.snap.1684260787.943873  10 GiB  36 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.c9652662-36bd-4e74-b822-f7ae10eb7246.snap.1684330702.6955926  10 GiB  28 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc  10 GiB  0 B
I don't think Cinder is lying. Firstly, the "backup create" dialog states:

Backups will be the same size as the volume they originate from.

Secondly, I believe it highly depends on the actual storage backend and its implementation how a backup is created; on a different storage backend it might look completely different. We've been on Ceph from the beginning, so I can't comment on alternatives.

As for your question: what exactly did your dd command look like? Did you fill the file with zeroes (dd if=/dev/zero)? In that case the low used-bytes number would make sense; if you filled it with random data, the du size should be higher.
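(To make the two variants concrete, a small sketch; the mount path and size are placeholders, not anything from this deployment.)

# Writes 1 GiB of zeroes: per the observation above, this barely shows up in
# the used space of the subsequent backup snapshot
dd if=/dev/zero of=/mnt/test/zeroes.img bs=1M count=1024
# Writes 1 GiB of random data: this should show up as roughly 1 GiB in rbd du
dd if=/dev/urandom of=/mnt/test/random.img bs=1M count=1024
# Flush the data to the volume before taking the backup
sync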
You are goddamn right :) I was using /dev/zero before. After using /dev/random I can see the correct representation: 1.1 GiB for the last incremental backup.

root@ceph1:~# rbd -p backups du
NAME PROVISIONED USED
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc.snap.1684260707.1682937  10 GiB  68 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.294b58af-771b-4a9f-bb7b-c37a4f84d678.snap.1684260787.943873  10 GiB  36 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.c9652662-36bd-4e74-b822-f7ae10eb7246.snap.1684330702.6955926  10 GiB  28 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.7e9a48db-b513-40d8-8018-f73ef52cb025.snap.1684331929.0653753  10 GiB  1.1 GiB

The confusing part is the backup list: they all say 10G. I wish it would report the actual numbers from Ceph instead of 10G. I can understand it, but it's hard to explain to customers :(

# openstack volume backup list
+--------------------------------------+---------------------+-------------+-----------+------+
| ID                                   | Name                | Description | Status    | Size |
+--------------------------------------+---------------------+-------------+-----------+------+
| ec141929-cc74-459a-b8e7-03f016df9cec | spatel-vol-backup-4 | None        | available |   10 |
| 7e9a48db-b513-40d8-8018-f73ef52cb025 | spatel-vol-backup-3 | None        | available |   10 |
| c9652662-36bd-4e74-b822-f7ae10eb7246 | spatel-vol-backup-2 | None        | available |   10 |
| 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
| 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
+--------------------------------------+---------------------+-------------+-----------+------+
Hi Satish,
> Whenever I take an incremental backup it shows a size similar to the original volume. Technically it should be smaller. Does Ceph support incremental backups with Cinder?
IIUC, this is expected behavior. According to the API doc [1], "size" is "The size of the volume, in gibibytes (GiB)", so it's not the actual size of the snapshot itself.

What about the "object_count" field in the "openstack volume backup show" output? The incremental backup's count should be zero, or at least lower than the full backup's.

[1] https://docs.openstack.org/api-ref/block-storage/v3/?expanded=show-backup-de...

--
Masayuki Igawa
Thank you Masayuki,

Is there any Ceph API I can use to get the real usage of incremental backups directly from Ceph? Do I need to configure the RGW service to obtain that level of information from Ceph via an API?
Hi,
> Is there any Ceph API I can use to get the real usage of incremental backups directly from Ceph? Do I need to configure the RGW service to obtain that level of information from Ceph via an API?
AFAIK, we don't have that in the OpenStack API, because API users shouldn't need to know about the backend. If "object_count" is not sufficient for your use case, I think you would need to query the backend directly to get that level of information.

Best Regards,
--
Masayuki Igawa
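(For completeness, one out-of-band option, a sketch rather than an OpenStack API: it assumes direct access to the Ceph cluster, the default "backups" pool, and an rbd client recent enough to support JSON output, with field names that may vary slightly between Ceph releases.)

# Per-image and per-snapshot usage of the backup pool in machine-readable form,
# e.g. as input for a reporting script
rbd -p backups du --format json | jq .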
Hi,

I don't see an object_count > 0 for either the incremental backups or the full backup. I tried both with a "full" volume (created from an image) and with an empty volume where I put a filesystem on it and copied tiny files onto it. This is the result:

controller02:~ # openstack volume backup list
+--------------------------------------+--------------+-------------+-----------+------+
| ID                                   | Name         | Description | Status    | Size |
+--------------------------------------+--------------+-------------+-----------+------+
| a8a448e7-8bfd-46e3-81bf-3b1d607893e7 | inc-backup2  | None        | available |    4 |
| 3d11faa0-d67c-432d-afb1-ff44f6a3b4a7 | inc-backup1  | None        | available |    4 |
| 125c23cd-a5e8-4a7a-b59a-015d0bc5902c | full-backup1 | None        | available |    4 |
+--------------------------------------+--------------+-------------+-----------+------+

controller02:~ # for i in `openstack volume backup list -c ID -f value`; do openstack volume backup show $i -c id -c is_incremental -c object_count -f value; done
a8a448e7-8bfd-46e3-81bf-3b1d607893e7
True

3d11faa0-d67c-432d-afb1-ff44f6a3b4a7
True

125c23cd-a5e8-4a7a-b59a-015d0bc5902c
False

This is still Victoria, though; I think I have a Wallaby test installation, I'll try that as well. In which case should object_count be > 0? All my installations use Ceph as the storage backend.

Thanks,
Eugen
I see the same for Wallaby; object_count is always 0.
I looked through the code with a colleague; apparently the code that increments the object counter is not executed with Ceph as the backend. Is that assumption correct? It would be interesting to know for which backends the counter actually increases per backup.
https://web.archive.org/web/20160404120859/http://gorka.eguileor.com/inside-...
I looked through the code with a colleague, apparently the code to increase object counters is not executed with ceph as backend. Is that assumption correct? Would be interesting to know for which backends that would actually increase per backup.
Zitat von Eugen Block <eblock@nde.ag>:
I see the same for Wallaby, object_count is always 0.
Zitat von Eugen Block <eblock@nde.ag>:
Hi,
I don't see an object_count > 0 for all incremental backups or the full backup. I tried both with a "full" volume (from image) as well as en empty volume, put a filesystem on it and copied tiny files onto it. This is the result:
controller02:~ # openstack volume backup list
+--------------------------------------+--------------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+--------------+-------------+-----------+------+
| a8a448e7-8bfd-46e3-81bf-3b1d607893e7 | inc-backup2 | None | available | 4 | | 3d11faa0-d67c-432d-afb1-ff44f6a3b4a7 | inc-backup1 | None | available | 4 | | 125c23cd-a5e8-4a7a-b59a-015d0bc5902c | full-backup1 | None | available | 4 |
+--------------------------------------+--------------+-------------+-----------+------+
controller02:~ # for i in `openstack volume backup list -c ID -f value`; do openstack volume backup show $i -c id -c is_incremental -c object_count -f value; done a8a448e7-8bfd-46e3-81bf-3b1d607893e7 True
3d11faa0-d67c-432d-afb1-ff44f6a3b4a7 True
125c23cd-a5e8-4a7a-b59a-015d0bc5902c False
This is still Victoria, though, I think I have a Wallaby test installation, I'll try that as well. In which case should object_count be > 0? All my installations have ceph as storage backend.
Thanks, Eugen
Zitat von Masayuki Igawa <masayuki.igawa@gmail.com>:
Hi Satish,
Whenever I take incremental backup it shows a similar size of original volume. Technically It should be smaller. Question is does ceph
support
incremental backup with cinder?
IIUC, it would be expected behavior. According to the API Doc[1], "size" is "The size of the volume, in gibibytes (GiB)." So, it's not the actual size of the snapshot itself.
What about the "object_count" of "openstack volume backup show" output? The incremental's one should be zero or less than the full backup at least?
[1]
https://docs.openstack.org/api-ref/block-storage/v3/?expanded=show-backup-de...
-- Masayuki Igawa
On Wed, May 17, 2023, at 03:51, Satish Patel wrote:
Folks,
I have ceph storage for my openstack and configure cinder-volume and cinder-backup service for my disaster solution. I am trying to use the cinder-backup incremental option to save storage space but somehow It doesn't work the way it should work.
Whenever I take incremental backup it shows a similar size of original volume. Technically It should be smaller. Question is does ceph
support
incremental backup with cinder?
I am running a Yoga release.
$ openstack volume list
+--------------------------------------+------------+------------+------+-------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+------------+------------+------+-------------------------------------+
| 285a49a6-0e03-49e5-abf1-1c1efbfeb5f2 | spatel-vol | backing-up | 10 | Attached to spatel-foo on /dev/sdc |
+--------------------------------------+------------+------------+------+-------------------------------------+
### Create full backup
$ openstack volume backup create --name spatel-vol-backup spatel-vol --force
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 4351d9d3-85fa-4cd5-b21d-619b3385aefc |
| name  | spatel-vol-backup                    |
+-------+--------------------------------------+
### Create incremental
$ openstack volume backup create --name spatel-vol-backup-1 --incremental --force spatel-vol
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 294b58af-771b-4a9f-bb7b-c37a4f84d678 |
| name  | spatel-vol-backup-1                  |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+---------------------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+---------------------+-------------+-----------+------+
| 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
| 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
+--------------------------------------+---------------------+-------------+-----------+------+
My incremental backup still shows 10G size which should be lower compared to the first backup.
--
Sofía Enriquez
she/her
Software Engineer, Red Hat PnT <https://www.redhat.com>
IRC: @enriquetaso
Thank you Sofia, that is quite helpful.

Zitat von Sofia Enriquez <senrique@redhat.com>:
https://web.archive.org/web/20160404120859/http://gorka.eguileor.com/inside-...
participants (6)
- Adivya Singh
- Eugen Block
- Masayuki Igawa
- Rajat Dhasmana
- Satish Patel
- Sofia Enriquez