[cinder-backup][ceph] cinder-backup support of incremental backup with ceph backend

Satish Patel satish.txt at gmail.com
Wed May 17 14:04:19 UTC 2023


You are goddamn right :)

I was using /dev/zero before. After using /dev/random I can see the correct
size reflected: the last incremental backup uses 1.1 GiB.
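For anyone following along, the test data was generated roughly like this (a
sketch from memory; the target path and block size are illustrative, not the
literal command I ran):

$ dd if=/dev/random of=/mnt/spatel-vol/testfile bs=1M count=1024   # ~1 GiB of random data
$ sync

Random data cannot be stored as sparse/zero extents the way /dev/zero output
can, so the changed blocks really land in the incremental backup and show up
in the rbd du USED column below.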

root@ceph1:~# rbd -p backups du
NAME                                                                                                                                                          PROVISIONED  USED
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc.snap.1684260707.1682937  10 GiB   68 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.294b58af-771b-4a9f-bb7b-c37a4f84d678.snap.1684260787.943873   10 GiB   36 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.c9652662-36bd-4e74-b822-f7ae10eb7246.snap.1684330702.6955926  10 GiB   28 MiB
volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.7e9a48db-b513-40d8-8018-f73ef52cb025.snap.1684331929.0653753  10 GiB  1.1 GiB

This is just confusing when I look at the backup list: they all say 10G. I
wish it would show the actual numbers from ceph instead of 10G. I can
understand why, but it's hard to explain that to customers :(

# openstack volume backup list
+--------------------------------------+---------------------+-------------+-----------+------+
| ID                                   | Name                | Description | Status    | Size |
+--------------------------------------+---------------------+-------------+-----------+------+
| ec141929-cc74-459a-b8e7-03f016df9cec | spatel-vol-backup-4 | None        | available |   10 |
| 7e9a48db-b513-40d8-8018-f73ef52cb025 | spatel-vol-backup-3 | None        | available |   10 |
| c9652662-36bd-4e74-b822-f7ae10eb7246 | spatel-vol-backup-2 | None        | available |   10 |
| 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
| 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
+--------------------------------------+---------------------+-------------+-----------+------+
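In the meantime, something along these lines gives me real numbers to show
customers. This is only a rough sketch, not cinder functionality: the pool
name "backups" and the assumption that the backup ID appears in the RBD
snapshot name (backup.<backup-id>.snap.<timestamp>) are taken from the rbd du
output above and may differ in other deployments.

#!/usr/bin/env bash
# For every cinder backup, print the space it actually consumes in the ceph
# backup pool by matching the backup ID against the RBD snapshot names that
# cinder-backup creates.
POOL=backups   # assumption: the ceph pool used by cinder-backup

openstack volume backup list -f value -c ID -c Name | while read -r id name; do
    # "rbd du" prints NAME PROVISIONED USED; the snapshot for a backup carries
    # the backup ID in its name, so take the last two fields (USED value and
    # unit) of the matching line.
    used=$(rbd -p "$POOL" du 2>/dev/null | awk -v id="$id" '$1 ~ id {print $(NF-1), $NF}' | tail -n1)
    printf '%-36s  %-22s  used in ceph: %s\n' "$id" "$name" "${used:-unknown}"
done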



On Wed, May 17, 2023 at 9:49 AM Eugen Block <eblock at nde.ag> wrote:

> I don't think cinder is lying. Firstly, the "backup create" dialog states:
>
> > Backups will be the same size as the volume they originate from.
>
> Secondly, I believe how a backup is created depends heavily on the actual
> storage backend and its implementation. On a different storage backend it
> might look completely different. We've been on ceph from the beginning, so
> I can't comment on alternatives.
>
> As for your question, what exactly did your dd command look like? Did
> you fill up the file with zeroes (dd if=/dev/zero)? In that case the
> low used-bytes number would make sense; if you filled it up with random
> data, the du size should be higher.
>
> Quoting Satish Patel <satish.txt at gmail.com>:
>
> > Thank you Eugen,
> >
> > I am noticing the same thing you noticed. That means cinder is either
> > lying to us or doesn't know how to account for the size of copy-on-write
> > backups.
> >
> > One more question: I created a 1G file using the dd command and took an
> > incremental backup, and ceph only shows 28 MiB for that backup. Does that
> > sound right?
> >
> >
> > root@ceph1:~# rbd -p backups du
> > NAME                                                                                                                                                          PROVISIONED  USED
> > volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc.snap.1684260707.1682937  10 GiB   68 MiB
> > volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.294b58af-771b-4a9f-bb7b-c37a4f84d678.snap.1684260787.943873   10 GiB   36 MiB
> > volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc@backup.c9652662-36bd-4e74-b822-f7ae10eb7246.snap.1684330702.6955926  10 GiB   28 MiB
> > volume-285a49a6-0e03-49e5-abf1-1c1efbfeb5f2.backup.4351d9d3-85fa-4cd5-b21d-619b3385aefc                                                                      10 GiB    0 B
> >
> > On Wed, May 17, 2023 at 9:26 AM Eugen Block <eblock at nde.ag> wrote:
> >
> >> Hi,
> >>
> >> just to visualize Rajat's response: ceph creates copy-on-write
> >> snapshots, so the incremental backup doesn't really use much space. On
> >> a Victoria cloud I created one full backup of an almost empty volume
> >> (made an ext4 filesystem, mounted it and created one tiny file), then
> >> created a second tiny file and made an incremental backup. This is
> >> what ceph sees:
> >>
> >> ceph01:~ # rbd du <backup_pool>/volume-6662f50a-a74c-47a4-8abd-a49069f3614c
> >> NAME                                                                                                                PROVISIONED  USED
> >> volume-6662f50a-a74c-47a4-8abd-a49069f3614c@backup.650a4f8f-7b61-447e-9eb9-767c74b15342.snap.1684329174.8419683        5 GiB  192 MiB
> >> volume-6662f50a-a74c-47a4-8abd-a49069f3614c@backup.1d358548-5d1d-4e03-9728-bb863c717910.snap.1684329450.9599462        5 GiB   16 MiB
> >> volume-6662f50a-a74c-47a4-8abd-a49069f3614c                                                                            5 GiB    0 B
> >> <TOTAL>
> >>
> >> backup.650a4f8f-7b61-447e-9eb9-767c74b15342 (using 192 MiB) is the
> >> full backup, backup.1d358548-5d1d-4e03-9728-bb863c717910 is the
> >> incremental backup (using 16 MiB).
> >>
> >> Quoting Rajat Dhasmana <rdhasman at redhat.com>:
> >>
> >> > Hi Satish,
> >> >
> >> > Did you check the size of the actual backup file in ceph storage? It
> >> > should be created in the *backups* pool[1].
> >> > Cinder shows the same size for an incremental backup as for a full
> >> > backup, but the actual file size should differ from the size recorded
> >> > in the cinder DB. Also, the file size of an incremental backup should
> >> > not be the same as the file size of the full backup.
> >> >
> >> > [1]
> >> > https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4248eaaa/lib/cinder_backups/ceph#L22
> >> >
> >> > Thanks
> >> > Rajat Dhasmana
> >> >
> >> > On Wed, May 17, 2023 at 12:25 AM Satish Patel <satish.txt at gmail.com>
> >> wrote:
> >> >
> >> >> Folks,
> >> >>
> >> >> I have ceph storage for my openstack and have configured the
> >> >> cinder-volume and cinder-backup services for my disaster recovery
> >> >> solution. I am trying to use the cinder-backup incremental option to
> >> >> save storage space, but somehow it doesn't work the way it should.
> >> >>
> >> >> Whenever I take an incremental backup it shows a similar size to the
> >> >> original volume. Technically it should be smaller. The question is:
> >> >> does ceph support incremental backups with cinder?
> >> >>
> >> >> I am running the Yoga release.
> >> >>
> >> >> $ openstack volume list
> >> >>
> >> >> +--------------------------------------+------------+------------+------+-------------------------------------+
> >> >> | ID                                   | Name       | Status     | Size | Attached to                         |
> >> >> +--------------------------------------+------------+------------+------+-------------------------------------+
> >> >> | 285a49a6-0e03-49e5-abf1-1c1efbfeb5f2 | spatel-vol | backing-up |   10 | Attached to spatel-foo on /dev/sdc  |
> >> >> +--------------------------------------+------------+------------+------+-------------------------------------+
> >> >>
> >> >> ### Create full backup
> >> >> $ openstack volume backup create --name spatel-vol-backup spatel-vol --force
> >> >> +-------+--------------------------------------+
> >> >> | Field | Value                                |
> >> >> +-------+--------------------------------------+
> >> >> | id    | 4351d9d3-85fa-4cd5-b21d-619b3385aefc |
> >> >> | name  | spatel-vol-backup                    |
> >> >> +-------+--------------------------------------+
> >> >>
> >> >> ### Create incremental
> >> >> $ openstack volume backup create --name spatel-vol-backup-1 --incremental --force spatel-vol
> >> >> +-------+--------------------------------------+
> >> >> | Field | Value                                |
> >> >> +-------+--------------------------------------+
> >> >> | id    | 294b58af-771b-4a9f-bb7b-c37a4f84d678 |
> >> >> | name  | spatel-vol-backup-1                  |
> >> >> +-------+--------------------------------------+
> >> >>
> >> >> $ openstack volume backup list
> >> >>
> >> >> +--------------------------------------+---------------------+-------------+-----------+------+
> >> >> | ID                                   | Name                | Description | Status    | Size |
> >> >> +--------------------------------------+---------------------+-------------+-----------+------+
> >> >> | 294b58af-771b-4a9f-bb7b-c37a4f84d678 | spatel-vol-backup-1 | None        | available |   10 |
> >> >> | 4351d9d3-85fa-4cd5-b21d-619b3385aefc | spatel-vol-backup   | None        | available |   10 |
> >> >> +--------------------------------------+---------------------+-------------+-----------+------+
> >> >>
> >> >>
> >> >> My incremental backup still shows a size of 10G, whereas it should be
> >> >> smaller than the first (full) backup.

