Greetings, which object storage system should we choose for integration with an OpenStack environment: Ceph or Swift? Thank you.
Either one will work. First, define what criteria you have.

-----Original Message-----
From: Tessa Plum <tessa@plum.ovh>
Sent: Friday, April 3, 2020 8:47 PM
To: openstack-discuss@lists.openstack.org
Subject: choosing object storage
[...]
There are a number of considerations (disclaimer: we run Ceph block and Swift object storage).

Purely on the level of simplicity, Swift is easier to set up. However, if you are already using Ceph for block storage, then it makes sense to keep using it for object storage too, since you are likely to be expert at Ceph by that point.

On the other hand, if you have multiple Ceph clusters and want a geo-replicated object storage solution, then doing this with Swift is much easier than with Ceph (geo-replicated RGW still looks to be really complex to set up: a long page of arcane commands).

Finally (this is my 'big deal' point): I'd like my block and object storage to be completely independent. Suppose a situation nukes my block storage (Ceph). If my object storage is Swift, then people's backups etc. are still viable, and when the Ceph cluster is rebuilt we can restore and continue. On the other hand, if your object storage is Ceph too, then....

regards
Mark

On 4/04/20 2:47 pm, Tessa Plum wrote:
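As an illustration of Mark's point about Swift's setup simplicity, building the storage rings is the core of a manual Swift deployment. A rough sketch of building a single object ring (the IP address and device name are hypothetical, and a real deployment also needs account and container rings plus the proxy and storage configs):

```
# Create an object ring: 2^10 partitions, 3 replicas,
# 1 hour minimum between partition moves.
swift-ring-builder object.builder create 10 3 1

# Add a storage device (region 1, zone 1, hypothetical node/device).
swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sdb1 100

# Distribute partitions across the added devices.
swift-ring-builder object.builder rebalance
```

The resulting object.ring.gz file is then copied to every proxy and storage node.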
greetings,
Which object storage system should we choose for integration with an OpenStack environment: Ceph or Swift?
Thank you.
On Mon, 6 Apr 2020 18:47:36 +1200 Mark Kirkwood <mark.kirkwood@catalyst.net.nz> wrote:
There are a number of considerations (disclaimer we run Ceph block and Swift object storage): [...]
Mark, could you provide some numbers for cluster size, number of objects, and the rate of change for both Ceph and Swift? I imagine some of it may be proprietary, but perhaps the rate of ingestion is available? E.g. Swift is NN% today, Ceph is MM%, rate of growth is XX%?

The last time we had a Summit presentation on the topic, it was by the San Diego Supercomputing Center. The last time I saw anyone publish their Swift data at all, it was Turkcell, who had a 36PB cluster and planned to grow it to 50PB by the end of 2019. They started that cluster in the Icehouse release with 250GB drives!

Thanks,
-- Pete
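For reference, the Turkcell figures above imply a planned growth rate that can be computed directly (a quick sketch; the 36PB and 50PB numbers come from the post, everything else is arithmetic):

```python
# Turkcell's published Swift cluster size and target.
current_pb = 36   # cluster size at the time of the talk, in PB
planned_pb = 50   # planned size by end of 2019, in PB

# Planned growth as a fraction of the current size.
growth = (planned_pb - current_pb) / current_pb
print(f"planned growth: {growth:.1%}")  # about 38.9%
```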
Pete Zaitcev wrote:
Last time I saw anyone publish their Swift data at all, it was Turkcell who had a 36PB cluster and planned to grow it to 50PB by end of 2019. They started that cluster in Icehouse release with 250GB drives!
From my experience, for a pure object storage requirement, Swift is much simpler than Ceph, in both deployment and operations. We have Ceph as block storage only, but Swift as object storage for S3 access etc. We have 500TB of data stored in Swift. Regards.
Wesley Peng wrote:
From my experience, for a pure object storage requirement, Swift is much simpler than Ceph [...]
BTW, we have Swift and Ceph deployed separately, but we don't have OpenStack in our environment; we use a cloud architecture we developed ourselves. Ceph and Swift are used as the storage products, and they are good enough. regards.
participants (5)
- Arkady.Kanevsky@dell.com
- Mark Kirkwood
- Pete Zaitcev
- Tessa Plum
- Wesley Peng