[stein][cinder][backup][swift] issue

Ignazio Cassano ignaziocassano at gmail.com
Thu Jan 2 06:38:09 UTC 2020


Many Thanks, Tim
Ignazio

On Thu, Jan 2, 2020 at 07:10 Tim Burke <tim at swiftstack.com> wrote:

> Hi Ignazio,
>
> That's expected behavior with rados gateway. They follow S3's lead and
> have a unified namespace for containers across all tenants. From their
> documentation [0]:
>
>     If a container with the same name already exists, and the user is
>     the container owner then the operation will succeed. Otherwise the
>     operation will fail.
>
> FWIW, that's very much a Ceph-ism -- Swift proper allows each tenant
> full and independent control over their namespace.
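>
> You can reproduce it outside of cinder with two direct container PUTs
> -- a sketch, reusing the endpoints from your config; the tokens and
> the AUTH_<project_id> suffixes are placeholders for your real values:
>
>     # token for a user scoped to the admin project
>     TOKEN_ADMIN=$(openstack --os-project-name admin token issue -f value -c id)
>     # token for a user scoped to the demo project
>     TOKEN_DEMO=$(openstack --os-project-name demo token issue -f value -c id)
>
>     # the container owner re-PUTting its own container succeeds
>     curl -i -X PUT -H "X-Auth-Token: $TOKEN_ADMIN" \
>       http://10.102.184.190:8080/swift/v1/AUTH_<admin_project_id>/volumebackups
>
>     # a different tenant asking for the same name hits the shared namespace
>     curl -i -X PUT -H "X-Auth-Token: $TOKEN_DEMO" \
>       http://10.102.184.190:8080/swift/v1/AUTH_<demo_project_id>/volumebackups
>     # => HTTP/1.1 409 Conflict, BucketAlreadyExists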
>
> Tim
>
> [0]
> https://docs.ceph.com/docs/mimic/radosgw/swift/containerops/#http-response
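>
> If you need the same driver config to work for both projects, one way
> around it is to give each project its own container name per backup --
> a sketch, assuming the cinder CLI; the volume ID is a placeholder:
>
>     cinder backup-create --container demo-volumebackups <volume-id>
>
> Since each project then writes to a differently-named bucket, the
> shared namespace never collides.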
>
> On Mon, 2019-12-30 at 15:48 +0100, Ignazio Cassano wrote:
> > Hello All,
> > I configured OpenStack Stein on CentOS 7 with Ceph.
> > Cinder works fine, and object storage on Ceph seems to work fine: I
> > can create containers, volumes, etc.
> >
> > I configured cinder backup on swift (but the swift endpoint is the
> > Ceph rados gateway):
> >
> > backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
> > swift_catalog_info = object-store:swift:publicURL
> > backup_swift_enable_progress_timer = True
> > #backup_swift_url = http://10.102.184.190:8080/v1/AUTH_
> > backup_swift_auth_url = http://10.102.184.190:5000/v3
> > backup_swift_auth = per_user
> > backup_swift_auth_version = 1
> > backup_swift_user = admin
> > backup_swift_user_domain = default
> > #backup_swift_key = <None>
> > #backup_swift_container = volumebackups
> > backup_swift_object_size = 52428800
> > #backup_swift_project = <None>
> > #backup_swift_project_domain = <None>
> > backup_swift_retry_attempts = 3
> > backup_swift_retry_backoff = 2
> > backup_compression_algorithm = zlib
> >
> > If I run a backup as user admin, it creates a container named
> > "volumebackups".
> > If I run a backup as user demo and I do not specify a container name,
> > it tries to write to volumebackups and fails with:
> >
> > ClientException: Container PUT failed:
> > http://10.102.184.190:8080/swift/v1/AUTH_964f343cf5164028a803db91488bdb01/volumebackups
> > 409 Conflict   BucketAlreadyExists
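> >
> > In CLI terms, the sequence that triggers this is (a sketch; the
> > volume IDs are placeholders):
> >
> >     # as project admin: creates "volumebackups", backup succeeds
> >     cinder backup-create <admin-volume-id>
> >     # as project demo: the driver tries to PUT "volumebackups" again
> >     cinder backup-create <demo-volume-id>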
> >
> >
> > Does this mean I cannot use the same container name on different
> > projects?
> >
> > My ceph.conf is configured for using keystone:
> > [client.rgw.tst2-osctrl01]
> > rgw_frontends = "civetweb port=10.102.184.190:8080"
> > # Keystone information
> > rgw keystone api version = 3
> > rgw keystone url = http://10.102.184.190:5000
> > rgw keystone admin user = admin
> > rgw keystone admin password = password
> > rgw keystone admin domain = default
> > rgw keystone admin project = admin
> > rgw swift account in url = true
> > rgw keystone implicit tenants = true
> >
> >
> >
> > Any help, please?
> > Best Regards
> > Ignazio
>
>

