[stein][cinder][backup][swift] issue

Ignazio Cassano ignaziocassano at gmail.com
Fri Jan 3 09:28:57 UTC 2020


Thanks, Matt.
When I add
rgw keystone implicit tenants = true
containers are created with the project id/name.
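For reference, with that option each project gets its own namespace, so the
same container name maps to different per-project paths (the project ids
below are placeholders):

    http://10.102.184.190:8080/swift/v1/AUTH_<admin-project-id>/volumebackups
    http://10.102.184.190:8080/swift/v1/AUTH_<demo-project-id>/volumebackups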
Regards
Ignazio

On Thu, Jan 2, 2020 at 11:32 PM Matthew Oliver <matt at oliver.net.au>
wrote:

> Tim, as always, has hit the nail on the head. By default rgw doesn't use
> explicit tenants.
> If you want to use RGW with explicit tenants, i.e. no global container
> namespace, then you need to add:
>
>   rgw keystone implicit tenants = true
>
> to your RGW client configuration in ceph.conf.
>
> See:
> https://docs.ceph.com/docs/master/radosgw/multitenancy/#swift-with-keystone
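>
> After changing ceph.conf you'll likely need to restart the rgw daemon so
> it picks up the option. A sketch, assuming a systemd-based deployment and
> the instance name from your [client.rgw.tst2-osctrl01] section:
>
>   systemctl restart ceph-radosgw@rgw.tst2-osctrl01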
>
> Not sure what happens to existing containers when you enable this option,
> because I think by default things are considered to be in the 'default'
> tenant.
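>
> If you want to check, something like this on the rgw node should show the
> owner of the existing bucket (a sketch, assuming admin access to the
> cluster):
>
>   radosgw-admin bucket stats --bucket=volumebackups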
>
> matt
>
> On Thu, Jan 2, 2020 at 5:40 PM Ignazio Cassano <ignaziocassano at gmail.com>
> wrote:
>
>> Many Thanks, Tim
>> Ignazio
>>
>> On Thu, Jan 2, 2020 at 7:10 AM Tim Burke <tim at swiftstack.com>
>> wrote:
>>
>>> Hi Ignazio,
>>>
>>> That's expected behavior with rados gateway. They follow S3's lead and
>>> have a unified namespace for containers across all tenants. From their
>>> documentation [0]:
>>>
>>>     If a container with the same name already exists, and the user is
>>>     the container owner then the operation will succeed. Otherwise the
>>>     operation will fail.
>>>
>>> FWIW, that's very much a Ceph-ism -- Swift proper allows each tenant
>>> full and independent control over their namespace.
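>>>
>>> As a quick illustration (just a sketch using the standard swift client,
>>> with credentials sourced for each project):
>>>
>>>     swift post volumebackups    # as project admin: succeeds
>>>     swift post volumebackups    # as project demo: 409 Conflict on rgw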
>>>
>>> Tim
>>>
>>> [0]
>>>
>>> https://docs.ceph.com/docs/mimic/radosgw/swift/containerops/#http-response
>>>
>>> On Mon, 2019-12-30 at 15:48 +0100, Ignazio Cassano wrote:
>>> > Hello All,
>>> > I configured openstack stein on centos 7 with ceph.
>>> > Cinder works fine and object storage on ceph seems to work fine: I
>>> > can create containers, volumes, etc.
>>> >
>>> > I configured cinder backup on swift (but swift here is the ceph rados
>>> > gateway):
>>> >
>>> > backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
>>> > swift_catalog_info = object-store:swift:publicURL
>>> > backup_swift_enable_progress_timer = True
>>> > #backup_swift_url = http://10.102.184.190:8080/v1/AUTH_
>>> > backup_swift_auth_url = http://10.102.184.190:5000/v3
>>> > backup_swift_auth = per_user
>>> > backup_swift_auth_version = 1
>>> > backup_swift_user = admin
>>> > backup_swift_user_domain = default
>>> > #backup_swift_key = <None>
>>> > #backup_swift_container = volumebackups
>>> > backup_swift_object_size = 52428800
>>> > #backup_swift_project = <None>
>>> > #backup_swift_project_domain = <None>
>>> > backup_swift_retry_attempts = 3
>>> > backup_swift_retry_backoff = 2
>>> > backup_compression_algorithm = zlib
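>>> >
>>> > For reference, the backups described below are triggered with the
>>> > usual client, e.g. something like this (the volume id is a
>>> > placeholder):
>>> >
>>> > openstack volume backup create --name test-backup <volume-id>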
>>> >
>>> > If I run a backup as user admin, it creates a container named
>>> > "volumebackups".
>>> > If I run a backup as user demo and I do not specify a container name,
>>> > it tries to write to volumebackups and gives some errors:
>>> >
>>> > ClientException: Container PUT failed:
>>> >
>>> http://10.102.184.190:8080/swift/v1/AUTH_964f343cf5164028a803db91488bdb01/volumebackups
>>> > 409 Conflict   BucketAlreadyExists
>>> >
>>> >
>>> > Does it mean I cannot use the same container name on different
>>> > projects?
>>> >
>>> > My ceph.conf is configured for using keystone:
>>> > [client.rgw.tst2-osctrl01]
>>> > rgw_frontends = "civetweb port=10.102.184.190:8080"
>>> > # Keystone information
>>> > rgw keystone api version = 3
>>> > rgw keystone url = http://10.102.184.190:5000
>>> > rgw keystone admin user = admin
>>> > rgw keystone admin password = password
>>> > rgw keystone admin domain = default
>>> > rgw keystone admin project = admin
>>> > rgw swift account in url = true
>>> > rgw keystone implicit tenants = true
>>> >
>>> >
>>> >
>>> > Any help, please?
>>> > Best Regards
>>> > Ignazio
>>>
>>>

