[Solved][Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW for Swift and S3 access

wodel youchi wodel.youchi at gmail.com
Tue Apr 19 15:10:00 UTC 2022


Hi,

Many thanks; after changing the endpoints it worked.
So the question is: why did kolla-ansible not create the correct URLs? Did I
miss something?
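
For the record, the fix boiled down to pointing the three object-store
endpoints at the /swift/v1 path. A minimal sketch of the kind of commands
involved (the endpoint IDs are the ones from the listing further down in the
thread; adjust URLs and IDs to your own deployment):

  # admin and internal endpoints (internal FQDN)
  openstack endpoint set --url 'https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s' 4082b4acf8bc4e4c9efc6e2d0e293724
  openstack endpoint set --url 'https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s' b13a2f53e13e4650b4efdb8184eb0211
  # public endpoint (external FQDN)
  openstack endpoint set --url 'https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s' f85b36ff9a2b49bc9eaadf1aafdee28c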

Regards.

On Tue, 19 Apr 2022 at 11:09, wodel youchi <wodel.youchi at gmail.com>
wrote:

> Hi,
> Thanks.
>
> The endpoints were created by Kolla-ansible upon deployment.
>
> I did configure kolla-ansible to enable cross-project tenant access by
> setting:
> ceph_rgw_swift_account_in_url: true
>
> And I did add rgw_swift_account_in_url = true to ceph.conf on the RADOS
> Gateway servers. But the endpoints were created by kolla.
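>
> A quick way to double-check that both settings ended up where they should (a
> sketch only; adjust the globals.yml path to your deployment layout):
>
> # on the deployment host
> grep ceph_rgw_swift_account_in_url /etc/kolla/globals.yml
> # on each RGW host
> grep rgw_swift_account_in_url /etc/ceph/ceph.conf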
>
> I will modify them and try again.
>
> Regards.
>
> On Tue, 19 Apr 2022 at 08:12, Buddhika S. Godakuru - University of
> Kelaniya <bsanjeewa at kln.ac.lk> wrote:
>
>> Dear Wodel,
>> I think the default endpoint path for Swift when using Ceph RGW is /swift/v1
>> (unless you have changed it in Ceph),
>> so your endpoints should be
>> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | object-store | True | admin    | https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s |
>> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | object-store | True | internal | https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s |
>> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | object-store | True | public   | https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s    |
>>
>>
>> See
>> https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access
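>>
>> You can also check what clients will actually resolve from the catalog with
>> something like:
>>
>> openstack catalog show object-store
>> openstack endpoint list --service object-store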
>>
>> On Mon, 18 Apr 2022 at 23:52, wodel youchi <wodel.youchi at gmail.com>
>> wrote:
>>
>>> Hi,
>>> I am having trouble configuring OpenStack to use Ceph RGW as the object
>>> store backend for Swift and S3.
>>>
>>> My setup is HCI: I have 3 controllers, which are also my Ceph mgrs, mons
>>> and RGWs, and 9 compute/storage servers (OSDs).
>>> Xena is deployed with Ceph Pacific.
>>>
>>> The Ceph public network is a private network on VLAN 10 with 10.10.1.0/24
>>> as its subnet.
>>>
>>> Here is a snippet from my globals.yml:
>>>
>>>> ---
>>>> kolla_base_distro: "centos"
>>>> kolla_install_type: "source"
>>>> openstack_release: "xena"
>>>> kolla_internal_vip_address: "10.10.3.1"
>>>> kolla_internal_fqdn: "dashint.cloud.example.com"
>>>> kolla_external_vip_address: "x.x.x.x"
>>>> kolla_external_fqdn: "dash.cloud.example.com "
>>>> docker_registry: 192.168.1.16:4000
>>>> network_interface: "bond0"
>>>> kolla_external_vip_interface: "bond1"
>>>> api_interface: "bond1.30"
>>>> storage_interface: "bond1.10"       # <---- VLAN10 (public Ceph network)
>>>> tunnel_interface: "bond1.40"
>>>> dns_interface: "bond1"
>>>> octavia_network_interface: "bond1.301"
>>>> neutron_external_interface: "bond2"
>>>> neutron_plugin_agent: "openvswitch"
>>>> keepalived_virtual_router_id: "51"
>>>> kolla_enable_tls_internal: "yes"
>>>> kolla_enable_tls_external: "yes"
>>>> kolla_certificates_dir: "{{ node_config }}/certificates"
>>>> kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem"
>>>> kolla_internal_fqdn_cert: "{{ kolla_certificates_dir
>>>> }}/haproxy-internal.pem"
>>>> kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem"
>>>> kolla_copy_ca_into_containers: "yes"
>>>> kolla_enable_tls_backend: "yes"
>>>> kolla_verify_tls_backend: "no"
>>>> kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem"
>>>> kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem"
>>>> enable_openstack_core: "yes"
>>>> enable_hacluster: "yes"
>>>> enable_haproxy: "yes"
>>>> enable_aodh: "yes"
>>>> enable_barbican: "yes"
>>>> enable_ceilometer: "yes"
>>>> enable_central_logging: "yes"
>>>> enable_ceph_rgw: "yes"
>>>> enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw | bool }}"
>>>> enable_cinder: "yes"
>>>> enable_cinder_backup: "yes"
>>>> enable_collectd: "yes"
>>>> enable_designate: "yes"
>>>> enable_elasticsearch_curator: "yes"
>>>> enable_freezer: "no"
>>>> enable_gnocchi: "yes"
>>>> enable_gnocchi_statsd: "yes"
>>>> enable_magnum: "yes"
>>>> enable_manila: "yes"
>>>> enable_manila_backend_cephfs_native: "yes"
>>>> enable_mariabackup: "yes"
>>>> enable_masakari: "yes"
>>>> enable_neutron_vpnaas: "yes"
>>>> enable_neutron_qos: "yes"
>>>> enable_neutron_agent_ha: "yes"
>>>> enable_neutron_provider_networks: "yes"
>>>> enable_neutron_segments: "yes"
>>>> enable_octavia: "yes"
>>>> enable_trove: "yes"
>>>> external_ceph_cephx_enabled: "yes"
>>>> ceph_glance_keyring: "ceph.client.glance.keyring"
>>>> ceph_glance_user: "glance"
>>>> ceph_glance_pool_name: "images"
>>>> ceph_cinder_keyring: "ceph.client.cinder.keyring"
>>>> ceph_cinder_user: "cinder"
>>>> ceph_cinder_pool_name: "volumes"
>>>> ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring"
>>>> ceph_cinder_backup_user: "cinder-backup"
>>>> ceph_cinder_backup_pool_name: "backups"
>>>> ceph_nova_keyring: "{{ ceph_cinder_keyring }}"
>>>> ceph_nova_user: "cinder"
>>>> ceph_nova_pool_name: "vms"
>>>> ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring"
>>>> ceph_gnocchi_user: "gnocchi"
>>>> ceph_gnocchi_pool_name: "metrics"
>>>> ceph_manila_keyring: "ceph.client.manila.keyring"
>>>> ceph_manila_user: "manila"
>>>> glance_backend_ceph: "yes"
>>>> glance_backend_file: "no"
>>>> gnocchi_backend_storage: "ceph"
>>>> cinder_backend_ceph: "yes"
>>>> cinder_backup_driver: "ceph"
>>>> cloudkitty_collector_backend: "gnocchi"
>>>> designate_ns_record: "cloud.example.com "
>>>> nova_backend_ceph: "yes"
>>>> nova_compute_virt_type: "kvm"
>>>> octavia_auto_configure: yes
>>>> octavia_amp_flavor:
>>>>   name: "amphora"
>>>>   is_public: no
>>>>   vcpus: 1
>>>>   ram: 1024
>>>>   disk: 5
>>>> octavia_amp_network:
>>>>   name: lb-mgmt-net
>>>>   provider_network_type: vlan
>>>>   provider_segmentation_id: 301
>>>>   provider_physical_network: physnet1
>>>>   external: false
>>>>   shared: false
>>>>   subnet:
>>>>     name: lb-mgmt-subnet
>>>>     cidr: "10.7.0.0/16"
>>>>     allocation_pool_start: "10.7.0.50"
>>>>     allocation_pool_end: "10.7.255.200"
>>>>     no_gateway_ip: yes
>>>>     enable_dhcp: yes
>>>>     mtu: 9000
>>>> octavia_amp_network_cidr: 10.10.7.0/24
>>>> octavia_amp_image_tag: "amphora"
>>>> octavia_certs_country: XZ
>>>> octavia_certs_state: Gotham
>>>> octavia_certs_organization: WAYNE
>>>> octavia_certs_organizational_unit: IT
>>>> horizon_keystone_multidomain: true
>>>> elasticsearch_curator_dry_run: "no"
>>>> enable_cluster_user_trust: true
>>>> ceph_rgw_hosts:
>>>>   - host: controllera
>>>>     ip: 10.10.1.5
>>>>     port: 8080
>>>>   - host: controllerb
>>>>     ip: 10.10.1.9
>>>>     port: 8080
>>>>   - host: controllerc
>>>>     ip: 10.10.1.13
>>>>     port: 8080
>>>> ceph_rgw_swift_account_in_url: true
>>>> ceph_rgw_swift_compatibility: true
>>>
>>>
>>>
>>> And here is my Ceph all.yml file:
>>>
>>>> ---
>>>> dummy:
>>>> ceph_release_num: 16
>>>> cluster: ceph
>>>> configure_firewall: False
>>>> monitor_interface: bond1.10
>>>> monitor_address_block: 10.10.1.0/24
>>>> is_hci: true
>>>> hci_safety_factor: 0.2
>>>> osd_memory_target: 4294967296
>>>> public_network: 10.10.1.0/24
>>>> cluster_network: 10.10.2.0/24
>>>> radosgw_interface: "{{ monitor_interface }}"
>>>> radosgw_address_block: 10.10.1.0/24
>>>> nfs_file_gw: true
>>>> nfs_obj_gw: true
>>>> ceph_docker_image: "ceph/daemon"
>>>> ceph_docker_image_tag: latest-pacific
>>>> ceph_docker_registry: 192.168.1.16:4000
>>>> containerized_deployment: True
>>>> openstack_config: true
>>>> openstack_glance_pool:
>>>>   name: "images"
>>>>   pg_autoscale_mode: False
>>>>   application: "rbd"
>>>>   pg_num: 128
>>>>   pgp_num: 128
>>>>   target_size_ratio: 5.00
>>>>   rule_name: "SSD"
>>>> openstack_cinder_pool:
>>>>   name: "volumes"
>>>>   pg_autoscale_mode: False
>>>>   application: "rbd"
>>>>   pg_num: 1024
>>>>   pgp_num: 1024
>>>>   target_size_ratio: 42.80
>>>>   rule_name: "SSD"
>>>> openstack_nova_pool:
>>>>   name: "vms"
>>>>   pg_autoscale_mode: False
>>>>   application: "rbd"
>>>>   pg_num: 256
>>>>   pgp_num: 256
>>>>   target_size_ratio: 10.00
>>>>   rule_name: "SSD"
>>>> openstack_cinder_backup_pool:
>>>>   name: "backups"
>>>>   pg_autoscale_mode: False
>>>>   application: "rbd"
>>>>   pg_num: 512
>>>>   pgp_num: 512
>>>>   target_size_ratio: 18.00
>>>>   rule_name: "SSD"
>>>> openstack_gnocchi_pool:
>>>>   name: "metrics"
>>>>   pg_autoscale_mode: False
>>>>   application: "rbd"
>>>>   pg_num: 32
>>>>   pgp_num: 32
>>>>   target_size_ratio: 0.10
>>>>   rule_name: "SSD"
>>>> openstack_cephfs_data_pool:
>>>>   name: "cephfs_data"
>>>>   pg_autoscale_mode: False
>>>>   application: "cephfs"
>>>>   pg_num: 256
>>>>   pgp_num: 256
>>>>   target_size_ratio: 10.00
>>>>   rule_name: "SSD"
>>>> openstack_cephfs_metadata_pool:
>>>>   name: "cephfs_metadata"
>>>>   pg_autoscale_mode: False
>>>>   application: "cephfs"
>>>>   pg_num: 32
>>>>   pgp_num: 32
>>>>   target_size_ratio: 0.10
>>>>   rule_name: "SSD"
>>>> openstack_pools:
>>>>   - "{{ openstack_glance_pool }}"
>>>>   - "{{ openstack_cinder_pool }}"
>>>>   - "{{ openstack_nova_pool }}"
>>>>   - "{{ openstack_cinder_backup_pool }}"
>>>>   - "{{ openstack_gnocchi_pool }}"
>>>>   - "{{ openstack_cephfs_data_pool }}"
>>>>   - "{{ openstack_cephfs_metadata_pool }}"
>>>> openstack_keys:
>>>>   - { name: client.glance, caps: { mon: "profile rbd", osd: "profile
>>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{
>>>> openstack_glance_pool.name }}"}, mode: "0600" }
>>>>   - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile
>>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{
>>>> openstack_nova_pool.name }}, profile rbd pool={{
>>>> openstack_glance_pool.name }}"}, mode: "0600" }
>>>>   - { name: client.cinder-backup, caps: { mon: "profile rbd", osd:
>>>> "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode:
>>>> "0600" }
>>>>   - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile
>>>> rbd pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", }
>>>>   - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile
>>>> rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{
>>>> openstack_nova_pool.name }}, profile rbd pool={{
>>>> openstack_cinder_pool.name }}, profile rbd pool={{
>>>> openstack_cinder_backup_pool.name }}"}, mode: "0600" }
>>>> dashboard_enabled: True
>>>> dashboard_protocol: https
>>>> dashboard_port: 8443
>>>> dashboard_network: "192.168.1.0/24"
>>>> dashboard_admin_user: admin
>>>> dashboard_admin_user_ro: true
>>>> dashboard_admin_password: ***********
>>>> dashboard_crt: '/home/deployer/work/site-central/chaininv.crt'
>>>> dashboard_key: '/home/deployer/work/site-central/cloud_example.com.priv'
>>>> dashboard_grafana_api_no_ssl_verify: true
>>>> dashboard_rgw_api_user_id: admin
>>>> dashboard_rgw_api_no_ssl_verify: true
>>>> dashboard_frontend_vip: '192.168.1.5'
>>>> node_exporter_container_image: "192.168.1.16:4000/prom/node-exporter:v0.17.0"
>>>> grafana_admin_user: admin
>>>> grafana_admin_password: *********
>>>> grafana_crt: '/home/deployer/work/site-central/chaininv.crt'
>>>> grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv'
>>>> grafana_server_fqdn: 'grafanasrv.cloud.example.com'
>>>> grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4"
>>>> grafana_dashboard_version: pacific
>>>> prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2"
>>>> alertmanager_container_image: "192.168.1.16:4000/prom/alertmanager:v0.16.2"
>>>>
>>>
>>> And my rgws.yml:
>>>
>>>> ---
>>>> dummy:
>>>> copy_admin_key: true
>>>> rgw_create_pools:
>>>>   "{{ rgw_zone }}.rgw.buckets.data":
>>>>     pg_num: 256
>>>>     pgp_num: 256
>>>>     size: 3
>>>>     type: replicated
>>>>     pg_autoscale_mode: False
>>>>     rule_id: 1
>>>>   "{{ rgw_zone }}.rgw.buckets.index":
>>>>     pg_num: 64
>>>>     pgp_num: 64
>>>>     size: 3
>>>>     type: replicated
>>>>     pg_autoscale_mode: False
>>>>     rule_id: 1
>>>>   "{{ rgw_zone }}.rgw.meta":
>>>>     pg_num: 32
>>>>     pgp_num: 32
>>>>     size: 3
>>>>     type: replicated
>>>>     pg_autoscale_mode: False
>>>>     rule_id: 1
>>>>   "{{ rgw_zone }}.rgw.log":
>>>>     pg_num: 32
>>>>     pgp_num: 32
>>>>     size: 3
>>>>     type: replicated
>>>>     pg_autoscale_mode: False
>>>>     rule_id: 1
>>>>   "{{ rgw_zone }}.rgw.control":
>>>>     pg_num: 32
>>>>     pgp_num: 32
>>>>     size: 3
>>>>     type: replicated
>>>>     pg_autoscale_mode: False
>>>>     rule_id: 1
>>>>
>>>
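>>> A quick sanity check that these RGW pools really got created (run on a mon
>>> node; just a sketch):
>>>
>>> # list pools with their replication/pg settings, keeping only the RGW ones
>>> ceph osd pool ls detail | grep rgw
>>>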
>>> The ceph_rgw user was created by kolla:
>>> (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph
>>> | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw
>>>
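>>> Its role assignment can be double-checked with something like the following
>>> (the --names flag is only for readability):
>>>
>>> openstack role assignment list --user ceph_rgw --names
>>>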
>>> This is my ceph.conf from one of my controllers:
>>>
>>>> [root at controllera ~]# cat /etc/ceph/ceph.conf
>>>> [client.rgw.controllera.rgw0]
>>>> host = controllera
>>>> rgw_keystone_url = https://dash.cloud.example.com:5000
>>>> ##Authentication using username, password and tenant. Preferred.
>>>> rgw_keystone_verify_ssl = false
>>>> rgw_keystone_api_version = 3
>>>> rgw_keystone_admin_user = ceph_rgw
>>>> rgw_keystone_admin_password =
>>>> cos2Jcnpnw9BhGwvPm**************************
>>>> rgw_keystone_admin_domain = Default
>>>> rgw_keystone_admin_project = service
>>>> rgw_s3_auth_use_keystone = true
>>>> rgw_keystone_accepted_roles = admin
>>>> rgw_keystone_implicit_tenants = true
>>>> rgw_swift_account_in_url = true
>>>> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring
>>>> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log
>>>> rgw frontends = beast endpoint=10.10.1.5:8080
>>>> rgw thread pool size = 512
>>>> #For Debug
>>>> debug ms = 1
>>>> debug rgw = 20
>>>>
>>>>
>>>> # Please do not change this file directly since it is managed by
>>>> Ansible and will be overwritten
>>>> [global]
>>>> cluster network = 10.10.2.0/24
>>>> fsid = da094354-6ade-415a-a424-************
>>>> mon host = [v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300,v1:
>>>> 10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789]
>>>> mon initial members = controllera,controllerb,controllerc
>>>> osd pool default crush rule = 1
>>>> public network = 10.10.1.0/24
>>>>
>>>
>>>
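>>> On each controller the gateway itself can be sanity-checked with something
>>> along these lines (a sketch; the address is the one from rgw frontends):
>>>
>>> # is beast listening where ceph.conf says it should be?
>>> ss -ltnp | grep 8080
>>> # an anonymous request to the RGW root should come back with an XML response
>>> curl -s http://10.10.1.5:8080
>>>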
>>> Here are my swift endpoints:
>>> (xenavenv) [deployer at rscdeployer ~]$ openstack endpoint list | grep swift
>>> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | object-store | True | admin    | https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
>>> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | object-store | True | internal | https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
>>> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | object-store | True | public   | https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s    |
>>>
>>> When I connect to Horizon -> Project -> Object Store -> Containers, I get
>>> these errors:
>>>
>>>    - Unable to get the swift container listing
>>>    - Unable to fetch the policy details.
>>>
>>> I cannot create a new container from the WebUI; the Storage Policy
>>> parameter is empty.
>>> If I try to create a new container from the CLI, I get this:
>>>  (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh
>>> (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v
>>> START with options: container create demo -v
>>> command: container create ->
>>> openstackclient.object.v1.container.CreateContainer (auth=True)
>>> Using auth plugin: password
>>> Not Found (HTTP 404)
>>> END return value: 1
>>>
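>>> To see exactly which URL the client ends up hitting, the same request can
>>> be replayed by hand (a sketch; <project_id> is a placeholder for the real
>>> project ID):
>>>
>>> # re-run with client-side debugging to print the request URLs
>>> openstack container create demo --debug
>>> # or compare the two path styles directly against the internal endpoint
>>> TOKEN=$(openstack token issue -f value -c id)
>>> curl -i -X PUT -H "X-Auth-Token: $TOKEN" "https://dashint.cloud.example.com:6780/v1/AUTH_<project_id>/demo"
>>> curl -i -X PUT -H "X-Auth-Token: $TOKEN" "https://dashint.cloud.example.com:6780/swift/v1/AUTH_<project_id>/demo"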
>>>
>>> This is the log from the RGW service when I execute the above command:
>>>
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/*
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip,
>>>> deflate
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST=
>>>> dashint.cloud.example.com:6780
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0
>>>> python-requests/2.26.0 CPython/3.8.8
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> HTTP_X_FORWARDED_FOR=10.10.3.16
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> HTTP_X_FORWARDED_PROTO=https
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20
>>>> SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700  1 ====== starting new
>>>> request req=0x7f23221aa620 =====
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700  2 req 728157015944164764
>>>> 0.000000000s initializing for trans_id =
>>>> tx000000a1aeef2b40f759c-00625d4ae3-4b389-default
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764
>>>> 0.000000000s rgw api priority: s3=8 s3website=7
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764
>>>> 0.000000000s host=dashint.cloud.example.com
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764
>>>> 0.000000000s subdomain= domain= in_hosted_domain=0
>>>> in_hosted_domain_s3website=0
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764
>>>> 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0
>>>> in_hosted_domain_s3website=0 s->info.domain=
>>>> s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764
>>>> 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764
>>>> 0.000000000s handler=22RGWHandler_REST_Obj_S3
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700  2 req 728157015944164764
>>>> 0.000000000s getting op 1
>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700  1 -- 10.10.1.13:0/2715436964
>>>> --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] --
>>>> osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call
>>>> version.read in=11b,getxattrs,stat] snapc 0=[]
>>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con
>>>> 0x56055e53b000
>>>> 2022-04-18T12:26:27.996+0100 7f230d002700  1 -- 10.10.1.13:0/2715436964
>>>> <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516
>>>> script.prerequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such
>>>> file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con
>>>> 0x56055e53b000
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764
>>>> 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764
>>>> 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764
>>>> 0.001000002s s3:put_obj verifying requester
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764
>>>> 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t:
>>>> trying rgw::auth::s3::AWSAuthStrategy
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764
>>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying
>>>> rgw::auth::s3::S3AnonymousEngine
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764
>>>> 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764
>>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764
>>>> 0.001000002s s3:put_obj normalizing buckets and tenants
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764
>>>> 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo
>>>> s->bucket=v1
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764
>>>> 0.001000002s s3:put_obj init permissions
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764
>>>> 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000
>>>> obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764
>>>> 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss
>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700  1 -- 10.10.1.13:0/2715436964
>>>> --> [v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] --
>>>> osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read
>>>> in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8
>>>> -- 0x56055eb2cc00 con 0x56055e585000
>>>> 2022-04-18T12:26:27.997+0100 7f230c801700  1 -- 10.10.1.13:0/2715436964
>>>> <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1
>>>> [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory))
>>>> v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000
>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764
>>>> 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1
>>>> info.flags=0x0
>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764
>>>> 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end
>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764
>>>> 0.002000004s s3:put_obj init_permissions on <NULL> failed, ret=-2002
>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700  1 req 728157015944164764
>>>> 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002
>>>> 2022-04-18T12:26:27.997+0100 7f22dbfa0700  1 -- 10.10.1.13:0/2715436964
>>>> --> [v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] --
>>>> osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call
>>>> version.read in=11b,getxattrs,stat] snapc 0=[]
>>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con
>>>> 0x56055e94c800
>>>> 2022-04-18T12:26:27.998+0100 7f230d002700  1 -- 10.10.1.13:0/2715436964
>>>> <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518
>>>> script.postrequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such
>>>> file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con
>>>> 0x56055e94c800
>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700  2 req 728157015944164764
>>>> 0.003000006s s3:put_obj op status=0
>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700  2 req 728157015944164764
>>>> 0.003000006s s3:put_obj http status=404
>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700  1 ====== req done
>>>> req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ======
>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700  1 beast: 0x7f23221aa620:
>>>> 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT
>>>> /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 -
>>>> "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0
>>>> CPython/3.8.8" - latency=0.003000006s
>>>>
>>>
>>> Could you help, please?
>>>
>>> Regards.
>>>
>>
>>
>> --
>>
>> බුද්ධික සංජීව ගොඩාකුරු
>> Buddhika Sanjeewa Godakuru
>>
>> Systems Analyst/Programmer
>> Deputy Webmaster / University of Kelaniya
>>
>> Information and Communication Technology Centre (ICTC)
>> University of Kelaniya, Sri Lanka,
>> Kelaniya,
>> Sri Lanka.
>>
>> Mobile : (+94) 071 5696981
>> Office : (+94) 011 2903420 / 2903424
>>
>>
>

