<div dir="ltr"><div>Hi,</div><div>I am having trouble configuring Openstack to use Ceph RGW as the Object store backend for Swift and S3.</div><div><br></div><div></div><div>My setup is an HCI, I have 3 controllers which are also my ceph mgrs, mons and rgws and 9 compte/storage servers (osds).</div><div>Xena is deployed with Ceph Pacific.</div><div><br></div><div>Ceph public network is a private network on vlan10 with <a href="http://10.10.1.0/24">10.10.1.0/24</a> as a subnet.</div><div><br></div><div>Here is a snippet from my globals.yml :</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">---<br>kolla_base_distro: "centos"<br>kolla_install_type: "source"<br>openstack_release: "xena"<br>kolla_internal_vip_address: "10.10.3.1"<br>kolla_internal_fqdn: "<a href="http://dashint.cloud.example.com">dashint.cloud.example.com</a>"<br>kolla_external_vip_address: "x.x.x.x"<br>kolla_external_fqdn: "<a href="http://dash.cloud.example.com">dash.cloud.example.com</a>

"<br>docker_registry: <a href="http://192.168.1.16:4000">192.168.1.16:4000</a><br>network_interface: "bond0"<br>kolla_external_vip_interface: "bond1"<br>api_interface: "bond1.30"<br><b>storage_interface: "bond1.10"       <---------------- VLAN10 (public ceph network)</b><br>tunnel_interface: "bond1.40"<br>dns_interface: "bond1"<br>octavia_network_interface: "bond1.301"<br>neutron_external_interface: "bond2"<br>neutron_plugin_agent: "openvswitch"<br>keepalived_virtual_router_id: "51"<br>kolla_enable_tls_internal: "yes"<br>kolla_enable_tls_external: "yes"<br>kolla_certificates_dir: "{{ node_config }}/certificates"<br>kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem"<br>kolla_internal_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy-internal.pem"<br>kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem"<br>kolla_copy_ca_into_containers: "yes"<br>kolla_enable_tls_backend: "yes"<br>kolla_verify_tls_backend: "no"<br>kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem"<br>kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem"<br>enable_openstack_core: "yes"<br>enable_hacluster: "yes"<br>enable_haproxy: "yes"<br>enable_aodh: "yes"<br>enable_barbican: "yes"<br>enable_ceilometer: "yes"<br>enable_central_logging: "yes"<br><b>enable_ceph_rgw: "yes"<br>enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw | bool }}"</b><br>enable_cinder: "yes"<br>enable_cinder_backup: "yes"<br>enable_collectd: "yes"<br>enable_designate: "yes"<br>enable_elasticsearch_curator: "yes"<br>enable_freezer: "no"<br>enable_gnocchi: "yes"<br>enable_gnocchi_statsd: "yes"<br>enable_magnum: "yes"<br>enable_manila: "yes"<br>enable_manila_backend_cephfs_native: "yes"<br>enable_mariabackup: "yes"<br>enable_masakari: "yes"<br>enable_neutron_vpnaas: "yes"<br>enable_neutron_qos: "yes"<br>enable_neutron_agent_ha: "yes"<br>enable_neutron_provider_networks: "yes"<br>enable_neutron_segments: "yes"<br>enable_octavia: "yes"<br>enable_trove: "yes"<br>external_ceph_cephx_enabled: "yes"<br>ceph_glance_keyring: "ceph.client.glance.keyring"<br>ceph_glance_user: "glance"<br>ceph_glance_pool_name: "images"<br>ceph_cinder_keyring: "ceph.client.cinder.keyring"<br>ceph_cinder_user: "cinder"<br>ceph_cinder_pool_name: "volumes"<br>ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring"<br>ceph_cinder_backup_user: "cinder-backup"<br>ceph_cinder_backup_pool_name: "backups"<br>ceph_nova_keyring: "{{ ceph_cinder_keyring }}"<br>ceph_nova_user: "cinder"<br>ceph_nova_pool_name: "vms"<br>ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring"<br>ceph_gnocchi_user: "gnocchi"<br>ceph_gnocchi_pool_name: "metrics"<br>ceph_manila_keyring: "ceph.client.manila.keyring"<br>ceph_manila_user: "manila"<br>glance_backend_ceph: "yes"<br>glance_backend_file: "no"<br>gnocchi_backend_storage: "ceph"<br>cinder_backend_ceph: "yes"<br>cinder_backup_driver: "ceph"<br>cloudkitty_collector_backend: "gnocchi"<br>designate_ns_record: "<a href="http://cloud.example.com">cloud.example.com</a>

"<br>nova_backend_ceph: "yes"<br>nova_compute_virt_type: "kvm"<br>octavia_auto_configure: yes<br>octavia_amp_flavor:<br>  name: "amphora"<br>  is_public: no<br>  vcpus: 1<br>  ram: 1024<br>  disk: 5<br>octavia_amp_network:<br>  name: lb-mgmt-net<br>  provider_network_type: vlan<br>  provider_segmentation_id: 301<br>  provider_physical_network: physnet1<br>  external: false<br>  shared: false<br>  subnet:<br>    name: lb-mgmt-subnet<br>    cidr: "<a href="http://10.7.0.0/16">10.7.0.0/16</a>"<br>    allocation_pool_start: "10.7.0.50"<br>    allocation_pool_end: "10.7.255.200"<br>    no_gateway_ip: yes<br>    enable_dhcp: yes<br>    mtu: 9000<br>octavia_amp_network_cidr: <a href="http://10.10.7.0/24">10.10.7.0/24</a><br>octavia_amp_image_tag: "amphora"<br>octavia_certs_country: XZ<br>octavia_certs_state: Gotham<br>octavia_certs_organization: WAYNE<br>octavia_certs_organizational_unit: IT<br>horizon_keystone_multidomain: true<br>elasticsearch_curator_dry_run: "no"<br>enable_cluster_user_trust: true<br><b>ceph_rgw_hosts:<br>        - host: controllera<br>          ip: 10.10.1.5<br>          port: 8080<br>        - host: controllerb<br>          ip: 10.10.1.9<br>          port: 8080<br>        - host: controllerc<br>          ip: 10.10.1.13<br>          port: 8080<br>ceph_rgw_swift_account_in_url: true<br>ceph_rgw_swift_compatibility: true</b></blockquote><div><br></div><div><br></div><div>And Here is my ceph all.yml file</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>---<br>dummy:<br>ceph_release_num: 16<br>cluster: ceph<br>configure_firewall: False<br><b>monitor_interface: bond1.10</b><br>monitor_address_block: <a href="http://10.10.1.0/24">10.10.1.0/24</a><br>is_hci: true<br>hci_safety_factor: 0.2<br>osd_memory_target: 4294967296<br><b>public_network: <a href="http://10.10.1.0/24">10.10.1.0/24</a></b><br>cluster_network: <a href="http://10.10.2.0/24">10.10.2.0/24</a><br><b>radosgw_interface: "{{ monitor_interface }}"</b><br><b>radosgw_address_block: <a href="http://10.10.1.0/24">10.10.1.0/24</a></b><br>nfs_file_gw: true<br>nfs_obj_gw: true<br>ceph_docker_image: "ceph/daemon"<br>ceph_docker_image_tag: latest-pacific<br>ceph_docker_registry: <a href="http://192.168.1.16:4000">192.168.1.16:4000</a><br>containerized_deployment: True<br>openstack_config: true<br>openstack_glance_pool:<br>  name: "images"<br>  pg_autoscale_mode: False<br>  application: "rbd"<br>  pg_num: 128<br>  pgp_num: 128<br>  target_size_ratio: 5.00<br>  rule_name: "SSD"<br>openstack_cinder_pool:<br>  name: "volumes"<br>  pg_autoscale_mode: False<br>  application: "rbd"<br>  pg_num: 1024<br>  pgp_num: 1024<br>  target_size_ratio: 42.80<br>  rule_name: "SSD"<br>openstack_nova_pool:<br>  name: "vms"<br>  pg_autoscale_mode: False<br>  application: "rbd"<br>  pg_num: 256<br>  pgp_num: 256<br>  target_size_ratio: 10.00<br>  rule_name: "SSD"<br>openstack_cinder_backup_pool:<br>  name: "backups"<br>  pg_autoscale_mode: False<br>  application: "rbd"<br>  pg_num: 512<br>  pgp_num: 512<br>  target_size_ratio: 18.00<br>  rule_name: "SSD"<br>openstack_gnocchi_pool:<br>  name: "metrics"<br>  pg_autoscale_mode: False<br>  application: "rbd"<br>  pg_num: 32<br>  pgp_num: 32<br>  target_size_ratio: 0.10<br>  rule_name: "SSD"<br>openstack_cephfs_data_pool:<br>  name: "cephfs_data"<br>  pg_autoscale_mode: False<br>  application: "cephfs"<br>  pg_num: 256<br>  pgp_num: 256<br>  target_size_ratio: 10.00<br>  rule_name: "SSD"<br>openstack_cephfs_metadata_pool:<br>  
name: "cephfs_metadata"<br>  pg_autoscale_mode: False<br>  application: "cephfs"<br>  pg_num: 32<br>  pgp_num: 32<br>  target_size_ratio: 0.10<br>  rule_name: "SSD"<br>openstack_pools:<br>  - "{{ openstack_glance_pool }}"<br>  - "{{ openstack_cinder_pool }}"<br>  - "{{ openstack_nova_pool }}"<br>  - "{{ openstack_cinder_backup_pool }}"<br>  - "{{ openstack_gnocchi_pool }}"<br>  - "{{ openstack_cephfs_data_pool }}"<br>  - "{{ openstack_cephfs_metadata_pool }}"<br>openstack_keys:<br>  - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool={{ <a href="http://openstack_cinder_pool.name">openstack_cinder_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_glance_pool.name">openstack_glance_pool.name</a> }}"}, mode: "0600" }<br>  - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ <a href="http://openstack_cinder_pool.name">openstack_cinder_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_nova_pool.name">openstack_nova_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_glance_pool.name">openstack_glance_pool.name</a> }}"}, mode: "0600" }<br>  - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: "profile rbd pool={{ <a href="http://openstack_cinder_backup_pool.name">openstack_cinder_backup_pool.name</a> }}"}, mode: "0600" }<br>  - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile rbd pool={{ <a href="http://openstack_gnocchi_pool.name">openstack_gnocchi_pool.name</a> }}"}, mode: "0600", }<br>  - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile rbd pool={{ <a href="http://openstack_glance_pool.name">openstack_glance_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_nova_pool.name">openstack_nova_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_cinder_pool.name">openstack_cinder_pool.name</a> }}, profile rbd pool={{ <a href="http://openstack_cinder_backup_pool.name">openstack_cinder_backup_pool.name</a> }}"}, mode: "0600" }<br>dashboard_enabled: True<br>dashboard_protocol: https<br>dashboard_port: 8443<br>dashboard_network: "<a href="http://192.168.1.0/24">192.168.1.0/24</a>"<br>dashboard_admin_user: admin<br>dashboard_admin_user_ro: true<br>dashboard_admin_password: ***********<br>dashboard_crt: '/home/deployer/work/site-central/chaininv.crt'<br>dashboard_key: '/home/deployer/work/site-central/cloud_example.com.priv'<br>dashboard_grafana_api_no_ssl_verify: true<br>dashboard_rgw_api_user_id: admin<br>dashboard_rgw_api_no_ssl_verify: true<br>dashboard_frontend_vip: '192.168.1.5'<br>node_exporter_container_image: "<a href="http://192.168.1.16:4000/prom/node-exporter:v0.17.0">192.168.1.16:4000/prom/node-exporter:v0.17.0</a>"<br>grafana_admin_user: admin<br>grafana_admin_password: *********<br>grafana_crt: '/home/deployer/work/site-central/chaininv.crt'<br>grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv'<br>grafana_server_fqdn: '<a href="http://grafanasrv.cloud.example.com">grafanasrv.cloud.example.com</a>'<br>grafana_container_image: "<a href="http://192.168.1.16:4000/grafana/grafana:6.7.4">192.168.1.16:4000/grafana/grafana:6.7.4</a>"<br>grafana_dashboard_version: pacific<br>prometheus_container_image: "<a href="http://192.168.1.16:4000/prom/prometheus:v2.7.2">192.168.1.16:4000/prom/prometheus:v2.7.2</a>"<br>alertmanager_container_image: "<a href="http://192.168.1.16:4000/prom/alertmanager:v0.16.2">192.168.1.16:4000/prom/alertmanager:v0.16.2</a>"</div></blockquote><div><br></div><div>And my rgws.yml</div><blockquote 
class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>---<br>dummy:<br>copy_admin_key: true<br>rgw_create_pools:<br>  "{{ rgw_zone }}.rgw.buckets.data":<br>    pg_num: 256<br>    pgp_num: 256<br>    size: 3<br>    type: replicated<br>    pg_autoscale_mode: False<br>    rule_id: 1<br>  "{{ rgw_zone }}.rgw.buckets.index":<br>    pg_num: 64<br>    pgp_num: 64<br>    size: 3<br>    type: replicated<br>    pg_autoscale_mode: False<br>    rule_id: 1<br>  "{{ rgw_zone }}.rgw.meta":<br>    pg_num: 32<br>    pgp_num: 32<br>    size: 3<br>    type: replicated<br>    pg_autoscale_mode: False<br>    rule_id: 1<br>  "{{ rgw_zone }}.rgw.log":<br>    pg_num: 32<br>    pgp_num: 32<br>    size: 3<br>    type: replicated<br>    pg_autoscale_mode: False<br>    rule_id: 1<br>  "{{ rgw_zone }}.rgw.control":<br>    pg_num: 32<br>    pgp_num: 32<br>    size: 3<br>    type: replicated<br>    pg_autoscale_mode: False<br>    rule_id: 1</div></blockquote><div><br></div><div>The ceph_rgw user was created by kolla <br></div><div>(xenavenv) [deployer@rscdeployer ~]$ openstack user list | grep ceph<br>| 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw</div><div><br></div><div>This is my ceph.conf from one of my controllers :<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>[root@controllera ~]# cat /etc/ceph/ceph.conf<br>[client.rgw.controllera.rgw0]<br>host = controllera<br>rgw_keystone_url = <a href="https://dash.cloud.example.com:5000">https://dash.cloud.example.com:5000</a><br>##Authentication using username, password and tenant. Preferred.<br>rgw_keystone_verify_ssl = false<br>rgw_keystone_api_version = 3<br>rgw_keystone_admin_user = ceph_rgw<br>rgw_keystone_admin_password = cos2Jcnpnw9BhGwvPm**************************<br>rgw_keystone_admin_domain = Default<br>rgw_keystone_admin_project = service<br>rgw_s3_auth_use_keystone = true<br>rgw_keystone_accepted_roles = admin<br>rgw_keystone_implicit_tenants = true<br>rgw_swift_account_in_url = true<br>keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring<br>log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log<br>rgw frontends = beast endpoint=<a href="http://10.10.1.5:8080">10.10.1.5:8080</a><br>rgw thread pool size = 512<br>#For Debug<br>debug ms = 1<br>debug rgw = 20<br><br><br># Please do not change this file directly since it is managed by Ansible and will be overwritten<br>[global]<br>cluster network = <a href="http://10.10.2.0/24">10.10.2.0/24</a><br>fsid = da094354-6ade-415a-a424-************<br>mon host = [v2:<a href="http://10.10.1.5:3300">10.10.1.5:3300</a>,v1:<a href="http://10.10.1.5:6789">10.10.1.5:6789</a>],[v2:<a href="http://10.10.1.9:3300">10.10.1.9:3300</a>,v1:<a href="http://10.10.1.9:6789">10.10.1.9:6789</a>],[v2:<a href="http://10.10.1.13:3300">10.10.1.13:3300</a>,v1:<a href="http://10.10.1.13:6789">10.10.1.13:6789</a>]<br>mon initial members = controllera,controllerb,controllerc<br>osd pool default crush rule = 1<br><b>public network = <a href="http://10.10.1.0/24">10.10.1.0/24</a></b></div></blockquote><div><br></div><div><br></div><div>Here are my swift endpoints <br></div><div>(xenavenv) [deployer@rscdeployer ~]$ openstack endpoint list | grep swift<br>| 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift        | object-store    | True    | admin     | <a href="https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s">https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s</a> 
Here are my Swift endpoints:

(xenavenv) [deployer@rscdeployer ~]$ openstack endpoint list | grep swift
| 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | object-store | True | admin    | https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
| b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | object-store | True | internal | https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
| f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | object-store | True | public   | https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s    |

When I open Horizon -> Project -> Object Store -> Containers, I get these errors:

- Unable to get the swift container listing.
- Unable to fetch the policy details.

I cannot create a new container from the web UI; the "Storage policy" parameter is empty. If I try to create a new container from the CLI, I get this:

(xenavenv) [deployer@rscdeployer ~]$ source cephrgw-openrc.sh
(xenavenv) [deployer@rscdeployer ~]$ openstack container create demo -v
START with options: container create demo -v
command: container create -> openstackclient.object.v1.container.CreateContainer (auth=True)
Using auth plugin: password
Not Found (HTTP 404)
END return value: 1
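To take the openstack client out of the equation, the same PUT can be replayed with curl (a sketch; the AUTH_ project id is the one that appears in the RGW log below, and -k is needed because of the internal CA):

TOKEN=$(openstack token issue -f value -c id)
curl -ik -X PUT \
  -H "X-Auth-Token: ${TOKEN}" \
  https://dashint.cloud.example.com:6780/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo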
This is the RGW log for the failing "openstack container create" command:

2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/*
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip, deflate
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST=dashint.cloud.example.com:6780
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0 CPython/3.8.8
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_FORWARDED_FOR=10.10.3.16
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_FORWARDED_PROTO=https
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080
2022-04-18T12:26:27.995+0100 7f22e07a9700  1 ====== starting new request req=0x7f23221aa620 =====
2022-04-18T12:26:27.995+0100 7f22e07a9700  2 req 728157015944164764 0.000000000s initializing for trans_id = tx000000a1aeef2b40f759c-00625d4ae3-4b389-default
2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 0.000000000s rgw api priority: s3=8 s3website=7
2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 0.000000000s host=dashint.cloud.example.com
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 0.000000000s subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0 s->info.domain= s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3
2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 0.000000000s handler=22RGWHandler_REST_Obj_S3
2022-04-18T12:26:27.995+0100 7f22e07a9700  2 req 728157015944164764 0.000000000s getting op 1
2022-04-18T12:26:27.995+0100 7f22e07a9700  1 -- 10.10.1.13:0/2715436964 --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call version.read in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con 0x56055e53b000
2022-04-18T12:26:27.996+0100 7f230d002700  1 -- 10.10.1.13:0/2715436964 <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516 script.prerequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con 0x56055e53b000
2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1
2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3
2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764 0.001000002s s3:put_obj verifying requester
2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: trying rgw::auth::s3::AWSAuthStrategy
2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::S3AnonymousEngine
2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access
2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access
2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764 0.001000002s s3:put_obj normalizing buckets and tenants
2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo s->bucket=v1
2022-04-18T12:26:27.996+0100 7f22ddfa4700  2 req 728157015944164764 0.001000002s s3:put_obj init permissions
2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0
2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss
2022-04-18T12:26:27.996+0100 7f22ddfa4700  1 -- 10.10.1.13:0/2715436964 --> [v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2cc00 con 0x56055e585000
2022-04-18T12:26:27.997+0100 7f230c801700  1 -- 10.10.1.13:0/2715436964 <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1 [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000
2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 info.flags=0x0
2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end
2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 0.002000004s s3:put_obj init_permissions on <NULL> failed, ret=-2002
2022-04-18T12:26:27.997+0100 7f22dd7a3700  1 req 728157015944164764 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002
2022-04-18T12:26:27.997+0100 7f22dbfa0700  1 -- 10.10.1.13:0/2715436964 --> [v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call version.read in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con 0x56055e94c800
2022-04-18T12:26:27.998+0100 7f230d002700  1 -- 10.10.1.13:0/2715436964 <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518 script.postrequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con 0x56055e94c800
2022-04-18T12:26:27.998+0100 7f22d8f9a700  2 req 728157015944164764 0.003000006s s3:put_obj op status=0
2022-04-18T12:26:27.998+0100 7f22d8f9a700  2 req 728157015944164764 0.003000006s s3:put_obj http status=404
2022-04-18T12:26:27.998+0100 7f22d8f9a700  1 ====== req done req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ======
2022-04-18T12:26:27.998+0100 7f22d8f9a700  1 beast: 0x7f23221aa620: 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 - "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0 CPython/3.8.8" - latency=0.003000006s

Two things stand out to me in this trace: the request is dispatched to the S3 handler (get_handler handler=22RGWHandler_REST_Obj_S3), and the Swift-style URL is parsed as an S3 request with s->bucket=v1 and s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo, which then fails with a 404.

Could you help, please?

Regards.