cron trigger execution fails with cinder.backups_create
Dear All

We are using Mistral with OpenStack Rocky (with federated users). We could use cron triggers successfully with, for instance, nova.servers_create_image or cinder.volume_snapshots_create.

But we hit an issue with cinder.backups_create. This call stores the backup on our Swift backend (Ceph radosgw). The workflow works when executed directly but fails when executed via cron trigger:

2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29f... 401 Unauthorized AccessDenied

See details below.

Cheers
Francois

2019-09-17 10:46:02.436 8 INFO cinder.backup.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Create backup started, backup: 901e1781-02ad-46d5-8ddf-e5410670cf9f volume: c0022411-59a4-4c7c-9474-c7ea8ccc7691.
2019-09-17 10:46:02.746 20 INFO cinder.api.openstack.wsgi [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] GET http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9d...
2019-09-17 10:46:02.764 20 INFO cinder.api.openstack.wsgi [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9d... returned with HTTP 200
2019-09-17 10:46:03 +0200] "GET /v3/f099965b37ac41489e9cac8c9d208711/os-services HTTP/1.1" 200 2819 18532 "-" "Go-http-client/1.1"
2019-09-17 10:46:03 +0200] "GET /v3/f099965b37ac41489e9cac8c9d208711/snapshots HTTP/1.1" 200 17 23618 "-" "Go-http-client/1.1"
2019-09-17 10:46:03.098 22 INFO cinder.api.openstack.wsgi [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volu...
2019-09-17 10:46:03.150 22 INFO cinder.volume.api [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] Get all volumes completed successfully.
2019-09-17 10:46:03.152 22 INFO cinder.api.openstack.wsgi [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volu... returned with HTTP 200
2019-09-17 10:46:03.162 18 INFO cinder.api.openstack.wsgi [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-s...
2019-09-17 10:46:03.172 18 INFO cinder.api.openstack.wsgi [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-s... returned with HTTP 200
2019-09-17 10:46:03.182 19 INFO cinder.api.openstack.wsgi [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snap...
2019-09-17 10:46:03.197 19 INFO cinder.api.openstack.wsgi [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snap... returned with HTTP 200
2019-09-17 10:46:03.197 19 INFO cinder.volume.api [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] Get all snapshots completed successfully.
2019-09-17 10:46:03.878 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Initialize volume connection completed successfully.
2019-09-17 10:46:04.468 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Terminate volume connection completed successfully.
2019-09-17 10:46:04.501 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Remove volume export completed successfully.

The traceback that follows was scrambled by the list archiver; reassembled in call order it reads:

2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Exception during message handling: ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29f... 401 Unauthorized AccessDenied
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 425, in create_backup
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     self._update_backup_error(backup, six.text_type(err))
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 414, in create_backup
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     updates = self._run_backup(context, backup, volume)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 502, in _run_backup
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     tpool.Proxy(device_path))
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 535, in backup
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     volume_size_bytes) = self._prepare_backup(backup)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 327, in _prepare_backup
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     container = self._create_container(backup)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 226, in _create_container
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     self.put_container(backup.container)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", line 315, in put_container
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     self.conn.put_container(container)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1808, in put_container
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     query_string=query_string)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1722, in _retry
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     service_token=self.service_token, **kwargs)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1061, in put_container
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     raise ClientException.from_response(resp, 'Container PUT failed', body)
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29f... 401 Unauthorized AccessDenied

--
EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich
tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheurer@everyware.ch
web: http://www.everyware.ch
Hello François,

Given your error, are you sure your cron task loads the right config with the right authorized user, or something related?

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
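For reference, one way to check which user and trust a cron trigger will run under is the OpenStack CLI: a sketch assuming the python-mistralclient and python-keystoneclient plugins are installed, with an illustrative trigger name:

    # Show the trigger, its workflow input/params and remaining execution count
    openstack cron trigger show fsc-backup-trigger
    # Mistral runs cron triggers under a Keystone trust created for the owner;
    # verify that a matching trust exists and who the trustor/trustee are
    openstack trust list
    openstack trust show <trust-id>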
Hi Herve

Thank you for your reply. I am using the same input & params as when executing the workflow directly from Horizon (successfully):

Input
{
  "incremental": "false",
  "force": "true",
  "name": "fsc-create-vol-backup",
  "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691"
}

Params
{
  "namespace": "",
  "env": {},
  "task_name": "create_vol_backup_task"
}

Maybe I need some additional params when executing via cron? I will try specifying the objectstore container explicitly.

Best Regards
Francois
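For comparison, a cron trigger carrying that same input and those params could be created like this (a sketch; the trigger name and schedule are illustrative):

    openstack cron trigger create \
      --pattern "0 3 * * *" \
      --params '{"task_name": "create_vol_backup_task"}' \
      fsc-backup-trigger create_vol_backup \
      '{"incremental": "false", "force": "true", "name": "fsc-create-vol-backup", "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691"}'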
Hello Hervé

I tried again, this time explicitly defining all parameters, including action_region and snapshot_id.

The results were the same as before: it works when executing the workflow directly but fails with a cron trigger. Or, to be more precise, the cron trigger execution "succeeds" but the resulting volume backup fails:

(.venv) ewfsc@ewos1-kolla1-stage:~$ openstack volume backup show -f json abe96cb1-a5e1-4035-87dd-b4292101a921
{
  "status": "error",
  "object_count": 0,
  "fail_reason": "Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29f... 401 Unauthorized AccessDenied",
  "description": null,
  "name": "fsc-vol-1-img-vol-bak",
  "availability_zone": "ch-zh1-az1",
  "created_at": "2019-09-19T13:15:02.000000",
  "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691",
  "updated_at": "2019-09-19T13:15:04.000000",
  "data_timestamp": "2019-09-19T12:38:02.000000",
  "has_dependent_backups": false,
  "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114",
  "container": "volumebackups",
  "size": 1,
  "id": "abe96cb1-a5e1-4035-87dd-b4292101a921",
  "is_incremental": false
}

Best Regards
Francois

Details:

Workflow

---
version: "2.0"

create_vol_backup:
  type: direct
  input:
    - volume_id
    - container
    - name
    - incremental
    - force
    - action_region
    - snapshot_id
  tasks:
    create_vol_backup:
      action: cinder.backups_create volume_id=<% $.volume_id %> name=<% $.name %> container=<% $.container %> incremental=<% $.incremental %> force=<% $.force %> action_region=<% $.action_region %> snapshot_id=<% $.snapshot_id %>
      publish:
        backup_id: <% task(create_vol_backup).result %>
        create_state: SUCCESS
      publish-on-error:
        create_state: ERROR

Input

{
  "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691",
  "container": "volumebackups",
  "name": "fsc-vol-1-img-vol-bak",
  "incremental": "false",
  "force": "true",
  "action_region": "ch-zh1",
  "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114"
}

Params

{
  "namespace": "",
  "env": {},
  "task_name": "create_vol_backup_task"
}
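Both execution paths can be compared from the CLI, since a run spawned by a cron trigger appears in the same execution list as a manual one (a sketch; the placeholder stands for the input JSON above):

    # manual run (works)
    openstack workflow execution create create_vol_backup '<input-json-as-above>'
    # cron-triggered runs show up here too; compare state and output
    openstack workflow execution list
    openstack workflow execution show <execution-id>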
Thanks François for your reply.

Have you seen the original authentication error during this execution? If not, then I guess you missed some params during your first tries, which introduced the authentication issue. I guess then that the volume backup failure is another issue, not related to the first authentication issue...
Hi,

This makes no sense: the Swift connection credentials don't depend on the OpenStack user calling the service, they are internal to the Backup service.

If after this error you can still create a backup manually, then the backup service works fine, and the swiftclient as well (since we rely on it not failing the create call for an existing container).

I would start by checking the Swift logs to see why this request is rejected while the manual one isn't.

Cheers,
Gorka.
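For context, the Swift connection used by the chunked backup driver is configured in cinder.conf; a sketch of the relevant options follows (option names are real, values illustrative). With backup_swift_auth = per_user, the default, the backup service authenticates to Swift with the token of the user who requested the backup, while single_user switches to fixed service credentials:

    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift
    backup_swift_url = http://rgw.service.stage.i.ewcs.ch/swift/v1
    backup_swift_container = volumebackups
    # per_user: reuse the requesting user's token; single_user: fixed credentials
    backup_swift_auth = per_user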
Dear Gorka and Hervé

Thanks for your hints. I have set the debug log level on radosgw. I will retest now and post the results here.

Cheers
Francois
Hi,

Sorry, I may have missed something in the conversation: weren't you using Swift? I think you need to see the Swift logs as well, since that's the API service that complained about the authorization.

Cheers,
Gorka.
Hi Gorka

We have a Swift endpoint set up on OpenStack, which points to our Ceph radosgw backend. Radosgw provides S3 & Swift, so the Swift logs here are actually the radosgw logs.

Cheers
Francois
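For reference, that delegation is configured on the radosgw side roughly like this (a sketch of real ceph.conf options with illustrative values):

    [client.rgw.gateway]
    # radosgw validates each Swift token against Keystone
    rgw_keystone_url = https://keystone.service.stage.ewcs.ch
    rgw_keystone_api_version = 3
    rgw_keystone_accepted_roles = member,admin
    rgw_keystone_verify_ssl = false
    # URLs of the form /swift/v1/AUTH_<project> carry the account in the URL
    rgw_swift_account_in_url = true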
Hi,

OK, thanks for the clarification. Then I assume you prefer the Swift backup driver over the Ceph one because you are using one of the OpenStack releases that had trouble with incremental backups on the Ceph backup driver.

Cheers,
Gorka.
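For completeness, the native Ceph alternative mentioned here is selected in cinder.conf (a sketch using the Rocky-era module path; pool and user names are illustrative):

    [DEFAULT]
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_pool = backups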
Hi Gorka
> Then I assume you prefer the Swift backup driver over the Ceph one because you are using one of the OpenStack releases that had trouble with Incremental Backups on the Ceph backup driver.
You are probably right, but I cannot answer that because I was not involved in that decision.

OK, in the radosgw logs I see this:

2019-09-20 15:40:06.805529 7f19edb9b700 20 token_id=gAAAAABdhNauRvNev5P90ovX7_cb5_4MkY1tg5JHFpAH8JL-_0vDs06lHW5F9Iphua7fxCWTxxdL-0fRzhR8We_nN6Hx9z3FTWcTXLUMtIUPe0WMKQgW6JkUTP8RwSjAfF4W04OztEg3VAUGN_5gWRlBX-KT9uypnEszadG1yA7gpjkCokNnD8oaIeE6arvs_EjfJib51rao
2019-09-20 15:40:06.805664 7f19edb9b700 20 sending request to https://keystone.service.stage.ewcs.ch/v3/auth/tokens
2019-09-20 15:40:06.805803 7f19edb9b700 20 ssl verification is set to off
2019-09-20 15:40:07.235356 7f19edb9b700 20 sending request to https://keystone.service.stage.ewcs.ch/v3/auth/tokens
2019-09-20 15:40:07.235404 7f19edb9b700 20 ssl verification is set to off
2019-09-20 15:40:07.267091 7f19edb9b700 5 Failed keystone auth from https://keystone.service.stage.ewcs.ch/v3/auth/tokens with 404

BTW: our radosgw is configured to delegate user authentication to keystone.

In the keystone logs I see this:

2019-09-20 15:40:07.218 24 INFO keystone.token.provider [req-21b2f11c-9e67-4487-af05-420acfb65ace - - - - -] Token being processed: token.user_id [f7c7296949f84a4387c5172808a0965b], token.expires_at[2019-09-21T13:40:07.000000Z], token.audit_ids[[u'hFweMPCrSO2D00rNcRNECw']], token.methods[[u'password']], token.system[None], token.domain_id[None], token.project_id[4120792f50bc4cf2b4f97c4546462f06], token.trust_id[None], token.federated_groups[None], token.identity_provider_id[None], token.protocol_id[None], token.access_token_id[None], token.application_credential_id[None].
2019-09-20 15:40:07.257 21 INFO keystone.common.wsgi [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b 4120792f50bc4cf2b4f97c4546462f06 - default default] GET http://keystone.service.stage.ewcs.ch/v3/auth/tokens
2019-09-20 15:40:07.265 21 WARNING keystone.common.wsgi [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b 4120792f50bc4cf2b4f97c4546462f06 - default default] Could not find trust: 934ed82d2b14413899023da0bee6a953.: TrustNotFound: Could not find trust: 934ed82d2b14413899023da0bee6a953.

So what happens is the following:

1. When the user creates the cron trigger, Mistral creates a trust.
2. When the cron trigger executes the workflow, OpenStack creates a volume snapshot (an rbd image), copies it to Swift (rgw), then deletes the snapshot.
3. When the execution finishes, if the cron trigger has no remaining executions scheduled, Mistral removes the cron trigger and the trust.

The problem is a race condition: apparently the copy of the snapshot to Swift runs in the background, and Mistral removes the trust before the operation completes. That explains the error in keystone, and also why the cron trigger execution result is "success" even though the resulting backup is actually "failed".

To test this theory I set up the same cron trigger with more than one scheduled execution, and the backups were suddenly created correctly ;-).

So something needs to be done in the code to deal with this race condition. In the meantime, I will try to put a sleep action after the 'create backup' action.

Best Regards
Francois
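A minimal sketch of that interim workaround in the Mistral v2 DSL, assuming the std.sleep standard action with a seconds input (the task names and the 600-second value are illustrative, not from the actual workflow):

version: '2.0'

volume_backup_with_sleep:
  input:
    - volume_id
  tasks:
    create_backup:
      action: cinder.backups_create
      input:
        volume_id: <% $.volume_id %>
      on-success:
        - wait_before_finishing
    wait_before_finishing:
      # Crude workaround: keep the execution (and therefore the trust)
      # alive while the background copy to Swift runs.
      action: std.sleep
      input:
        seconds: 600

The obvious drawback is that the sleep duration is a guess: too short and the trust is still removed too early, too long and every execution is delayed.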
On 20/09, Francois Scheurer wrote:
Hi Gorka
We have a swift endpoint set up on opentstack, which points to our ceph radosgw backend
Radosgw provides s3 & swift.
So the swift logs are here actually the radosgw logs.
Hi,
OK, thanks for the clarification.
Then I assume you prefer the Swift backup driver over the Ceph one because you are using one of the OpenStack releases that had trouble with Incremental Backups on the Ceph backup driver.
Cheers, Gorka.
Cheers
Francois
On 9/20/19 2:46 PM, Gorka Eguileor wrote:
On 20/09, Francois Scheurer wrote:
Dear Gorka and Hervé
Thanks for your hints.
I have set the debug log level on radosgw.
I will retest now and post here the results.
Cheers
Francois Hi,
Sorry, I may have missed something in the conversation, weren't you using Swift?
I think you need to see the Swift logs as well, since that's the API service that complained about the authorization.
Cheers, Gorka.
--
EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich
tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer@everyware.ch web: http://www.everyware.ch
--
EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich
tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer@everyware.ch web: http://www.everyware.ch
-- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer@everyware.ch web: http://www.everyware.ch
Hi,

Congrats on figuring out the issue. :-)

Instead of a sleep, which may get you through this issue but fall into a different one and won't return the right status code, you should probably have a loop checking the status of the backup, and return a non-zero status code if it ends up in "error" state.

Cheers,
Gorka
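Gorka's polling approach maps onto the same retry mechanism as the CERN snippet quoted further down. A sketch, assuming Mistral exposes a generated cinder.backups_find action analogous to nova.servers_find (whether it does depends on the deployment's action catalogue, so treat the action and field names as assumptions):

version: '2.0'

volume_backup_and_wait:
  input:
    - volume_id
  tasks:
    create_backup:
      action: cinder.backups_create
      input:
        volume_id: <% $.volume_id %>
      publish:
        backup_id: <% task(create_backup).result.id %>
      on-success:
        - wait_for_backup
    wait_for_backup:
      # The find action fails while no backup matches id + status, and
      # retry re-runs it. If all retries are exhausted, the task and the
      # workflow execution end in ERROR instead of falsely reporting
      # success for a failed backup.
      action: cinder.backups_find
      input:
        id: <% $.backup_id %>
        status: 'available'
      retry:
        delay: 10
        count: 60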
Hi!

I would kindly ask you to add [mistral] to the subject of emails related to Mistral. I just saw this thread accidentally (since I can't read everything) and missed it in the first place.

On the issue itself: yes, the discovery you made makes perfect sense. I agree that a workflow should probably be responsible for tracking the status of an operation. We've discussed a more generic solution in the past for similar situations, but it seems to be virtually impossible to find one. If you have some ideas, please share them; we can discuss.

Thanks

Renat Akhmerov
@Nokia
Hi Gorka and Renat

Thank you for your suggestions, and sorry for forgetting the [mistral] subject prefix.
Renat: a workflow should probably be responsible for tracking the status of an operation.
Gorka: Instead of a sleep, which may get you through this issue but fall into a different one and won't return the right status code, you should probably have a loop checking the status of the backup and return a non-zero status code if it ends up in "error" state.
Gorka's idea sounds good. If you look at the snapshot workflow of Jose Castro, you will find a similar snippet:

https://techblog.web.cern.ch/techblog/post/scheduled-snapshots/
https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows/raw/master/wor... | sed -e 's%action_region: "cern"%action_region: "ch-zh1"%'
instance_snapshot.yaml
stop_instance:
  description: 'Stops the instance for consistency'
  action: nova.servers_stop
  input:
    server: <% $.instance %>
    action_region: <% $.action_region %>
  on-success:
    - wait_for_stop_instance
  on-error:
    - error_task
wait_for_stop_instance:
  description: 'Waits until the instance is shutoff to continue'
  action: nova.servers_find
  input:
    id: <% $.instance %>
    status: 'SHUTOFF'
    action_region: <% $.action_region %>
  retry:
    delay: 5
    count: 40
  on-success:
    - check_boot_source
  on-error:
    - error_task
Renat: we've discussed a more generic solution in the past for similar situations but it seems to be virtually impossible to find it.
OK, so it looks like this issue cannot be fixed with a small bugfix; it would require a feature extension.

I can imagine that quite a few API calls from the different OpenStack modules/services are asynchronous and would require Mistral to check their progress status in a different, ad hoc manner for each one. That would make such a new feature in Mistral quite expensive to implement.

It would be great if every async call returned a job_id in a standard form from each service, so that Mistral could track them all in a uniform way. This would also allow the OpenStack client to run in sync or async mode, according to the user's need. But such a design requirement really needs to be set on day one; it is likely too late to change all OpenStack services...

However, there is a minor enhancement that could be done: let the user specify whether a cron trigger should auto-delete itself after its last execution or not. Keeping expired cron triggers could be nice for:

- avoiding race conditions such as the one with swift/radosgw
- allowing the user to edit and reschedule an expired cron trigger

What do you think?

Best Regards
Francois
participants (4)

- Francois Scheurer
- Gorka Eguileor
- Herve Beraud
- Renat Akhmerov