Hey Abhishek! On 31.01.24 10:35, Abhishek Kekane wrote:
On 31.01.24 09:13, Abhishek Kekane wrote:
By design, the copy-image import workflow uses a common uploading mechanism for all stores, so yes, it is a known limitation that it does not use multipart upload for the s3 backend. Feel free to propose an enhancement for this, or participate in the upcoming PTG (April 8-12, 2024) to discuss improvements to this behavior.
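For context, a minimal sketch of how such a copy-image import is typically triggered, assuming a recent openstacksdk; the cloud, image, and store names are made-up examples, not taken from this thread:

import openstack

# Assumed cloud and store names; adjust to your environment.
conn = openstack.connect(cloud="mycloud")
image = conn.image.find_image("my-existing-image")

# copy-image re-uses the data of an already active image: glance stages the
# existing bytes and then re-uploads them to each requested target store.
conn.image.import_image(image, method="copy-image", stores=["s3-backend"])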
Abhishek, I suppose the copy-image is done via this helper here, which is the one you were referring to? https://github.com/openstack/glance/blob/master/glance/async_/flows/_interna...
Hi Christian,
The helper you mention above is responsible for downloading the existing data to a common storage location known as the staging area (configured via os_glance_staging_store in glance-api.conf); from there the data is imported to the destination/target store. However, debugging further, I found that it internally calls the store.add method, which means it does in fact go through a driver-specific call in the end.
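To illustrate the flow described above, a rough, simplified sketch; all names here are invented for readability and do not match the actual glance code:

def copy_image_to_store(image_id, staging_store, target_store, context=None):
    # Step 1: download the existing image data into the staging area
    # (the store configured as os_glance_staging_store in glance-api.conf).
    staged_data, staged_size = staging_store.get(image_id, context=context)

    # Step 2: re-upload the staged data into the destination/target store.
    # This is where store.add() is called, i.e. the backend driver's own
    # upload code decides how the bytes are written (single-part, multipart,
    # chunked, ...).
    return target_store.add(image_id, staged_data, staged_size,
                            context=context)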
I suspect [1] is where it ends up using a single-part upload for s3 while copying the image, because we are not passing the size of the existing image to the import call.
I think this is a driver-specific improvement and will require additional effort to make it work.
[1] https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/...
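For illustration, a rough sketch of the kind of size-based decision an S3-style driver makes; the threshold value and all names are assumptions for readability, not the actual glance_store code (s3 is assumed to be a boto3 S3 client):

LARGE_OBJECT_THRESHOLD = 100 * 1024 * 1024  # assumed 100 MiB cut-over
PART_SIZE = 10 * 1024 * 1024                # assumed 10 MiB parts


def upload_image(s3, bucket, key, data, size):
    """Upload a file-like object, choosing single-part vs. multipart by size."""
    if not size or size < LARGE_OBJECT_THRESHOLD:
        # Unknown (0) or small size: fall back to a single PUT.  If the
        # copy-image flow passes no size, this is the branch that is taken.
        return s3.put_object(Bucket=bucket, Key=key, Body=data.read())

    # Known large size: use S3 multipart upload.
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts, part_number = [], 1
    while True:
        chunk = data.read(PART_SIZE)
        if not chunk:
            break
        resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_number,
                              UploadId=mpu["UploadId"], Body=chunk)
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        part_number += 1
    return s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": parts})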
I cannot (quickly) follow your debugging / the calls you mentioned. Could you please raise a bug with your findings so this can be "fixed"? It seems like this is not intended behavior?

Here the image size is actually provided when the image is fetched to the staging store: https://github.com/openstack/glance/blob/b6b9f043ffe664c643456912148648ecc0d... But what is the next step then to upload the "staged" image into the new target store?

In any case, I also tend to disagree that, if the missing image_size is the issue, providing it to the add call is an S3-driver-specific thing. Other object storages (GCS, Azure Blob, ...) might "like" to know the size as well, to adjust their upload strategy (see the sketch below for what I mean).

Regards
Christian
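A rough sketch of the point above: once the image sits in the staging area its size is knowable and could be passed along to any backend's add() call; all paths and names below are hypothetical, not taken from the glance code:

import os


def import_staged_image(image_id, staging_dir, target_store, context=None):
    staged_path = os.path.join(staging_dir, image_id)
    staged_size = os.stat(staged_path).st_size  # size is known at this point

    with open(staged_path, "rb") as staged_data:
        # Passing the size lets *any* backend (S3, GCS, Azure Blob, ...)
        # choose a suitable upload strategy, e.g. multipart for large data.
        return target_store.add(image_id, staged_data, staged_size,
                                context=context)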