Gorka, I found that the default size of the thread pool is 20 in the source code. However, I will try increasing this size. Thanks a lot.

On Mon, 11 Feb 2019 at 6:21 PM, Gorka Eguileor <geguileo@redhat.com> wrote:
On 11/02, Jae Sang Lee wrote:
Yes, I added your code to the Pike release manually.
Hi,
Did you enable the feature?
If I remember correctly, 50 is the default value of the native thread pool size, so it seems that the 50 available threads are busy deleting the volumes.
I would double-check that the feature is actually enabled (set enable_deferred_deletion = True in the backend section of the configuration, and check the logs for messages indicating that volumes are being deleted from the trash), and increase the thread pool size. You can change it with the environment variable EVENTLET_THREADPOOL_SIZE.
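For reference, a minimal sketch of both settings (the backend section name and the pool size value are only examples, adjust them to your deployment):

    # cinder.conf -- backend section name "ceph" is just an example
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    enable_deferred_deletion = True

    # environment of the cinder-volume service, set before it starts
    EVENTLET_THREADPOOL_SIZE=100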
Cheers, Gorka.
On Mon, 11 Feb 2019 at 4:39 PM, Arne Wiebalck <Arne.Wiebalck@cern.ch> wrote:
Hi Jae,
You backported the deferred deletion patch to Pike?
Cheers, Arne
On 11 Feb 2019, at 07:54, Jae Sang Lee <hyangii@gmail.com> wrote:
Hello,
I recently ran a volume deletion test with deferred deletion enabled on the Pike release.
We experienced a cinder-volume hang when deleting a large number of volumes that actually had data written to them (I created a 15GB file in every volume), and we thought deferred deletion would solve it.

However, while deleting 200 volumes, cinder-volume went down after about 50 volumes, just as before. In my opinion, the trash_move API does not seem to work properly when removing multiple volumes, just like the remove API.
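To illustrate what I mean, here is a rough sketch using the public rbd Python bindings (not the actual Cinder driver code; the ceph.conf path, pool, and image names are placeholders):

    import rados
    import rbd
    from eventlet import tpool

    # Placeholders: ceph.conf path, pool name, image names.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    try:
        # Immediate delete: remove() only returns once the image and all
        # of its data are gone, so a volume with 15GB written keeps the
        # call busy for a long time.
        tpool.execute(rbd.RBD().remove, ioctx, 'volume-0001')

        # Deferred delete: trash_move() just marks the image as trashed
        # and returns quickly; the data is reclaimed later (e.g. by a
        # periodic purge).
        tpool.execute(rbd.RBD().trash_move, ioctx, 'volume-0002', 0)
    finally:
        ioctx.close()
        cluster.shutdown()

As far as I understand, both calls are dispatched through eventlet's native thread pool, so each one still occupies a worker thread while it runs.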
If these results are due to a mistake in my test, please let me know the correct test method.
--
Arne Wiebalck
CERN IT