<div dir="ltr"><div dir="ltr"><div dir="ltr">As mentioned in Gorka, sql connection is using pymysql. <br></div><div dir="ltr"><br></div><div dir="ltr">And I increased max_pool_size to 50(I think gorka mistaken max_pool_size to max_retries.), <br></div><div dir="ltr">but it was the same that the cinder-volume stucked from the time that 4~50 volumes were deleted.<br><br>There seems to be a problem with the cinder rbd volume driver, so I tested to delete 200 volumes continously <br></div><div dir="ltr">by used only RBDClient and RBDProxy. There was no problem at this time.<br><br>I think there is some code in the cinder-volume that causes a hang but it's too hard to find now.</div><div dir="ltr"><br></div><div>Thanks.<br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">2019년 2월 12일 (화) 오후 6:24, Gorka Eguileor <<a href="mailto:geguileo@redhat.com">geguileo@redhat.com</a>>님이 작성:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 12/02, Arne Wiebalck wrote:<br>
> Jae,<br>
><br>
> One other setting that caused trouble when bulk deleting cinder volumes was the<br>
> DB connection string: we did not configure a driver and hence used the Python<br>
> mysql wrapper instead … essentially changing<br>
><br>
> connection = mysql://cinder:<pw>@<host>:<port>/cinder<br>
><br>
> to<br>
><br>
> connection = mysql+pymysql://cinder:<pw>@<host>:<port>/cinder<br>
><br>
> solved the parallel deletion issue for us.<br>
><br>
> All details in the last paragraph of [1].<br>
><br>
> HTH!<br>
> Arne<br>
><br>
> [1] <a href="https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/" rel="noreferrer" target="_blank">https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/</a><br>
><br>
<br>
Good point, using a C mysql connection library will induce thread<br>
starvation. This was thoroughly discussed, and the default changed,<br>
like 2 years ago... So I assumed we all changed that.<br>
<br>
Something else that could be problematic when receiving many concurrent<br>
requests on any Cinder service is the number of concurrent DB<br>
connections, although we also changed this a while back to 50. This is<br>
set as sql_max_retries or max_retries (depending on the version) in the<br>
"[database]" section.<br>
<br>
Cheers,<br>
Gorka.<br>
<br>
<br>
><br>
><br>
> > On 12 Feb 2019, at 01:07, Jae Sang Lee <<a href="mailto:hyangii@gmail.com" target="_blank">hyangii@gmail.com</a>> wrote:<br>
> ><br>
> > Hello,<br>
> ><br>
> > I tested today with EVENTLET_THREADPOOL_SIZE increased to 100. I hoped for good results,<br>
> > but this time cinder-volume stopped responding after 41 volumes had been removed. This environment variable<br>
> > did not fix the cinder-volume hang.<br>
> ><br>
> > Restarting the stopped cinder-volume deletes all volumes that are in the deleting state while it runs the clean_up function.<br>
> > Only one volume remained in the deleting state; I forced its state back to available and then deleted it, so in the end all volumes were deleted.<br>
> ><br>
> > This result was the same three times in a row: after removing dozens of volumes, cinder-volume went down,<br>
> > and after the service was restarted, 199 volumes were deleted and one volume had to be erased manually.<br>
> ><br>
> > If you have a different approach to solving this problem, please let me know.<br>
> ><br>
> > Thanks.<br>
> ><br>
> > On Mon, Feb 11, 2019 at 9:40 PM, Arne Wiebalck <<a href="mailto:Arne.Wiebalck@cern.ch" target="_blank">Arne.Wiebalck@cern.ch</a>> wrote:<br>
> > Jae,<br>
> ><br>
> >> On 11 Feb 2019, at 11:39, Jae Sang Lee <<a href="mailto:hyangii@gmail.com" target="_blank">hyangii@gmail.com</a>> wrote:<br>
> >><br>
> >> Arne,<br>
> >><br>
> >> I saw messages like "moving volume to trash" in the cinder-volume logs, and the periodic task also reports<br>
> >> messages like "Deleted <vol-uuid> from trash for backend '<backends-name>'".<br>
> >><br>
> >> The patch worked well when clearing a small number of volumes. The problem happens only when I delete a large<br>
> >> number of volumes.<br>
> ><br>
> > Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete”<br>
> > many more volumes before getting stuck.<br>
> ><br>
> > The periodic task, however, will go through the volumes one by one, so if you delete many at the same time,<br>
> > volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect<br>
> > c-vol, though.<br>
> ><br>
> >> I will try to adjust the size of the thread pool via the environment variable, as you advised.<br>
> >><br>
> >> Do you know why the cinder-volume hang does not occur when creating volumes, but only when deleting them?<br>
> ><br>
> > Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very<br>
> > long for large volumes). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually<br>
> > time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all<br>
> > threads is simply lower (Gorka may correct me here :-).<br>
> ><br>
> > Cheers,<br>
> > Arne<br>
> ><br>
> >><br>
> >><br>
> >> Thanks.<br>
> >><br>
> >><br>
> >> On Mon, Feb 11, 2019 at 6:14 PM, Arne Wiebalck <<a href="mailto:Arne.Wiebalck@cern.ch" target="_blank">Arne.Wiebalck@cern.ch</a>> wrote:<br>
> >> Jae,<br>
> >><br>
> >> To make sure deferred deletion is properly working: when you delete individual large volumes<br>
> >> with data in them, do you see that<br>
> >> - the volume is fully “deleted” within a few seconds, i.e. not staying in ‘deleting’ for a long time?<br>
> >> - that the volume shows up in trash (with “rbd trash ls”)?<br>
> >> - the periodic task reports it is deleting volumes from the trash?<br>
> >><br>
> >> Another option to look at is “backend_native_threads_pool_size": this will increase the number<br>
> >> of threads to work on deleting volumes. It is independent from deferred deletion, but can also<br>
> >> help with situations where Cinder has more work to do than it can cope with at the moment.<br>
> >><br>
> >> Cheers,<br>
> >> Arne<br>
> >><br>
> >><br>
> >><br>
> >>> On 11 Feb 2019, at 09:47, Jae Sang Lee <<a href="mailto:hyangii@gmail.com" target="_blank">hyangii@gmail.com</a>> wrote:<br>
> >>><br>
> >>> Yes, I manually added your code to the Pike release.<br>
> >>><br>
> >>><br>
> >>><br>
> >>> On Mon, Feb 11, 2019 at 4:39 PM, Arne Wiebalck <<a href="mailto:Arne.Wiebalck@cern.ch" target="_blank">Arne.Wiebalck@cern.ch</a>> wrote:<br>
> >>> Hi Jae,<br>
> >>><br>
> >>> You back ported the deferred deletion patch to Pike?<br>
> >>><br>
> >>> Cheers,<br>
> >>> Arne<br>
> >>><br>
> >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee <<a href="mailto:hyangii@gmail.com" target="_blank">hyangii@gmail.com</a>> wrote:<br>
> >>> ><br>
> >>> > Hello,<br>
> >>> ><br>
> >>> > I recently ran a volume deletion test with deferred deletion enabled on the Pike release.<br>
> >>> ><br>
> >>> > We experienced a cinder-volume hang when we were deleting a large number of volumes to which data had actually been written (I created a 15GB file in every volume), and we thought deferred deletion would solve it.<br>
> >>> ><br>
> >>> > However, while deleting 200 volumes, cinder-volume went down after 50 volumes, just as before. In my opinion, the trash_move API does not seem to work properly when removing multiple volumes, just like the remove API.<br>
> >>> ><br>
> >>> > If these results come from a mistake in my testing, please let me know the correct test method.<br>
> >>> ><br>
> >>><br>
> >>> --<br>
> >>> Arne Wiebalck<br>
> >>> CERN IT<br>
> >>><br>
> >><br>
> >> --<br>
> >> Arne Wiebalck<br>
> >> CERN IT<br>
> >><br>
> ><br>
> > --<br>
> > Arne Wiebalck<br>
> > CERN IT<br>
> ><br>
><br>
</blockquote></div>
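<div dir="ltr"><br></div><div dir="ltr">P.S. For reference, here is a minimal sketch of the cinder.conf settings discussed in this thread. The backend section name "rbd-1" and the backend_native_threads_pool_size value are only examples, not the exact values of our deployment:<br></div><div dir="ltr"><pre>[database]
# pure-Python MySQL driver, as Arne suggested (plays well with eventlet)
connection = mysql+pymysql://cinder:<pw>@<host>:<port>/cinder
# number of concurrent DB connections; this is the option I raised to 50
max_pool_size = 50

[rbd-1]
# native thread pool of the RBD backend, the option Arne mentioned
# (100 is only an example value)
backend_native_threads_pool_size = 100</pre></div>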
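<div dir="ltr"><br></div><div dir="ltr">And this is roughly how the standalone RBD deletion test worked. It is a simplified sketch using the plain rados/rbd Python bindings together with eventlet's tpool.Proxy (similar to what the Cinder RBD driver's proxy does), not the actual test code; the pool name, user name and volume-name prefix are only examples:<br></div><div dir="ltr"><pre>
# Simplified sketch of the standalone deletion test (not the actual test code).
# Pool, user and volume-name prefix are examples only.
import rados
import rbd
from eventlet import tpool

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

# Run librbd calls in native threads so they do not block the eventlet hub,
# which is what the driver-side proxy also does.
rbd_images = tpool.Proxy(rbd.RBD())

try:
    for name in rbd_images.list(ioctx):
        if name.startswith('volume-test-'):
            rbd_images.remove(ioctx, name)
finally:
    ioctx.close()
    cluster.shutdown()
</pre></div><div dir="ltr">Deleting all 200 images this way finished without any hang, which is why I suspect the problem is somewhere in the cinder-volume service rather than in the RBD layer itself.<br></div>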