<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;color:#333333">Oh, really sorry, I was looking at your answer from my mobile mail app and it didn't show, sorry ^^<br><br>Many thanks for your help!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jun 11, 2019 at 2:13 PM Carlos Goncalves <<a href="mailto:cgoncalves@redhat.com">cgoncalves@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">You can find the commit hash from the link I provided. The patch is<br>
available from Queens so it is also available in Stein.<br>
<br>
On Tue, Jun 11, 2019 at 2:10 PM Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>> wrote:<br>
><br>
> Ok nice, do you have the commit hash? I would like to look at it and validate that it has been committed to Stein too, so I can bump my service to Stein using Kolla.<br>
><br>
> Thanks!<br>
><br>
> On Tue, Jun 11, 2019 at 12:59 PM Carlos Goncalves <<a href="mailto:cgoncalves@redhat.com" target="_blank">cgoncalves@redhat.com</a>> wrote:<br>
>><br>
>> On Mon, Jun 10, 2019 at 3:14 PM Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>> wrote:<br>
>> ><br>
>> > Hi guys,<br>
>> ><br>
>> > Just a quick question regarding this bug: someone told me that it has been patched within stable/rocky, but were you talking about the openstack/octavia repository or the openstack/kolla repository?<br>
>><br>
>> Octavia.<br>
>><br>
>> <a href="https://review.opendev.org/#/q/Ief97ddda8261b5bbc54c6824f90ae9c7a2d81701" rel="noreferrer" target="_blank">https://review.opendev.org/#/q/Ief97ddda8261b5bbc54c6824f90ae9c7a2d81701</a><br>
>><br>
>> ><br>
>> > Many Thanks!<br>
>> ><br>
>> > On Tue, Jun 4, 2019 at 3:19 PM Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>> wrote:<br>
>> >><br>
>> >> Oh, that's perfect then; I'll just update my image and my platform, as we're using kolla-ansible and that's super easy.<br>
>> >><br>
>> >> You guys rock!! (Pun intended ;-)).<br>
>> >><br>
>> >> Many, many thanks to all of you; that really reassures me a lot regarding Octavia's solidity and Kolla's flexibility actually ^^.<br>
>> >><br>
>> >> On Tue, Jun 4, 2019 at 3:17 PM Carlos Goncalves <<a href="mailto:cgoncalves@redhat.com" target="_blank">cgoncalves@redhat.com</a>> wrote:<br>
>> >>><br>
>> >>> On Tue, Jun 4, 2019 at 3:06 PM Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>> wrote:<br>
>> >>> ><br>
>> >>> > Hi Lingxian Kong,<br>
>> >>> ><br>
>> >>> > That’s actually very interesting as I’ve come to the same conclusion this morning during my investigation and was starting to think about a fix, which it seems you already made!<br>
>> >>> ><br>
>> >>> > Is there a reason why it wasn’t backported to Rocky?<br>
>> >>><br>
>> >>> The patch was merged in master branch during Rocky development cycle,<br>
>> >>> hence included in stable/rocky as well.<br>
>> >>><br>
>> >>> ><br>
>> >>> > Very helpful, many, many thanks to you; you clearly spared me hours of work! I’ll review your patch and test it in our lab.<br>
>> >>> ><br>
>> >>> > On Tue, Jun 4, 2019 at 11:06 AM Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>> wrote:<br>
>> >>> >><br>
>> >>> >> Hi Felix,<br>
>> >>> >><br>
>> >>> >> “Glad” you had the same issue before, and yes, of course I looked at the HM logs, which is where I actually found out that this event was triggered by Octavia (besides the DB data that validated that). Here is my log trace related to this event; it doesn't really show a major issue IMHO.<br>
>> >>> >><br>
>> >>> >> Here is the stacktrace that our octavia service archived for both of our controller servers, with the initial load balancer creation trace (Worker.log) and both controllers' triggered tasks (Health-Manager.log).<br>
>> >>> >><br>
>> >>> >> <a href="http://paste.openstack.org/show/7z5aZYu12Ttoae3AOhwF/" rel="noreferrer" target="_blank">http://paste.openstack.org/show/7z5aZYu12Ttoae3AOhwF/</a><br>
>> >>> >><br>
>> >>> >> I may well have missed something in it, but I don't see anything strange from my point of view.<br>
>> >>> >> Feel free to tell me if you spot something weird.<br>
>> >>> >><br>
>> >>> >><br>
>> >>> >> On Tue, Jun 4, 2019 at 10:38 AM Felix Hüttner <felix.huettner@mail.schwarz> wrote:<br>
>> >>> >>><br>
>> >>> >>> Hi Gael,<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> we had a similar issue in the past.<br>
>> >>> >>><br>
>> >>> >>> You could check the octavia health manager log (it should be on the same node where the worker is running).<br>
>> >>> >>><br>
>> >>> >>> This component monitors the status of the Amphorae and restarts them if they don’t trigger a callback after a specific time. This might also happen if there is some connection issue between the two components.<br>
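For anyone else following this thread, the failover decision described above boils down to a staleness check on heartbeat timestamps. Here is a minimal Python sketch of the idea, with illustrative names and an assumed 60-second timeout; this is not Octavia's actual code, just the logic in miniature:

```python
import time

# Illustrative value: seconds without a heartbeat before an amphora is
# considered failed. In Octavia this is configurable, not hard-coded.
HEARTBEAT_TIMEOUT = 60


def stale_amphorae(last_heartbeats, now=None):
    """Return the ids of amphorae whose last heartbeat is older than the timeout.

    last_heartbeats: dict mapping amphora id -> unix timestamp of the last
    heartbeat callback the health manager received from that amphora.
    """
    now = time.time() if now is None else now
    return [amp_id for amp_id, seen in last_heartbeats.items()
            if now - seen > HEARTBEAT_TIMEOUT]


# Example: the master reported 5 seconds ago, the backup 120 seconds ago,
# so only the backup shows up as stale.
heartbeats = {"amp-master": 995.0, "amp-backup": 880.0}
print(stale_amphorae(heartbeats, now=1000.0))
```

In the real service the timestamps come from UDP heartbeat packets the amphorae send to the health manager, and a stale amphora is failed over (replaced) rather than merely reported, which is why a connectivity problem between amphorae and controllers can look like spontaneous instance deletion.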
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> But normally it should at least restart the LB with new Amphorae…<br>
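For reference, the knobs that govern this heartbeat/failover behaviour live in the `[health_manager]` section of octavia.conf. The values below reflect my understanding of the defaults, so double-check them against your deployment:

```ini
[health_manager]
# How often (seconds) each amphora sends a heartbeat to the health manager.
heartbeat_interval = 10
# How long (seconds) the health manager waits without hearing a heartbeat
# before it considers the amphora failed and triggers a failover.
heartbeat_timeout = 60
# How often (seconds) the health manager scans for stale amphorae.
health_check_interval = 3
```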
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Hope that helps<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Felix<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> From: Gaël THEROND <<a href="mailto:gael.therond@gmail.com" target="_blank">gael.therond@gmail.com</a>><br>
>> >>> >>> Sent: Tuesday, June 4, 2019 9:44 AM<br>
>> >>> >>> To: Openstack <<a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>><br>
>> >>> >>> Subject: [OCTAVIA][ROCKY] - MASTER & BACKUP instances unexpectedly deleted by octavia<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Hi guys,<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> I’ve a weird situation here.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> I smoothly operate a large-scale multi-region Octavia service using the default amphora driver, which implies the use of nova instances as load balancers.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Everything is running really well and our customers (K8s and traditional users) are really happy with the solution so far.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> However, yesterday one of those customers, using the load balancer in front of their ElasticSearch cluster, poked me because this load balancer suddenly went from ONLINE/OK to ONLINE/ERROR, meaning the amphorae were no longer available, yet the anchor/member/pool and listener settings still existed.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> So I investigated and found out that the load balancer's amphorae had been destroyed by the octavia user.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> The weird part is that both the master and the backup instance were destroyed at the same moment by the octavia service user.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Are there specific circumstances where the octavia service could decide to delete the instances but not the anchor/members/pool?<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> It’s worrying me a bit, as there is no clear way to trace why Octavia took this action.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> I dug into the nova and Octavia DBs in order to correlate the action, but apart from validating my investigation it doesn’t really help, as there is no clue as to why the octavia service triggered the deletion.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> If someone has any clue or tips to give me, I’ll be more than happy to discuss this situation.<br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>><br>
>> >>> >>> Cheers guys!<br>
>> >>> >>><br>
</blockquote></div>