<div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 25, 2019 at 11:19 AM <<a href="mailto:iain.macdonnell@oracle.com">iain.macdonnell@oracle.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
On 4/25/19 1:05 AM, Damien Ciabrini wrote:<br>
> <br>
> <br>
> On Tue, Apr 23, 2019 at 1:18 AM Alex Schultz <<a href="mailto:aschultz@redhat.com" target="_blank">aschultz@redhat.com</a> <br>
> <mailto:<a href="mailto:aschultz@redhat.com" target="_blank">aschultz@redhat.com</a>>> wrote:<br>
> <br>
> On Mon, Apr 22, 2019 at 12:25 PM Ben Nemec <<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a><br>
> <mailto:<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a>>> wrote:<br>
> ><br>
> ><br>
> ><br>
> > On 4/22/19 12:53 PM, Alex Schultz wrote:<br>
> > > On Mon, Apr 22, 2019 at 11:28 AM Ben Nemec<br>
> <<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a> <mailto:<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a>>> wrote:<br>
> > >><br>
> > >><br>
> > >><br>
> > >> On 4/20/19 1:38 AM, Michele Baldessari wrote:<br>
> > >>> On Fri, Apr 19, 2019 at 03:20:44PM -0700,<br>
> <a href="mailto:iain.macdonnell@oracle.com" target="_blank">iain.macdonnell@oracle.com</a> <mailto:<a href="mailto:iain.macdonnell@oracle.com" target="_blank">iain.macdonnell@oracle.com</a>> wrote:<br>
> > >>>><br>
> > >>>> Today I discovered that this problem appears to be caused by<br>
> eventlet<br>
> > >>>> monkey-patching. I've created a bug for it:<br>
> > >>>><br>
> > >>>> <a href="https://bugs.launchpad.net/nova/+bug/1825584" rel="noreferrer" target="_blank">https://bugs.launchpad.net/nova/+bug/1825584</a><br>
> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_nova_-2Bbug_1825584&d=DwMFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=fpephoqJ2hzc-jFPr6Rtupc2U02HBjBRd-_Lq66zQBk&s=Vb1Yw7ZkrglH0AMBZSHNfuilS5gBwh9yF2o2trkXYyM&e=" rel="noreferrer" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_nova_-2Bbug_1825584&d=DwMFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=fpephoqJ2hzc-jFPr6Rtupc2U02HBjBRd-_Lq66zQBk&s=Vb1Yw7ZkrglH0AMBZSHNfuilS5gBwh9yF2o2trkXYyM&e=</a>><br>
> > >>><br>
> > >>> Hi,<br>
> > >>><br>
> > >>> just for completeness we see this very same issue also with<br>
> > >>> mistral (actually it was the first service where we noticed<br>
> the missed<br>
> > >>> heartbeats). iirc Alex Schultz mentioned seeing it in ironic<br>
> as well,<br>
> > >>> although I have not personally observed it there yet.<br>
> > >><br>
> > >> Is Mistral also mixing eventlet monkeypatching and WSGI?<br>
> > >><br>
> > ><br>
> > > Looks like there is monkey patching; however, we noticed it with the<br>
> > > engine/executor, so it's likely not just WSGI. I think I also<br>
> saw it<br>
> > > in the ironic-conductor, though I'd have to try it out again. I'll<br>
> > > spin up an undercloud today and see if I can get a more<br>
> complete list<br>
> > > of affected services. It was pretty easy to reproduce.<br>
> ><br>
> > Okay, I asked because if there's no WSGI/Eventlet combination<br>
> then this<br>
> > may be different from the Nova issue that prompted this thread. It<br>
> > sounds like that was being caused by a bad interaction between<br>
> WSGI and<br>
> > some Eventlet timers. If there's no WSGI involved then I wouldn't<br>
> expect<br>
> > that to happen.<br>
> ><br>
> > I guess we'll see what further investigation turns up, but based<br>
> on the<br>
> > preliminary information there may be two bugs here.<br>
> ><br>
> <br>
> So I wasn't able to reproduce the ironic issues yet, but it's the<br>
> mistral executor and nova-api that exhibit the issue on the<br>
> undercloud.<br>
> <br>
> mistral/executor.log:2019-04-22 22:40:58.321 7 ERROR<br>
> oslo.messaging._drivers.impl_rabbit [-]<br>
> [b7b4bc40-767c-4de1-b77b-6a5822f6beed] AMQP server on<br>
> undercloud-0.ctlplane.localdomain:5672 is unreachable: [Errno 104]<br>
> Connection reset by peer. Trying again in 1 seconds.:<br>
> ConnectionResetError: [Errno 104] Connection reset by peer<br>
> <br>
> <br>
> nova/nova-api.log:2019-04-22 22:38:11.530 19 ERROR<br>
> oslo.messaging._drivers.impl_rabbit<br>
> [req-d7767aed-e32d-43db-96a8-c0509bfb1cfe<br>
> 9ac89090d2d24949b9a1e01b1afb14cc 7becac88cbae4b3b962ecccaf536effe -<br>
> default default] [c0f3fe7f-db89-42c6-95bd-f367a4fbf680] AMQP server on<br>
> undercloud-0.ctlplane.localdomain:5672 is unreachable: Server<br>
> unexpectedly closed connection. Trying again in 1 seconds.: OSError:<br>
> Server unexpectedly closed connection<br>
> <br>
> The errors being thrown are different, so perhaps these are two different<br>
> problems.<br>
> <br>
> <br>
> Correct, I think our original issue with erratic AMQP heartbeats and mod_wsgi<br>
> was due to a change in how we run healthchecks in Stein in TripleO-deployed<br>
> environments, so it seems different from what Iain originally experienced...<br>
> <br>
> For the record, up to Rocky we used to run healthcheck scripts<br>
> every 30 seconds, which guaranteed that eventlet would wake up and<br>
> send an AMQP heartbeat packet if a service had seen no AMQP traffic in the<br>
> last 15s. It also guaranteed that any incoming AMQP heartbeat packet from<br>
> rabbitmq would be processed in at most 30s.<br>
> <br>
> In Stein, our healthchecks are now triggered via systemd timers, and the<br>
> current timer interval is too long to guarantee that mod_wsgi will always<br>
> wake up in time to send/receive AMQP heartbeats to/from rabbitmq<br>
> when there's no traffic.<br>
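> <br>
> To put rough numbers on it (assuming the stock oslo.messaging defaults,<br>
> which a deployment can of course override, and if I'm reading impl_rabbit<br>
> right):<br>
> <br>
> heartbeat_timeout_threshold = 60.0  # rabbitmq heartbeat timeout we negotiate<br>
> heartbeat_rate = 2.0                # how many checks per timeout window<br>
> # impl_rabbit sleeps roughly this long between heartbeat checks:<br>
> heartbeat_wait_timeout = heartbeat_timeout_threshold / heartbeat_rate / 2.0  # 15.0s<br>
> <br>
> So a periodic wake-up that fires less often than every ~15s can no longer<br>
> mask a heartbeat thread that isn't getting scheduled on its own.<br>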
<br>
If I'm reading this right, it sounds like the periodic healthcheck is <br>
working around the underlying issue of the heartbeats not happening by <br>
themselves (due to eventlet monkey-patching somehow interfering with <br>
threading). The whole point of the heartbeats is to continually maintain <br>
the connection while there's no real traffic.<br>
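<br>
As a crude illustration of that interference - a standalone sketch, not code <br>
from any of the affected services - a monkey-patched "thread" only runs when <br>
the rest of the process yields to the eventlet hub, so it starves whenever the <br>
process blocks somewhere eventlet can't see:<br>
<br>
<pre>
# Standalone sketch: under eventlet.monkey_patch(), threading.Thread is
# backed by greenlets, so the "thread" below only runs when the main
# thread yields to the eventlet hub.
import eventlet
eventlet.monkey_patch()

import threading
import time


def ticker():
    while True:
        print("tick at %.1f" % time.monotonic())
        time.sleep(1)   # green sleep: the hub has to run for this to wake up


threading.Thread(target=ticker, daemon=True).start()

time.sleep(5)           # main yields to the hub: ticks appear every second

deadline = time.monotonic() + 10
while time.monotonic() < deadline:
    pass                # main never yields: the ticker stops ticking

time.sleep(5)           # the hub runs again: ticks resume after a ~10s gap
</pre>
Heartbeats in a wsgi process would starve the same way if the wsgi server <br>
blocks outside of eventlet's control.<br>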
<br>
~iain<br>
<br>
<br>
</blockquote></div><div><br></div><div>I would agree with Iain here - it sounds as if something in mod_wsgi (or elsewhere) is blocking in the OS, outside of the purview of eventlet.</div><div><br></div><div>FWIW, here's the oslo.messaging thread responsible for sending heartbeats from the RPC client:</div><div><br></div><div><a href="https://opendev.org/openstack/oslo.messaging/src/branch/stable/stein/oslo_messaging/_drivers/impl_rabbit.py#L897">https://opendev.org/openstack/oslo.messaging/src/branch/stable/stein/oslo_messaging/_drivers/impl_rabbit.py#L897</a></div><div><br></div><div>Can you verify that the event.wait() call at the bottom of the loop is not waking up as per the passed-in _heartbeat_wait_timeout?<br></div><div>thanks<br></div><br>-- <br><div dir="ltr" class="gmail_signature">Ken Giusti (<a href="mailto:kgiusti@gmail.com" target="_blank">kgiusti@gmail.com</a>)</div></div>
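<div><br></div><div>P.S. In case it helps, a rough sketch of the kind of check I have in mind - a hypothetical timed_wait() wrapper around that event.wait(), not anything that exists in the tree:</div><div><br></div><pre>
# Hypothetical sketch: wrap the heartbeat loop's
# event.wait(self._heartbeat_wait_timeout) call with something like this
# and watch the service log for wakeups that arrive far too late.
import logging
import time

LOG = logging.getLogger(__name__)


def timed_wait(event, timeout):
    """Wait on the event and log if we wake up much later than asked."""
    start = time.monotonic()
    event.wait(timeout)
    elapsed = time.monotonic() - start
    if elapsed > timeout * 2:
        LOG.warning("heartbeat wait overslept: %.1fs instead of ~%.1fs",
                    elapsed, timeout)
    return elapsed
</pre>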