[openstack-dev] [tripleo] rh1 outage today
Juan Antonio Osorio
jaosorior at gmail.com
Sat Oct 28 07:07:16 UTC 2017
Thanks for the postmortem; it's always a good read to learn stuff :)
On 28 Oct 2017 00:11, "Ben Nemec" <openstack at nemebean.com> wrote:
Hi all,
As you may or may not have noticed, all OVB jobs on rh1 started failing
sometime last night. After some investigation today I found a few issues.
First, our nova db archiving wasn't working. This was due to the
auto-increment counter issue described by melwitt in
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122903.html
Deleting the problematic rows from the shadow table got us past that.
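For the record, the cleanup was roughly of this shape (an illustrative
sketch only; shadow_instances stands in for whichever shadow table is
affected, and the real rows to delete come from the duplicate-key errors
that nova-manage reports):

    # find the shadow rows whose ids collide with rows being archived
    mysql nova -e "SELECT id FROM shadow_instances
                   WHERE id IN (SELECT id FROM instances);"
    # drop the conflicting shadow rows so archiving can proceed
    mysql nova -e "DELETE FROM shadow_instances
                   WHERE id IN (SELECT id FROM instances);"
    # then re-run archiving to confirm it gets past the error
    nova-manage db archive_deleted_rows --max_rows 1000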
On another db-related note, we seem to have turned ceilometer back on at
some point in rh1. I think that was intentional to avoid notification
queues backing up, but it led to a different problem. We had approximately
400 GB of mongodb data from ceilometer that we don't actually care about.
I cleaned that up and set a TTL in ceilometer so hopefully this won't
happen again.
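For reference, the TTL lives in ceilometer.conf; a sketch of the change
(the option name varies by release, older ones call it time_to_live, and
the service name assumes RDO packaging):

    # expire samples after a week instead of keeping them forever
    # (with the mongodb backend the TTL is enforced by mongod itself)
    crudini --set /etc/ceilometer/ceilometer.conf database metering_time_to_live 604800
    systemctl restart openstack-ceilometer-collector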
Is there an alarm or something we could set to get notified about this kind
of stuff? Or better yet, something we could automate to avoid this? What's
using mongodb nowadays?
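Even something as blunt as a cron'd disk-usage check could do as a
stopgap; a sketch (the data dir path, the threshold and the alert address
are all made up):

    # hypothetical cron job: complain when the mongodb data dir passes 50 GB
    usage=$(du -sb /var/lib/mongodb | cut -f1)
    if [ "$usage" -gt $((50 * 1024 * 1024 * 1024)) ]; then
        echo "mongodb data dir on rh1 is at ${usage} bytes" \
            | mail -s "rh1 mongodb disk usage warning" ops@example.com
    fi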
Unfortunately, neither of these things completely resolved the extreme
slowness in the cloud that was causing every testenv to fail. After trying
a number of things that made no difference, I found that the culprit seems
to have been rabbitmq. There was nothing obviously wrong with it according
to the web interface: the queues were all short and messages seemed to be
getting delivered. However, when I ran rabbitmqctl status at the CLI it
reported that the node was down. Since something was clearly wrong, I went
ahead and restarted it. After that everything seems to be back to normal.
Same question as above: could we set an alarm or automate the node
recovery?
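Even a dumb watchdog would narrow the window until real monitoring is in
place; a sketch (node_health_check needs rabbitmq 3.6.4 or newer, and the
service name assumes RDO packaging):

    # hypothetical watchdog, run from cron every few minutes: restart the
    # broker when the CLI-level health check fails, and note it in syslog
    if ! rabbitmqctl -q node_health_check >/dev/null 2>&1; then
        logger -t rabbit-watchdog "health check failed, restarting rabbitmq-server"
        systemctl restart rabbitmq-server
    fi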
I'm not sure exactly what the cause of all this was. We did get kind of
inundated with jobs yesterday after a zuul restart, which I think is what
pushed us over the edge, but that has happened before without bringing the
cloud down. It was probably a combination of previously unnoticed issues
stacking up over time and the large number of testenvs requested all at
once.
In any case, testenvs are being created successfully again and the jobs in
the queue look good so far. If you notice any problems, though, please let
me know. I'm hoping this will help with the job timeouts, but that remains
to be seen.
-Ben