[Openstack-operators] rsyslog update caused services to break?
kispear at gmail.com
Tue Oct 14 00:44:39 UTC 2014
We had this issue in Havana, and applying the patch from that gist
fixed it for us. We just noticed last week that it has returned in our
upgraded Trusty/Icehouse cloud, since we had neglected to repatch the
new eventlet package; patching *that* fixed it again for us.
I reported an issue against eventlet for this a while ago, but it
looks like it has since been closed.
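The reproduction script mentioned further down the thread didn't make it
into the archive, so as a rough stand-in, here is a minimal
standard-library sketch (names are mine, not from the thread) of the
underlying failure mode: a daemon logging to a syslog unix socket starts
hitting socket errors the moment the listener restarts and the old socket
path disappears, which is roughly where the eventlet/oslo interaction in
that bug takes over.

```python
import logging
import logging.handlers
import os
import socket
import tempfile

# Stand-in for rsyslog: a unix datagram listener on a /dev/log-style socket.
sock_path = os.path.join(tempfile.mkdtemp(), "log.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(sock_path)

# A service logging to that socket, as the OpenStack daemons do via syslog.
handler = logging.handlers.SysLogHandler(address=sock_path)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.propagate = False
logger.setLevel(logging.INFO)

logger.info("before restart")
msg, _ = server.recvfrom(1024)  # the record reaches the listener

# Simulate the rsyslog restart: the old socket goes away.
server.close()
os.unlink(sock_path)

# Every emit now fails inside the handler with a socket error; collect the
# failures instead of letting logging print them to stderr.
errors = []
handler.handleError = lambda record: errors.append(record)
logger.info("after restart")
print(len(errors))
```

(SysLogHandler does retry the unix-socket connect once per emit, but with
the socket path gone that reconnect fails as well, so every log call after
the restart lands in handleError.)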
On 14 October 2014 02:17, Joe Topjian <joe at topjian.net> wrote:
> That's really interesting - thanks for the link.
> We've been able to narrow down why we didn't see this with Cinder or Swift: in
> short, Swift isn't using Oslo (AFAIK) and we had some previous logging
> issues with Cinder in Havana so we altered the logging setup a bit.
> On Mon, Oct 13, 2014 at 3:05 AM, Francois Deppierraz
> <francois at ctrlaltdel.ch> wrote:
>> Hi Joe,
>> Yes, same problem here running Ubuntu 14.04.
>> The symptom is nova-api, nova-conductor and glance-api eating all available
>> CPU while no longer responding to API requests.
>> It is possible to reproduce it with the following script.
>> On 11. 10. 14 00:40, Joe Topjian wrote:
>> > Hello,
>> > This morning we noticed various nova, glance, and keystone services (but
>> > not cinder or swift) not working in two different clouds; they required a
>> > restart.
>> > We thought it was a network issue since one of the only commonalities
>> > between the two clouds was that they are on the same network.
>> > Then later in the day I logged into a test cloud on a totally separate
>> > network and had the same problem.
>> > Looking at all three environments, the commonality is now that all of them
>> > have Ubuntu security updates automatically applied in the morning, and this
>> > morning rsyslog was patched and restarted.
>> > I found this oslo bug that kind of sounds like the issue we saw:
>> > https://bugs.launchpad.net/oslo.log/+bug/1076466
>> > Doing further investigation, log files do indeed show a lack of entries
>> > for various services/daemons until they were restarted.
>> > Has anyone else run into this? Maybe even this morning, too? :)
>> > Thanks,
>> > Joe
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators