On Tue May 10 2022 10:44:32 GMT-0700 (Pacific Daylight Time), Ghanshyam Mann <gmann@ghanshyammann.com> wrote:
---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane <stephenfin@redhat.com> wrote ----
On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote:
On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote:
Hi all,
It looks like the latest oslo.log update is failing to pass tempest. If anyone is around to look I'd appreciate it. https://review.opendev.org/840630
Repeating from the review, this seems to be caused by [1]. I'm not entirely sure why Tempest specifically is unhappy with this. I'll try to find time to look at this. Hopefully Takashi-san will beat me to it though, as the author of that change ;)
Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest doesn't use keystoneauth1 like everyone else does, so it didn't get the global request ID handling for free. We need to pass some additional context when logging to keep this new version of oslo.log happy.
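To illustrate the kind of change needed (only a rough sketch with made-up names, not the actual Tempest patch in [1]): once the configured logging_context_format_string references %(global_request_id)s, any code that logs with its own context has to make sure that key ends up on the log record, e.g. by passing it via 'extra':

    import logging

    LOG = logging.getLogger(__name__)

    def log_request(method, url, request_id, global_request_id=None):
        # Supply both IDs on the record so a format string such as
        # "... %(request_id)s %(global_request_id)s ..." can resolve them
        # instead of raising a KeyError when the new key is referenced.
        LOG.info("Request (%s): %s %s", request_id, method, url,
                 extra={'request_id': request_id,
                        'global_request_id': global_request_id})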
If you've thoughts on this, please let me know on the review. The other option is to revert the change on oslo.log and cut a new release, blacklisting this one.
I am ok with the Tempest change, but IMO the oslo.log change is backward-incompatible without adding much value. Maybe it should check whether a global request-id is present and otherwise skip it?
+1, it feels like this is a bug in oslo.log. Prior to the global_request_id change, it did the following test before using the logging_context_format_string [2]:

    if record.__dict__.get('request_id'):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string

which protected against a KeyError in the case of request_id. It seems like something similar needs to be added for global_request_id.

-melanie

[2] https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0...
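For what it's worth, a minimal version of that guard extended to cover global_request_id could look something like this (just a sketch of the idea, not the actual oslo.log fix):

    if record.__dict__.get('request_id'):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string
    # Analogous protection for the new key: default it so records that
    # never set global_request_id don't fail with a KeyError when the
    # context format string references %(global_request_id)s.
    record.__dict__.setdefault('global_request_id', '-')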
[1] https://review.opendev.org/c/openstack/tempest/+/841310
Stephen
[1] https://review.opendev.org/c/openstack/oslo.log/+/838190