[requirements][oslo.log] failure to update to oslo.log===4.8.0
Hi all,

It looks like the latest oslo.log update is failing to pass tempest. If anyone is around to look I'd appreciate it.

https://review.opendev.org/840630

-- Matthew Thode
On Tue, 2022-05-10 at 09:56 -0500, Matthew Thode wrote:
Hi all,
It looks like the latest oslo.log update is failing to pass tempest. If anyone is around to look I'd appreciate it. https://review.opendev.org/840630
Repeating from the review, this seems to be caused by [1]. I'm not entirely sure why Tempest specifically is unhappy with this. I'll try to find time to look at this. Hopefully Takashi-san will beat me to it though, as the author of that change ;)

Stephen

[1] https://review.opendev.org/c/openstack/oslo.log/+/838190
On Tue, 2022-05-10 at 18:03 +0100, Stephen Finucane wrote:
Repeating from the review, this seems to be caused by [1]. I'm not entirely sure why Tempest specifically is unhappy with this. I'll try to find time to look at this. Hopefully Takashi-san will beat me to it though, as the author of that change ;)
Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest doesn't use keystoneauth1 like everyone else so they didn't get the global request ID stuff for free. We need to pass some additional context when logging to keep this new version of oslo.log happy. If you've thoughts on this, please let me know on the review. The other option is to revert the change on oslo.log and cut a new release, blacklisting this one.

Stephen

[1] https://review.opendev.org/c/openstack/tempest/+/841310
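For illustration, a minimal, hypothetical sketch of the kind of caller-side change described here (the function name is made up and the actual tempest patch [1] may well differ):

    import logging

    LOG = logging.getLogger(__name__)

    # Hypothetical: a caller that does not go through
    # oslo.context/keystoneauth1 can still satisfy a context format
    # string containing %(global_request_id)s by supplying the key
    # itself via ``extra``, which copies it onto the LogRecord.
    def log_request(method, url, global_request_id=None):
        LOG.info("Request: %s %s", method, url,
                 extra={"global_request_id": global_request_id})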
---- On Tue, 10 May 2022 12:24:08 -0500 Stephen Finucane <stephenfin@redhat.com> wrote ----
Actually, maybe not that hard to solve. I think [1] will fix the issue. Tempest doesn't use keystoneauth1 like everyone else so they didn't get the global request ID stuff for free. We need to pass some additional context when logging to keep this new version of oslo.log happy.
If you've thoughts on this, please let me know on the review. The other option is to revert the change on oslo.log and cut a new release, blacklisting this one.
I am OK with the tempest change, but IMO the oslo.log change is backward-incompatible without adding much value. Maybe it should check whether a global request-id is present and otherwise skip it?

-gmann
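For illustration of the backward-incompatibility concern, the class of failure being discussed can be reproduced with plain Python logging (this is not the exact oslo.log code path):

    import logging

    handler = logging.StreamHandler()
    # A format string that references a field the LogRecord does not
    # carry fails at formatting time.
    handler.setFormatter(
        logging.Formatter("%(global_request_id)s %(message)s"))
    logger = logging.getLogger("demo")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    # The record is not emitted; logging prints a "--- Logging error ---"
    # traceback complaining about the missing 'global_request_id' field.
    logger.info("hello")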
On Tue May 10 2022 10:44:32 GMT-0700 (Pacific Daylight Time), Ghanshyam Mann <gmann@ghanshyammann.com> wrote:
I am OK with the tempest change, but IMO the oslo.log change is backward-incompatible without adding much value. Maybe it should check whether a global request-id is present and otherwise skip it?
+1, it feels like this is a bug in oslo.log. Prior to the global_request_id change, it did the following test before using the logging_context_format_string [2]:

    if record.__dict__.get('request_id'):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string

which protected against a KeyError in the case of request_id. It seems like something similar needs to be added for global_request_id.

-melanie

[2] https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0...
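To make that concrete, one rough, untested reading of the suggestion, written as a continuation of the fragment quoted above rather than against the current oslo.log code:

    # Hypothetical sketch: treat global_request_id like request_id and
    # fall back to the default format whenever either field is missing.
    if (record.__dict__.get('request_id')
            and record.__dict__.get('global_request_id')):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string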
Hello,

On Wed, 11 May 2022 at 00:20, melanie witt <melwittt@gmail.com> wrote:
I am OK with the tempest change, but IMO the oslo.log change is backward-incompatible without adding much value. Maybe it should check whether a global request-id is present and otherwise skip it?
I agree with that; we need to handle this point better in oslo.log.
+1, it feels like this is a bug in oslo.log. Prior to the global_request_id change, it did the following test before using the logging_context_format_string [2]:
    if record.__dict__.get('request_id'):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string
which protected against a KeyError in the case of request_id. It seems like something similar needs to be added for global_request_id.
If we apply the same logic for `global_request_id`, then we open the door to an implicit replacement of the format being used. For example, if `request_id` is given but `global_request_id` is not, we would implicitly override the context format with the default format. I'd rather suggest initializing `global_request_id` with a default value (`None`), as we did previously with other context format variables => https://opendev.org/openstack/oslo.log/commit/c50d3d633ba

Thoughts?
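A rough, untested sketch of that alternative, again as a continuation of the fragment quoted above rather than the actual patch:

    # Hypothetical sketch: default the new field on the record instead
    # of changing which format string gets selected, so a context format
    # containing %(global_request_id)s can always be rendered.
    record.__dict__.setdefault('global_request_id', None)
    if record.__dict__.get('request_id'):
        fmt = self.conf.logging_context_format_string
    else:
        fmt = self.conf.logging_default_format_string

Unlike the variant sketched earlier, a record that carries a request_id but no global_request_id keeps the context format; the missing field simply renders as None.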
[2] https://github.com/openstack/oslo.log/blob/ebdee7f39920ad5b4268ee296952432b0...
--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
I just submitted a patch to fix this issue on the oslo.log side; reviews are welcome: https://review.opendev.org/c/openstack/oslo.log/+/841383
--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
participants (5)
- Ghanshyam Mann
- Herve Beraud
- Matthew Thode
- melanie witt
- Stephen Finucane