
Remove global error handler in favor of internal logs #2175

Open
lalitb opened this issue Oct 4, 2024 · 3 comments · May be fixed by #2181

@lalitb
Member

lalitb commented Oct 4, 2024

  • Internal logs support is now available (Improve internal opentelemetry logging #2128), and it would make sense to use it for internal errors.
  • This allows the removal of the global error handler.
  • Current downside: it won’t be possible to register a custom handler for errors at this time. However, the plan is to add support for registering custom handlers for internal logs.
  • If the "internal-logs" feature is enabled, the tracing error macro would be used to handle errors; otherwise, eprintln! would be used.
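The feature-gated dispatch described in the last bullet could look roughly like this. This is an illustrative sketch only: the function name `log_internal_error` and the message format are hypothetical, not opentelemetry-rust's actual API.

```rust
/// Format the message the way the current default handler prints it.
/// (The exact wording here is an assumption for illustration.)
fn format_internal_error(msg: &str) -> String {
    format!("OpenTelemetry error occurred: {msg}")
}

/// Route internal errors based on the "internal-logs" cargo feature.
fn log_internal_error(msg: &str) {
    #[cfg(feature = "internal-logs")]
    {
        // With the feature on, go through the `tracing` crate's error macro.
        tracing::error!("{}", format_internal_error(msg));
    }
    #[cfg(not(feature = "internal-logs"))]
    {
        // Without it, fall back to writing to stderr.
        eprintln!("{}", format_internal_error(msg));
    }
}

fn main() {
    log_internal_error("cannot send message to batch processor as the channel is full");
}
```

Because `#[cfg]` strips the unselected branch at compile time, the `tracing` dependency is only needed when the feature is enabled.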
@mlowicki

mlowicki commented Oct 4, 2024

I saw an issue with using eprintln as the default error handler: it causes a flood of writes to STDERR, like "OpenTelemetry trace error occurred. cannot send message to batch processor as the channel is full". Would it make sense to throttle by default, or at least offer a visible option to enable throttling, to avoid hammering stderr with tons of messages?

@lalitb
Member Author

lalitb commented Oct 4, 2024

I saw an issue with using eprintln as the default error handler: it causes a flood of writes to STDERR, like "OpenTelemetry trace error occurred. cannot send message to batch processor as the channel is full". Would it make sense to throttle by default, or at least offer a visible option to enable throttling, to avoid hammering stderr with tons of messages?

That's a valid concern. I'm not entirely sure how we could achieve this directly, but I'll look at it as a potential improvement for the future. For now, I don't see this as a blocker, since the issue can be mitigated externally, e.g., with a script that throttles the stderr output.
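The throttled default suggested above could be sketched as a simple time-window suppressor. This is purely illustrative; `ThrottledLogger` and its fields are hypothetical names, not part of the crate.

```rust
use std::time::{Duration, Instant};

/// Hypothetical sketch: suppress repeats of an error within a time window,
/// counting how many were dropped so a summary could be emitted later.
struct ThrottledLogger {
    window: Duration,
    last_emit: Option<Instant>,
    suppressed: usize,
}

impl ThrottledLogger {
    fn new(window: Duration) -> Self {
        Self { window, last_emit: None, suppressed: 0 }
    }

    /// Returns true if a message arriving at `now` should actually be emitted.
    fn should_emit(&mut self, now: Instant) -> bool {
        match self.last_emit {
            // Still inside the window: swallow the message and count it.
            Some(t) if now.duration_since(t) < self.window => {
                self.suppressed += 1;
                false
            }
            // Window elapsed (or first message): emit and reset the clock.
            _ => {
                self.last_emit = Some(now);
                true
            }
        }
    }
}
```

A caller would wrap eprintln behind `should_emit`, so a burst of identical "channel is full" errors produces one line per window instead of one per message.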

@cijothomas
Member

For this specific one, I don't think there should be a log entry for "channel full" for each message. It is best exposed as an internal metric only. If a log is needed, it can be emitted at shutdown time: "N items were dropped due to buffer full" or similar.
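The count-and-report-at-shutdown approach described above can be sketched with an atomic counter on the hot path. The type and method names here (`BatchProcessor`, `on_export_failure`) are illustrative assumptions, not the crate's actual API.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical sketch: instead of logging every full-channel failure,
/// count dropped items and report once at shutdown.
struct BatchProcessor {
    dropped: AtomicUsize,
}

impl BatchProcessor {
    fn new() -> Self {
        Self { dropped: AtomicUsize::new(0) }
    }

    /// Called on the hot path when the channel is full: just count, no log.
    fn on_export_failure(&self) {
        self.dropped.fetch_add(1, Ordering::Relaxed);
    }

    /// Emit a single summary line at shutdown instead of one line per message.
    /// Returns the number of dropped items for observability.
    fn shutdown(&self) -> usize {
        let n = self.dropped.swap(0, Ordering::Relaxed);
        if n > 0 {
            eprintln!("{n} items were dropped due to buffer full");
        }
        n
    }
}
```

A relaxed atomic keeps the failure path cheap, which matters precisely because this code runs when the pipeline is already overloaded.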

@cijothomas cijothomas added this to the 0.27 milestone Oct 7, 2024
@lalitb lalitb linked a pull request Oct 8, 2024 that will close this issue

3 participants