[SMP] Establish experiment naming prefix convention for Quality Gates #30273
Conversation
[Fast Unit Tests Report] On pipeline 46927687 (CI Visibility), the following jobs did not run any unit tests:
If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.
…o dual-ship data during a transition to new naming scheme
Compare a14e471 to 51f22c1
# Agent 'all features enabled' idle experiment. Represents an agent install with
# all sub-agents enabled in configuration and no active workload.
What do you think about adding a blurb about quality gates and a link to our docs in these?
Yes, I plan to add this once the corresponding Confluence docs exist.
👀 🙈
Regression Detector

Regression Detector Results

Run ID: 75a9b266-100c-4218-b789-09684ecc77ae

Metrics dashboard · Target profiles

Baseline: 16c16b2

Performance changes are noted in the perf column of each table:

No significant changes in experiment optimization goals

Confidence level: 90.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
➖ | quality_gate_idle_all_features | memory utilization | +1.77 | [+1.64, +1.89] | 1 | Logs bounds checks dashboard |
➖ | pycheck_lots_of_tags | % cpu utilization | +0.67 | [-1.84, +3.18] | 1 | Logs |
➖ | file_tree | memory utilization | +0.52 | [+0.37, +0.68] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.38 | [-0.32, +1.09] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | +0.12 | [+0.07, +0.16] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.08 | [-0.10, +0.26] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.03 | [-0.31, +0.36] | 1 | Logs |
➖ | basic_py_check | % cpu utilization | +0.01 | [-2.71, +2.74] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.21, +0.23] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.10, +0.11] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | -0.12 | [-0.18, -0.07] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.19 | [-0.43, +0.06] | 1 | Logs |
➖ | idle_all_features | memory utilization | -0.20 | [-0.31, -0.08] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.32 | [-0.81, +0.16] | 1 | Logs |
➖ | idle | memory utilization | -0.38 | [-0.42, -0.33] | 1 | Logs bounds checks dashboard |
➖ | otel_to_otel_logs | ingress throughput | -1.20 | [-2.00, -0.40] | 1 | Logs |
Bounds Checks
| perf | experiment | bounds_check_name | replicates_passed |
|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 |
✅ | idle | memory_usage | 10/10 |
✅ | idle_all_features | memory_usage | 10/10 |
✅ | quality_gate_idle | memory_usage | 10/10 |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
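To make the "Δ mean % CI" column concrete, here is a rough sketch of how such an interval could be computed from raw per-trial samples. The Welch t-interval, the function name `delta_mean_pct_ci`, and the use of SciPy are all assumptions for illustration; the Regression Detector's actual statistical model may differ.

```python
# Illustrative only: estimate "Δ mean %" and its 90% CI from two samples.
# The Welch t-interval here is an assumed model, not SMP's real method.
import numpy as np
from scipy import stats

def delta_mean_pct_ci(baseline, comparison, confidence=0.90):
    """Return (delta_mean_pct, (lo_pct, hi_pct)) for comparison vs. baseline."""
    b = np.asarray(baseline, dtype=float)
    c = np.asarray(comparison, dtype=float)
    diff = c.mean() - b.mean()
    vb, vc = b.var(ddof=1) / b.size, c.var(ddof=1) / c.size
    se = np.sqrt(vb + vc)                       # std. error of the difference
    dof = (vb + vc) ** 2 / (vb**2 / (b.size - 1) + vc**2 / (c.size - 1))
    t = stats.t.ppf(0.5 + confidence / 2, dof)  # two-sided critical value
    pct = 100.0 / b.mean()                      # express as % of baseline mean
    return diff * pct, ((diff - t * se) * pct, (diff + t * se) * pct)
```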
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true (the sketch after this list restates the rule as code):

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
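Taken together, the three criteria amount to a simple predicate. The sketch below restates them in Python; the function and its parameter names are hypothetical, taken from the text above rather than from the Regression Detector's code.

```python
# Hypothetical restatement of the decision rule described above.
def is_regression(delta_mean_pct, ci, erratic, threshold_pct=5.0):
    lo, hi = ci
    big_enough = abs(delta_mean_pct) >= threshold_pct        # criterion 1
    ci_excludes_zero = lo > 0 or hi < 0                      # criterion 2
    return big_enough and ci_excludes_zero and not erratic   # criterion 3

# quality_gate_idle_all_features from the table above: its CI excludes
# zero, but |Δ mean %| is well under 5.00%, so it is not flagged.
print(is_regression(1.77, (1.64, 1.89), erratic=False))  # False
```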
/merge

🚂 MergeQueue: pull request added to the queue.
What does this PR do?
Duplicates the two existing quality-gate experiments under names prefixed with `quality_gate_`, so that both copies dual-ship data during the transition to the new naming scheme.
Motivation
Some SMP experiments are designed to be the most representative of overall Datadog Agent behavior; these have strict upper bounds and are audited in more depth than others.
These "Quality Gate" experiments will be identified by this
quality_gate_
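As a toy illustration of what the convention buys (not code from this PR), tooling can select quality-gate experiments by prefix alone; the helper name and experiment list below are taken from the results tables above purely for the example.

```python
# Illustrative helper, not part of this PR: select quality-gate
# experiments purely by the naming convention.
QUALITY_GATE_PREFIX = "quality_gate_"

def quality_gate_experiments(names):
    return [n for n in names if n.startswith(QUALITY_GATE_PREFIX)]

# Experiment names taken from the results tables above.
print(quality_gate_experiments(
    ["idle", "quality_gate_idle", "idle_all_features",
     "quality_gate_idle_all_features", "file_tree"]))
# ['quality_gate_idle', 'quality_gate_idle_all_features']
```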
Describe how to test/QA your changes
n/a
Possible Drawbacks / Trade-offs
Additional Notes