GODRIVER-2898 Standardized performance testing #1829
Summary
Migrate the existing benchmark pattern from the harness model to the benchmark facility provided in the Go testing package. This migration has several benefits:
Measuring performance requires running the benchmarks, collecting the results, and storing them as test objects, which are defined by the poplar package. To run all benchmarking tests, collect the results, and write a perf.json file, we use the testing.Benchmark function:
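As a minimal sketch of this pattern (the `perfEntry` struct and `collectResult` helper below are hypothetical stand-ins; the real record shape is defined by the poplar package), `testing.Benchmark` runs a benchmark function programmatically and returns a `BenchmarkResult` that can be serialized into perf.json:

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// perfEntry is a hypothetical shape for one perf.json record;
// the real schema comes from the poplar package.
type perfEntry struct {
	Name    string `json:"name"`
	Ops     int    `json:"ops"`
	NsPerOp int64  `json:"ns_per_op"`
}

// collectResult runs a benchmark function via testing.Benchmark and
// packages the aggregate result as a perfEntry.
func collectResult(name string, fn func(*testing.B)) perfEntry {
	result := testing.Benchmark(fn)
	return perfEntry{Name: name, Ops: result.N, NsPerOp: result.NsPerOp()}
}

func main() {
	// A stand-in benchmark body; a real driver benchmark would
	// exercise an operation such as a query or an insert.
	entry := collectResult("BenchmarkSprintf", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = fmt.Sprintf("%d", i)
		}
	})
	out, _ := json.Marshal(entry)
	fmt.Println(string(out))
}
```

Because `testing.Benchmark` can be called from an ordinary test (or any program), a single wrapper test can iterate over every benchmark, gather the results, and write the combined perf.json in one pass.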
Unfortunately, since there is no way to pass a *testing.T to the benchmark, assertions made inside the benchmark will not surface as test failures when running this test. To catch these cases, we fail the test if the benchmark runs zero iterations:
Additionally, developers should run the benchmark directly when debugging locally. To keep the wrapper test from failing during local development when there is an issue, we only fail when the failOnErr flag is set:
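The two points above can be sketched together as follows. The `failOnErr` flag name comes from the text; the `benchmarkConnect` body and the wiring around it are assumptions, not the driver's actual code. A benchmark whose body aborts early (e.g. via b.Fatal) reports zero iterations, which is the signal the wrapper checks:

```go
package main

import (
	"flag"
	"fmt"
	"testing"
)

// failOnErr gates whether a zero-iteration benchmark fails the wrapper;
// local runs leave it unset so debugging is not interrupted.
var failOnErr = flag.Bool("failOnErr", false,
	"fail when a benchmark records no iterations")

// benchmarkConnect is a hypothetical stand-in for a real driver benchmark.
func benchmarkConnect(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = make([]byte, 256)
	}
}

// benchmarkPassed reports whether a result should be treated as passing:
// zero iterations only counts as a failure when failOnErr is set.
func benchmarkPassed(result testing.BenchmarkResult, failOnErr bool) bool {
	return !(result.N == 0 && failOnErr)
}

func main() {
	flag.Parse()
	result := testing.Benchmark(benchmarkConnect)
	if !benchmarkPassed(result, *failOnErr) {
		fmt.Println("FAIL: benchmark completed zero iterations")
		return
	}
	fmt.Println("ok, iterations:", result.N)
}
```

In CI the wrapper would run with `-failOnErr` so a silently broken benchmark fails the build, while a plain local `go test -bench` run is unaffected.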
The ultimate goal of performance testing is to report significant “change points” to the team. This is accomplished by calculating the E-coefficient of inhomogeneity (H-score) between the perf.json results of successive commits. The coefficient is calculated using the equation in the linked wiki article. It is notably a distance function: the larger the H-score, the more inhomogeneous the data, and an H-score of 0 means the distributions are identical. The value always lies in [0, 1]. The server defaults to an H-score threshold of 0.6 per PERF-3904, as this value was determined to be unlikely to yield false positives while still capturing the maximum number of inhomogeneous cases.
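For intuition, here is a sketch of the standard energy-statistics form of the coefficient, H = (2A − B − C) / 2A, where A is the mean cross-sample distance and B, C are the mean within-sample distances; the exact server-side computation follows the equation in the linked article, so treat this as illustrative only:

```go
package main

import (
	"fmt"
	"math"
)

// meanAbsDist is the average |x_i - y_j| over all pairs of the two samples.
func meanAbsDist(x, y []float64) float64 {
	var sum float64
	for _, a := range x {
		for _, b := range y {
			sum += math.Abs(a - b)
		}
	}
	return sum / float64(len(x)*len(y))
}

// hScore computes the E-coefficient of inhomogeneity between two samples.
// It is 0 when the samples are identical and approaches 1 as they diverge;
// the result always lies in [0, 1].
func hScore(x, y []float64) float64 {
	a := meanAbsDist(x, y) // cross-sample distance
	if a == 0 {
		return 0
	}
	b := meanAbsDist(x, x) // within-sample distance of x
	c := meanAbsDist(y, y) // within-sample distance of y
	return (2*a - b - c) / (2 * a)
}

func main() {
	baseline := []float64{10, 11, 12}
	regressed := []float64{100, 101, 102}
	fmt.Println(hScore(baseline, baseline))       // identical samples: 0
	fmt.Println(hScore(baseline, regressed) > 0.6) // clear shift: above the 0.6 threshold
}
```

With the 0.6 default threshold, a score like the second case would be flagged as a change point, while run-to-run noise on overlapping distributions stays well below it.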
Background & Motivation
Drivers should ensure that any performance testing, including but not limited to the driver spec performance benchmarks: