
Print useful statistics of the response time in the final summary when using the --repeat option #2973

Open
sankalp-khare opened this issue Jun 28, 2024 · 5 comments


sankalp-khare commented Jun 28, 2024

Problem to solve

If I use something like --repeat 10000, I get the response time of each request on its own line, but what would be more useful to me as a user is aggregate information such as the mean, median, p95, etc. of the response times.

Proposal

In addition to "Duration:", which prints the total runtime of the hurl session, add more details to the summary that help the user understand the request latency trend observed during their session. I imagine something like:

Executed files:    50
Executed requests: 50 (5.8/s)
Succeeded files:   50 (100.0%)
Failed files:      0 (0.0%)
Duration:          8559 ms
Latency (average): 976 ms
Latency (median):  701 ms
Latency (p95):     1400 ms
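
These aggregates only require keeping the per-request durations around until the summary is printed. Below is a minimal, illustrative sketch of the computation, not hurl's actual code; sorting plus a nearest-rank percentile is just one possible approach, and the sample durations are made up.

// Illustrative only: aggregate stats over per-request durations in milliseconds.
fn latency_summary(mut durations_ms: Vec<u64>) -> Option<(f64, u64, u64)> {
    if durations_ms.is_empty() {
        return None;
    }
    durations_ms.sort_unstable();
    let n = durations_ms.len();
    let average = durations_ms.iter().sum::<u64>() as f64 / n as f64;
    // Upper median; good enough for a one-line summary.
    let median = durations_ms[n / 2];
    // Nearest-rank p95: smallest sample with at least 95% of values at or below it.
    let p95_index = ((n as f64 * 0.95).ceil() as usize).saturating_sub(1);
    let p95 = durations_ms[p95_index];
    Some((average, median, p95))
}

fn main() {
    // Hypothetical durations collected during a --repeat run.
    let durations = vec![701, 850, 976, 1200, 1400];
    if let Some((average, median, p95)) = latency_summary(durations) {
        println!("Latency (average): {:.0} ms", average);
        println!("Latency (median):  {} ms", median);
        println!("Latency (p95):     {} ms", p95);
    }
}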

Additional context and resources

This felt like something I would want when I first used hurl today to test an endpoint with the --repeat option.

Tasks to complete

  • ...
sankalp-khare added the enhancement (New feature or request) label on Jun 28, 2024

jcamiel commented Jun 28, 2024

Hi @sankalp-khare

Indeed, this kind of information could be useful. We need to find the right balance between too little and too much information.

By "latency" you mean "request duration" (the time between the start of the response and the end of the response transfer), right?

We can bikeshed some outputs (taking k6 as inspiration):

Executed files:    50
Executed requests: 50 (5.8/s)
Request durations: 976 ms (average) 701 ms (median)  1400 ms (p95)
Succeeded files:   50 (100.0%)
Failed files:      0 (0.0%)
Total duration:    8559 ms
[screenshot: k6 results stdout]

@sankalp-khare (Author)

Yes, I mean request duration. Putting all the aggregate stats on a single line looks good to me!

@sankalp-khare (Author)

It would also be useful to print a distribution of the received response codes: for each code seen in the responses, show how many requests received that code (count) and what percentage of the total responses it represents.
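
For illustration, that tally could look roughly like the sketch below; the status codes are made-up input and hurl's internal representation will differ.

use std::collections::BTreeMap;

fn main() {
    // Hypothetical status codes collected from a --repeat run.
    let status_codes: Vec<u16> = vec![200, 200, 200, 500, 200, 429, 200];
    let total = status_codes.len();

    // Tally how many responses carried each status code.
    let mut counts: BTreeMap<u16, usize> = BTreeMap::new();
    for code in &status_codes {
        *counts.entry(*code).or_insert(0) += 1;
    }

    // Print count and share per status code, e.g. "200: 5 (71.4%)".
    for (code, count) in &counts {
        let percent = *count as f64 * 100.0 / total as f64;
        println!("{}: {} ({:.1}%)", code, count, percent);
    }
}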


jcamiel commented Jul 6, 2024

We need to find a balance for the test summary between too much information and too little. To perform solid stats on tests, you can use the --json option. This structured view of a test should be sufficient to export indicators.

jcamiel added this to the 5.1.0 milestone on Sep 5, 2024

jcamiel commented Sep 5, 2024

Executed files:    50
Executed requests: 50 (5.8/s)
Request
  duration: 4000 ms
  average:  976 ms
  median:   701 ms
  min:      701 ms
  max:      976 ms
  p90:      100 ms
  p95:      100 ms
Succeeded files:   50 (100.0%)
Failed files:      0 (0.0%)
Total duration:    8559 ms
