Microbursts when configured with a higher-than-necessary number of connections #89

Open
jchaiy opened this issue Jan 28, 2020 · 5 comments

Comments


jchaiy commented Jan 28, 2020

Hi,
I currently have a setup where the application under test has a ring buffer for incoming HTTP requests, and worker threads pull requests from it as they become available. We monitor how long requests wait in this queue, and we noticed that the queue wait time increases dramatically when wrk2 is configured with more connections. This happens consistently with no other changing factors and is nonexistent in our production environment.

My theory is that wrk2 generates microbursts when it has many connections available, then introduces delays to keep the configured rate. This causes requests to sit in the queue longer than they do in our production environment.

When we set up the application with an artificial static response time and calculated the minimum number of connections needed for wrk2 to sustain the required rate, the issue went away. Spreading the load across multiple wrk2 instances also seems to alleviate it. Both experiments seem to support my theory.
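For concreteness, here is a hedged sketch of the two experiments. The host, port, rates, and thread counts are hypothetical placeholders, but the flags are standard wrk2 usage (`-t` threads, `-c` connections, `-d` duration, `-R` target rate):

```sh
# Hypothetical numbers: ~100 ms static response time and a 1000 req/s target,
# so ~100 connections is the calculated minimum (see the calculation further down).

# Experiment 1: size the connection pool to the calculated minimum.
wrk -t4 -c100 -d60s -R1000 http://test-host:8080/

# Experiment 2: same aggregate rate, spread across two wrk2 instances,
# each getting half the connections and half the rate.
wrk -t2 -c50 -d60s -R500 http://test-host:8080/   # instance A
wrk -t2 -c50 -d60s -R500 http://test-host:8080/   # instance B
```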

I'm curious if anyone else has experienced something similar or knows of a way to avoid the bursts.


Kadle11 commented Mar 3, 2021

@jchaiy I have a question about your setup, just to help me understand how to use wrk2 properly.

When we set up the application with an artificial static response time and calculated the minimum number of connections needed for wrk2 to sustain the required rate, the issue went away.

How did you calculate the minimum number of connections required to sustain the required rate?


jchaiy commented Mar 16, 2021

@Kadle11 Assuming a constant response time of, say, 100 ms and zero network latency, a single connection can handle a rate of 1 s / 100 ms = 10 requests per second. So if you want a rate of 100 requests per second, you need 10 connections. This is why the behavior described above is odd: increasing the number of connections beyond what is needed to sustain the response time plus round-trip latency should have no effect on the downstream component, provided wrk2 sends the requests uniformly.
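As a hedged illustration of that arithmetic (the host and duration below are placeholders; this is just the sizing rule applied to a wrk2 invocation):

```sh
# connections >= target_rate * response_time  (a Little's law estimate)
# e.g. 100 req/s * 0.100 s = 10 connections, so a minimally sized run is:
wrk -t1 -c10 -d60s -R100 http://test-host:8080/
```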


Kadle11 commented Mar 23, 2021

@jchaiy Thank you for the clarification! :)


Kadle11 commented Apr 29, 2021

@jchaiy I came across this PR: #100. Is this something that might fix the issue you were facing?


jchaiy commented Apr 29, 2021

@Kadle11 I believe it might! Thanks for the pointer.
