KATT for load testing?! #52

Open
andreineculau opened this issue Nov 4, 2016 · 7 comments
@andreineculau
Member

After a failed rollercoaster trying to find a load-testing tool (ab, httperf, siege, tsung, loader.io, loadimpact, ..., some others still to be investigated), I was thinking "how hard can it be" (tm) to turn KATT into a "simple" load-testing mode:

  1. new params:
     • number of total runs
     • number of workers (runs in parallel)
     • load-test timeout
  2. run 1 scenario, confirm that it passes
  3. start the load test
  4. output statistics: for each transaction, latency (min, max, average, std dev)
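The steps above could be sketched roughly as follows. This is a minimal illustration, not KATT code (KATT itself is Erlang): `run_scenario` is a stand-in for running one scenario, and the parameter names mirror the proposed options.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


def run_scenario():
    """Stand-in for running one KATT scenario.

    Returns a dict of transaction name -> latency in seconds.
    """
    t0 = time.perf_counter()
    # ... perform the scenario's HTTP transactions here ...
    return {"GET /example": time.perf_counter() - t0}


def load_test(total_runs, workers, timeout_s):
    # Run one scenario first to confirm it passes (raises on failure).
    run_scenario()

    # Spread `total_runs` runs over `workers` parallel workers,
    # bounded by an overall load-test timeout.
    latencies = {}  # transaction name -> list of latency samples
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_scenario) for _ in range(total_runs)]
        for fut in as_completed(futures, timeout=timeout_s):
            for name, latency in fut.result().items():
                latencies.setdefault(name, []).append(latency)

    # Per-transaction latency statistics.
    return {
        name: {
            "min": min(samples),
            "max": max(samples),
            "average": statistics.mean(samples),
            "std dev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        }
        for name, samples in latencies.items()
    }
```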

pinging @sstrigler @dmitriid @isakb (either to laugh at me, or to bring in some constructive criticism :) )

@sstrigler

😻

@sstrigler

Maybe in some later iteration of this: how about adding weights and classes (tags) to certain operations? The more weight an operation has, the more likely it is to be called. Classes/tags can be used to group metrics. Both could be done via annotations (comments) in the blueprint.
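For illustration, annotation parsing plus a weighted pick could look like the sketch below. The `# KATT weight=... tags=...` comment syntax is made up for this example, not something KATT supports:

```python
import random
import re

# Hypothetical blueprint annotation, e.g.:
#   # KATT weight=5 tags=checkout,critical
ANNOTATION = re.compile(r"#\s*KATT\s+weight=(\d+)\s+tags=([\w,]+)")


def parse_annotation(comment):
    """Extract (weight, tags) from an annotation comment; defaults to weight 1, no tags."""
    m = ANNOTATION.search(comment)
    if not m:
        return 1, []
    return int(m.group(1)), m.group(2).split(",")


def pick_operation(operations):
    """Pick an operation with probability proportional to its weight.

    `operations` is a list of dicts with at least a "weight" key.
    """
    weights = [op["weight"] for op in operations]
    return random.choices(operations, weights=weights, k=1)[0]
```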

@sstrigler

Metrics could be collected by exometer or the like, so that you can send them directly to Grafana and co.
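As a rough illustration of what shipping a latency sample toward a Graphite/Grafana stack involves, here is a StatsD-style UDP emitter (the metric name is made up; exometer itself is an Erlang library with its own reporter plugins, so this is only a sketch of the wire idea):

```python
import socket


def send_timing(host, port, metric, millis):
    """Emit one latency sample in the StatsD line format ("name:value|ms"),
    which Graphite/Grafana stacks commonly ingest over UDP."""
    line = f"{metric}:{millis:.3f}|ms"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()
```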

@isakb
Contributor

isakb commented Nov 4, 2016

+1



@andreineculau
Member Author

started work in the metrics branch https://github.com/for-GET/katt/compare/metrics

  • report start, end, latency per transaction
  • concurrent workers
  • worker constraints

NOTE: KISS for now (ever?). I briefly looked at hackney's metrics (which interface with folsom, exometer, grapherl), mxbench, and tsung. Timewise, I cannot afford a more in-depth analysis, and I'd need one in order to go down that path.

@andreineculau
Member Author

@sstrigler I don't think weights are possible, since KATT runs scenarios, not standalone requests. So the goal I have is to take 1 scenario and run it n times, in m parallel workers. Am I missing something?

Tagging requests, on the other hand, is of course possible. Even without a designed mechanism for it, one can still do it via a custom HTTP request header, given that the transport overhead is negligible: X-My-Custom-Tags: tag1, tag2, tag3. But maybe tagging is more useful in the context of metrics, though as I'm doing it now, the statistics are up to the consumer to produce based on the metrics.
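On the consumer side, grouping latency samples by such tags could be as simple as this sketch (the sample format here is assumed for illustration, not KATT output):

```python
import statistics
from collections import defaultdict


def aggregate_by_tag(samples):
    """Average latency per tag.

    `samples` is a list of (tags, latency) pairs, where the tags could come
    from a custom header such as X-My-Custom-Tags.
    """
    by_tag = defaultdict(list)
    for tags, latency in samples:
        for tag in tags:
            by_tag[tag].append(latency)
    return {tag: statistics.mean(ls) for tag, ls in by_tag.items()}
```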

@sstrigler

Ok, yeah, I see now that that was stupid in the context of individual requests. But one could weight individual scenarios, like tsung does: a test run runs different scenarios, each with a given percentage, and the percentages have to sum up to 100% of course.
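A sketch of what tsung-style scenario weighting could look like (the `(name, percentage)` format is assumed for illustration):

```python
import random


def pick_scenario(weighted_scenarios):
    """Pick one scenario per run, honoring per-scenario percentages.

    `weighted_scenarios` is a list of (scenario_name, percentage) pairs
    whose percentages must sum to 100, as in tsung's probability setting.
    """
    total = sum(pct for _, pct in weighted_scenarios)
    assert total == 100, "scenario percentages must sum to 100%"
    r = random.uniform(0, 100)
    acc = 0
    for name, pct in weighted_scenarios:
        acc += pct
        if r <= acc:
            return name
    return weighted_scenarios[-1][0]  # guard against float rounding
```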


@andreineculau andreineculau self-assigned this Jan 5, 2017
@andreineculau andreineculau modified the milestone: 1.7.0 Jan 7, 2017
@andreineculau andreineculau removed this from the 1.7.0 milestone May 19, 2019