
WIP: add caching using KV #163

Open · wants to merge 1 commit into base: main
Conversation

karthik2804 (Contributor):
Without caching in KV:

bombardier -c 1 -n 1000 http://localhost:3000/cloud/data-redis
Bombarding http://localhost:3000/cloud/data-redis with 1000 request(s) using 1 connection(s)
 1000 / 1000 [=========================================================] 100.00% 68/s 14s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec        69.01      24.34     104.92
  Latency       14.48ms     1.66ms    25.97ms
  HTTP codes:
    1xx - 0, 2xx - 1000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     2.20MB/s

With caching in KV:

bombardier -c 1 -n 1000 http://localhost:3000/cloud/data-postgres
Bombarding http://localhost:3000/cloud/data-postgres with 1000 request(s) using 1 connection(s)
 1000 / 1000 [=========================================================] 100.00% 998/s 1s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec      1168.36     367.53    1642.22
  Latency        0.86ms     5.59ms   177.52ms
  HTTP codes:
    1xx - 0, 2xx - 1000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    34.10MB/s

@karthik2804 force-pushed the kv/caching_index branch 2 times, most recently from 220fa47 to 4c53cd8 on March 1, 2023 at 23:06.
@itowlson (Contributor) left a comment:

I found this a bit hard to follow, but I think you are in a bind: there is no real loader layer where you can inject caching behaviour, so you need to inject caching into existing large functions, and that ends up pulling lots of tactical considerations, such as time parsing, right up into the cache and application layers.

Ideally it would be good to start unpicking bits of the structure and adding traits and helper methods (e.g. `if cached.has_expired()`) rather than operating directly on strings and serialisation types. But that would likely require a more significant refactor than you can fit in at this time. Something to think about for the longer term, maybe...
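To make the suggestion concrete, here is a minimal sketch of what such a helper could look like. The `CachedEntry` struct, its field names, and the TTL scheme are all hypothetical and only illustrate the idea of moving expiry logic behind a trait instead of parsing timestamps inline:

```rust
// Hypothetical cached-page entry; names and the TTL scheme are made up
// for illustration and are not taken from the PR.
struct CachedEntry {
    rendered_html: String,
    stored_at_secs: u64, // Unix timestamp when the entry was cached
    ttl_secs: u64,       // how long the entry stays fresh
}

// The helper the review suggests: callers ask `has_expired()` instead of
// doing time arithmetic on raw strings at the call site.
trait Expirable {
    fn has_expired(&self, now_secs: u64) -> bool;
}

impl Expirable for CachedEntry {
    fn has_expired(&self, now_secs: u64) -> bool {
        now_secs >= self.stored_at_secs + self.ttl_secs
    }
}

fn main() {
    let entry = CachedEntry {
        rendered_html: "<h1>Hello</h1>".to_string(),
        stored_at_secs: 1_000,
        ttl_secs: 60,
    };
    assert!(!entry.has_expired(1_030)); // still fresh
    assert!(entry.has_expired(1_060));  // TTL elapsed
    println!("cached {} bytes", entry.rendered_html.len());
}
```

The benefit is that the "is this stale?" decision lives in one place, so the application layer never touches timestamps or serialisation formats directly.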

Review threads on Cargo.toml, src/content.rs, and src/bartholomew.rs (all resolved; most marked outdated).
@karthik2804 (Contributor, Author):
@itowlson, I think I have addressed all the comments; this should be ready for another round of review.

Signed-off-by: Karthik Ganeshram <[email protected]>
@mikkelhegn (Member):
Has this been tested in the cloud? (Not sure if we can.) I'm curious where the performance benefits come from: moving from generating content as the request comes in to reading something pre-generated from the KV store? Is that the main thing?

@karthik2804 (Contributor, Author):
We can't test it on the cloud yet. You are correct about the source of the performance improvement: by keeping a copy of the rendered output and serving it until the cache is invalidated, the entire render cycle is skipped.
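The cache-or-render flow described above can be sketched as follows. This is illustrative only: it uses a `HashMap` as a stand-in for the KV store, and `render_page`, `get_page`, and the key scheme are invented names, not the PR's actual API:

```rust
use std::collections::HashMap;

// Stand-in for the full markdown -> template -> HTML render cycle.
fn render_page(path: &str) -> String {
    format!("<html>rendered: {}</html>", path)
}

// Serve from cache when possible; otherwise render once and store the
// result so later requests skip rendering entirely.
fn get_page(cache: &mut HashMap<String, String>, path: &str) -> String {
    if let Some(hit) = cache.get(path) {
        // Cache hit: the render cycle is skipped.
        return hit.clone();
    }
    // Cache miss: render and populate the cache for subsequent requests.
    let html = render_page(path);
    cache.insert(path.to_string(), html.clone());
    html
}

fn main() {
    let mut cache = HashMap::new();
    let first = get_page(&mut cache, "/cloud/data-postgres");  // rendered
    let second = get_page(&mut cache, "/cloud/data-postgres"); // from cache
    assert_eq!(first, second);
    println!("cache entries: {}", cache.len());
}
```

Invalidation (clearing or expiring entries when content changes) is the part the review discussion focuses on; in this sketch it would amount to removing the key from the map.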

@itowlson (Contributor) left a comment:
LGTM!

3 participants