
WIP caching #166

Open: wants to merge 2 commits into master
Conversation

@rgarcia (Member) commented Dec 6, 2017

  • adds an x-caching extension that you can configure endpoints with. Only supports adding max-age headers on responses.

  • adds caching support for Go clients via https://github.com/gregjones/httpcache (see the sketch after this list)

not done:

  • gregjones/httpcache has an in-memory cache we could use, but it doesn't limit the size of the cache in any way; wag should probably wrap it to enforce a limit

  • wag should report a counter of cache hits/misses

  • caching in the JS client 😨
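To make the client-side piece concrete, here is a minimal sketch of how gregjones/httpcache honors the max-age headers that x-caching emits. The endpoint URL is hypothetical and the cache is wired straight into an http.Client rather than through the generated wag client's SetCache(...) hook, so treat this as an illustration, not the generated code.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gregjones/httpcache"
)

func main() {
	// NewMemoryCacheTransport returns an http.RoundTripper backed by the
	// library's (unbounded) in-memory cache. It respects Cache-Control
	// max-age on responses, which is exactly what x-caching adds.
	transport := httpcache.NewMemoryCacheTransport()
	client := &http.Client{Transport: transport}

	// Hypothetical endpoint configured with x-caching: a repeat GET within
	// the max-age window is served from the cache, and httpcache marks it
	// with the X-From-Cache header.
	for i := 0; i < 2; i++ {
		resp, err := client.Get("http://localhost:8080/v1/books/1")
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println("served from cache:", resp.Header.Get(httpcache.XFromCache) != "")
	}
}
```

The unbounded in-memory cache here is the same limitation called out in the first "not done" bullet; the lrucache suggestion later in the thread is one way to bound it.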

@rgarcia (Member, Author) commented Dec 19, 2017

@kvigen @samfishman this morning @Sayan- and I added cache hit/miss logging to this.

I think we can use https://github.com/die-net/lrucache as the cache implementation, since this will give us byte-size limiting and also some nice-to-have LRU eviction logic (sketched below).

I think the next steps from here are:

  • (optional) if @samfishman wants to have a go at the JS client, add JS client caching
  • Get this reviewed/merged to roll a new wag release. None of the new code gets triggered unless a client calls SetCache(...) or a server adds x-caching, so this step should be safe.
  • Change a client/server pair to use this, most likely hall-monitor + app-service
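As a sketch of what that might look like in client code, assuming die-net/lrucache (whose New takes a max size in bytes and a max age in seconds) as the httpcache.Cache implementation; the books constructor and the SetCache(...) signature are assumptions, since the generated client API isn't shown in this thread:

```go
package main

import (
	"github.com/die-net/lrucache"
	"github.com/gregjones/httpcache"
)

func main() {
	// Cap the cache at 20 MB of response bytes and additionally expire
	// entries after 10 minutes; both numbers are placeholders.
	cache := lrucache.New(20*1024*1024, 600)

	// lrucache satisfies the httpcache.Cache interface, so it can back an
	// httpcache transport directly...
	transport := httpcache.NewTransport(cache)
	_ = transport

	// ...or, per this PR, be handed to a generated wag client, roughly:
	//   client := books.New("http://app-service")
	//   client.SetCache(cache)
	// (constructor name and SetCache signature are assumptions here)
}
```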

@kvigen (Contributor) left a comment:

looks good!

Have we vetted https://github.com/die-net/lrucache carefully? It doesn't have a ton of GitHub stars, and LRU cache performance can be tricky. An alternative might be https://github.com/karlseguin/ccache, but I don't know how difficult that would be to integrate with httpcache.

Also, thinking more about app-service + HM, I'm realizing we have a challenge: when someone makes a change in HM, they expect to see the change show up immediately. Not sure if this is going to cause problems with the endpoints we're thinking about, or whether we should think about that case too.

@@ -1,5 +1,3 @@
// Code generated by go-swagger; DO NOT EDIT.

Contributor: I assume this is from a go-swagger update?

require.NoError(t, err)
}
require.Equal(t, controller.getCachedCount, 2, "expected a second request to go through")

Contributor: nit: extra line

type cacheHitCounter struct {
httpcache.Cache
hits int64
misses int64

Contributor: probably not worth it yet, but I could see us wanting to know this on a per-endpoint basis
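Since the diff only shows the fields, here is a sketch of how a wrapper like cacheHitCounter could tally hits and misses around an httpcache.Cache; the Get override and atomic counters are assumptions about the implementation, not the PR's actual code.

```go
package client

import (
	"sync/atomic"

	"github.com/gregjones/httpcache"
)

// cacheHitCounter wraps an httpcache.Cache and counts lookups so the client
// can log or report hit/miss totals.
type cacheHitCounter struct {
	httpcache.Cache
	hits   int64
	misses int64
}

// Get delegates to the wrapped cache and bumps the matching counter.
func (c *cacheHitCounter) Get(key string) ([]byte, bool) {
	data, ok := c.Cache.Get(key)
	if ok {
		atomic.AddInt64(&c.hits, 1)
	} else {
		atomic.AddInt64(&c.misses, 1)
	}
	return data, ok
}
```

Per-endpoint counts, as suggested above, would mean keying counters by operation (or request path) instead of keeping a single pair of totals per cache.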

@@ -191,6 +191,20 @@ COPY bin/my-wag-service /usr/bin/my-wag-service
that exposes `map`, `forEach`, and `toArray` functions to iterate over the
results, again requesting new pages as needed.

### Caching
* Wag can add `max-age` cache-control headers on responses if you use the `x-caching`

Contributor: nit: mention that max-age is in seconds

operationId: getBookByIDCached
description: Retrieve a book
x-caching:
  max-age: 3

Contributor: does it make sense to make this 1 so the tests run more quickly, or does that make the tests unstable?
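For reference, the observable effect of the x-caching block above is just a Cache-Control header on the response. A hand-written net/http illustration of that behavior follows (the route and payload are made up, and this is not wag's generated handler):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// getBookByIDCached shows what `x-caching: {max-age: 3}` means for a response:
// clients (including gregjones/httpcache) may treat it as fresh for 3 seconds.
func getBookByIDCached(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Cache-Control", "max-age=3")
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, `{"id":"1","name":"example book"}`)
}

func main() {
	http.HandleFunc("/v1/books/1", getBookByIDCached)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

On the test-speed question: a max-age of 1 second shrinks the freshness window the test has to hit, which is where the flakiness risk would come from.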
