Caching of characters #159
MMMHHHHHHHH you can implement a cache, yes. But set a reasonable TTL. Maybe configurable ;)
P.S.: have you considered using Redis for this? :)
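As a rough sketch of the "cache with a configurable TTL" idea: a tiny in-memory map keyed by request, where each entry expires after a configurable number of milliseconds. The class and option names here are illustrative, not from the actual codebase (a real deployment might swap this for Redis, as suggested above).

```javascript
// Minimal sketch of an in-memory cache with a configurable TTL.
// Names (CharacterCache, ttlMs) are illustrative assumptions.
class CharacterCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired entries are evicted lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

A cache hit then skips the DB entirely; a miss falls through to the query and repopulates the entry.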
We have three layers we can cache:
With those three we could probably handle several thousand requests per second.
@kordianbruck said they will probably run the scrapers once a day, so there is no need to hit the database on every request. Actually, there isn't a reason for hitting node / react on every request either, I think. We could put a Varnish in front of most things. Even the TTL can probably be pretty high. My experience with caching is pretty limited, so if anyone wants to correct me, please go for it 😘
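To make a Varnish (or browser) cache effective with a high TTL, the API mainly needs to send the right response headers. A minimal sketch, assuming a one-day max-age to match the daily scraper runs; the helper name is an assumption, not the real code:

```javascript
// Sketch: since the scrapers run only about once a day, responses can carry
// a long Cache-Control max-age, so a proxy like Varnish (or the browser)
// serves them without hitting node at all.
function cacheHeaders(maxAgeSeconds) {
  return {
    'Cache-Control': `public, max-age=${maxAgeSeconds}`,
    // Expires is redundant next to max-age but helps older HTTP/1.0 caches
    Expires: new Date(Date.now() + maxAgeSeconds * 1000).toUTCString(),
  };
}
```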
We could just save the characters to a local JSON file once a day and add a custom route for it.
What are we caching here? The HTTP requests to the API? The Wiki Pages? ??? You can hit the API any time, the API should do caching for any incoming requests if necessary. |
I also don't entirely get your HTTP caching? I hope you don't mean caching entire websites/routes because with JS I have no clue where to start with that and absolutely no time :D |
HTTP caching won't change much for loading the characters; the user still has to download the whole 700 kB.
@yashha filtering is already possible using the find characters endpoint... |
@kordianbruck That doesn't help |
@yashha care to explain why not or should I take wild guesses? |
I think you commented this because I said that we want to filter on the server. It is not really related to the caching problem, because the user still has to download the 700 kB of characters.
If you filter on the server, using the find api endpoint, you will only get the characters that match the search query. We can also setup a 'lite' version of the characters, that would only contain the name for example. There are many possibilities! |
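The 'lite' version could be a simple projection: keep only the fields the list page actually renders and drop everything else, which is what shrinks the payload. The field names (`name`, `slug`) are assumptions about the data shape, not the real schema:

```javascript
// Sketch of a 'lite' character representation: project each record down to
// just the fields the list page needs.
function toLite(characters) {
  return characters.map(({ name, slug }) => ({ name, slug }));
}
```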
Ah now I understand :) |
They are in the docs! https://api.got.show/doc/#api-Characters-FindCharacters |
I saw this already. It provides too little functionality to bring the size down for the user, especially on the list page.
@yashha that is a good idea. The problem is not only the API. It's: instantiating the connection, waiting for the DB connection, and then getting the answer... This should be faster on the real thing anyway, as the API and webservice are hosted on the same machine, but the DB remains an issue (for now).
@sacdallago what are you talking about? The connection to the DB is established when you start the NodeJS server. The only problem here is that the team needs to asynchronously load the characters bit by bit instead of downloading a JSON of 1/2 MB.
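Loading "bit by bit" usually means pagination: the client fetches a page of 20 entries at a time instead of the whole ~700 kB blob. A minimal sketch; the `page`/`limit` names follow common REST conventions and are not the real endpoint's parameters:

```javascript
// Sketch of paginated loading: slice the character list into pages so the
// client only downloads what the current view needs.
function paginate(items, page, limit) {
  const start = (page - 1) * limit; // pages are 1-indexed here
  return {
    page,
    total: items.length,
    results: items.slice(start, start + limit),
  };
}
```

The list page would then request page 1 for its first 20 names and fetch further pages as the user scrolls.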
is this still happening? @kordianbruck's solution seems reasonable. I never understood why we need to load so much data from the API only to show 20 names
I think we have bigger problems to fix and enhancements to do. |
Sounds like a good thing to do; it would reduce the loading time by a lot and additionally shrinks the bytes loaded from the API from 700 kB to 1 kB