This repository has been archived by the owner on Apr 15, 2019. It is now read-only.

Caching of characters #159

Open
yashha opened this issue Mar 26, 2016 · 20 comments

@yashha
Collaborator

yashha commented Mar 26, 2016

@mammuth
server-component can easily cache the characters pretty long, since they don't change that often.

Sounds like a good thing to do; it would reduce the loading time a lot and additionally shrink the payload loaded from the API from 700 kB to 1 kB.

@sacdallago
Contributor

MMMHHHHHHHH you can implement a cache, yes. But set a reasonable TTL. Maybe configurable ;)

@sacdallago
Contributor

P.S.: have you considered using Redis for this? :)

@mammuth
Collaborator

mammuth commented Mar 27, 2016

We have three layers we can cache:

With those three we could probably handle several thousand requests per second.
So no, we didn't consider adding yet another layer just for the purpose of adding more complexity. 😉
(imho)

@mammuth
Collaborator

mammuth commented Mar 27, 2016

@kordianbruck said they will probably run the scrapers once a day.
Actually one could say the whole app is more or less 'static'.

So there is no need to hit the database on every request. Actually there isn't a reason to hit node / react on every request either, I think.

We could add a Varnish in front of most things. Even the TTL could probably be pretty high.

My experience with caching is pretty limited, so if anyone wants to correct me, please go for it 😘
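
For illustration, something like the following could make the responses cacheable so a proxy like Varnish (or the browser) keeps them for a while; the route path, TTL and handler body are only placeholders, not how the app actually does it:

```js
// Minimal sketch of HTTP-level caching, assuming the API is an Express app.
// The route path and handler body are placeholders, not the real ones.
const express = require('express');
const app = express();

// Placeholder for however the characters are actually loaded (DB, file, ...).
function getAllCharacters() {
  return [{ name: 'Jon Snow' }];
}

app.get('/api/characters', (req, res) => {
  // max-age is honoured by browsers, s-maxage by shared caches such as Varnish.
  res.set('Cache-Control', 'public, max-age=3600, s-maxage=3600');
  res.json(getAllCharacters());
});

app.listen(3000);
```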

@yashha
Collaborator Author

yashha commented Mar 27, 2016

We could just save the characters to a local JSON file once a day and add a custom route for it.
http://merencia.com/node-cron/
We could even make the interval configurable.
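
Roughly something like this, for example (the API URL, schedule and file path are only assumptions, not what we actually use):

```js
// Rough sketch of the node-cron idea: dump the characters to a local JSON file
// once a day and serve that file from a custom route.
const cron = require('node-cron');
const express = require('express');
const https = require('https');
const fs = require('fs');
const path = require('path');

const app = express();
const CACHE_FILE = path.join(__dirname, 'characters-cache.json');
const CHARACTERS_URL = 'https://api.got.show/api/characters/'; // assumed endpoint

// Download all characters and write them to the local cache file.
function refreshCache() {
  https.get(CHARACTERS_URL, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => fs.writeFileSync(CACHE_FILE, body));
  });
}

refreshCache();                            // warm the cache on startup
cron.schedule('0 3 * * *', refreshCache);  // refresh once a day at 03:00

// Custom route that serves the cached file instead of hitting the API/DB.
app.get('/cached/characters', (req, res) => {
  res.type('json').sendFile(CACHE_FILE);
});

app.listen(3000);
```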

@kordianbruck
Collaborator

What are we caching here? The HTTP requests to the API? The Wiki Pages? ???

You can hit the API any time; the API should do caching for incoming requests if necessary.

@sacdallago
Contributor

I also don't entirely get the HTTP caching idea. I hope you don't mean caching entire websites/routes, because with JS I have no clue where to start with that and absolutely no time :D

@yashha
Collaborator Author

yashha commented Mar 30, 2016

HTTP caching won't change much for loading the characters; the user still has to download the whole 700 kB.
We need something like https://www.npmjs.com/package/memcached
And/or we could implement the sorting and filtering on the server.
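
For example, something along these lines could sit between the route and the database; the key name, TTL and route path are only assumptions:

```js
// Sketch of putting the memcached package between the route and the database.
// loadCharactersFromDb is a placeholder for the real query.
const Memcached = require('memcached');
const express = require('express');

const app = express();
const memcached = new Memcached('localhost:11211'); // assumed memcached address
const TTL_SECONDS = 24 * 60 * 60; // one day, matching the scraper interval

function loadCharactersFromDb(callback) {
  callback(null, [{ name: 'Arya Stark' }]); // placeholder data
}

app.get('/api/characters', (req, res) => {
  memcached.get('characters', (err, cached) => {
    if (!err && cached) return res.json(cached); // cache hit, skip the DB
    loadCharactersFromDb((dbErr, characters) => {
      if (dbErr) return res.status(500).end();
      memcached.set('characters', characters, TTL_SECONDS, () => {});
      res.json(characters);
    });
  });
});

app.listen(3000);
```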

@kordianbruck
Collaborator

@yashha filtering is already possible using the find characters endpoint...

@yashha
Collaborator Author

yashha commented Mar 30, 2016

@kordianbruck That doesn't help

@kordianbruck
Collaborator

@yashha care to explain why not or should I take wild guesses?

@yashha
Collaborator Author

yashha commented Mar 30, 2016

I think you commented this because I said we want to filter on the server. It is not really related to the caching problem, because the user still has to download the 700 kB of characters.

@kordianbruck
Collaborator

If you filter on the server using the find API endpoint, you will only get the characters that match the search query. We could also set up a 'lite' version of the characters that would only contain the name, for example. There are many possibilities!
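
As a rough sketch, assuming the API uses Express and Mongoose, such a 'lite' endpoint could look like this (model and route names are assumptions):

```js
// Sketch of a possible 'lite' endpoint that returns only _id and name,
// assuming an already-registered Mongoose Character model.
const express = require('express');
const mongoose = require('mongoose');

const router = express.Router();
const Character = mongoose.model('Character'); // assumed existing model name

router.get('/api/characters/lite', (req, res) => {
  // Project only the name field; _id is included by default.
  Character.find({}, 'name', (err, characters) => {
    if (err) return res.status(500).end();
    res.json(characters);
  });
});

module.exports = router;
```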

@yashha
Collaborator Author

yashha commented Mar 30, 2016

Ah now I understand :)
Can you give us some example requests?

@kordianbruck
Collaborator

They are in the docs! https://api.got.show/doc/#api-Characters-FindCharacters

@yashha
Collaborator Author

yashha commented Mar 30, 2016

I saw this already. It provides too little functionality to bring the size down for the user, especially on the list page.
We need offset, limit and sorting parameters to get the payload size down. So a good idea for us would be to set something up on our server side.
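
For example, assuming Express and Mongoose again, the kind of endpoint meant here could look roughly like this (parameter names are just a suggestion):

```js
// Sketch of an offset/limit/sort endpoint over an assumed Character model.
// Example: GET /api/characters/page?offset=40&limit=20&sort=name
const express = require('express');
const mongoose = require('mongoose');

const router = express.Router();
const Character = mongoose.model('Character'); // assumed existing model name

router.get('/api/characters/page', (req, res) => {
  const offset = parseInt(req.query.offset, 10) || 0;
  const limit = Math.min(parseInt(req.query.limit, 10) || 20, 100); // cap page size
  const sort = req.query.sort || 'name';

  Character.find({})
    .sort(sort)
    .skip(offset)
    .limit(limit)
    .exec((err, characters) => {
      if (err) return res.status(500).end();
      res.json(characters);
    });
});

module.exports = router;
```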

@sacdallago
Contributor

@yashha that is a good idea. The problem is not only the API. It's instantiating the connection, waiting for the DB connection and then getting the answer... This should be faster on the real thing anyway, as the API and webservice are hosted on the same machine, but the DB remains an issue (for now).
@kordianbruck's suggestion to return only _id + name is also good. You could fetch this data immediately when the page opens and write it to localStorage as a cache with a TTL. This, I can guarantee, will save you a lot of time.
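
A minimal sketch of that localStorage idea on the client, assuming the 'lite' data is fetched once per page load (key, TTL and endpoint URL are assumptions):

```js
// Sketch of a localStorage cache with a TTL for the character list.
const CACHE_KEY = 'characters-lite';
const TTL_MS = 24 * 60 * 60 * 1000; // one day

function getCharacters() {
  const cached = JSON.parse(localStorage.getItem(CACHE_KEY) || 'null');
  if (cached && Date.now() - cached.storedAt < TTL_MS) {
    return Promise.resolve(cached.data); // still fresh, skip the network entirely
  }
  return fetch('https://api.got.show/api/characters/') // assumed endpoint
    .then((res) => res.json())
    .then((data) => {
      localStorage.setItem(CACHE_KEY, JSON.stringify({ storedAt: Date.now(), data }));
      return data;
    });
}

// Call getCharacters() as soon as the page opens and render the list from the result.
```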

@kordianbruck
Collaborator

@sacdallago what are you talking about? The connection to the DB is established when you start the NodeJS server. The only problem here is that the team needs to load the characters asynchronously, bit by bit, instead of downloading a JSON of 1/2 MB.
I can implement the "light" version, Team F just needs to tell me what they prefer.

@gyachdav
Collaborator

gyachdav commented Apr 3, 2016

Is this still happening? @kordianbruck's solution seems reasonable. I never understood why we need to load so much data from the API only to show 20 names.

@yashha
Collaborator Author

yashha commented Apr 3, 2016

I think we have bigger problems to fix and enhancements to do.

@mammuth mammuth modified the milestone: Website Release Apr 10, 2016
@mammuth mammuth modified the milestones: Version 1.1, Website Release Apr 10, 2016