Too aggressive cache control #37
Yes, I have configured Browser cache and Edge cache to 8 days, at the highest caching level. Let me know if you want to decrease (and by how much), and, of course, we can have separate caching instructions per URL pattern.
I guess it mostly comes down to how often someone writes something for the site. But if new information is added and people are linked to the site, I don't think they will realize they have to refresh the page to see it. Blog posts are linked directly to a new page, so for those it won't matter. What do you think about which settings to use?
What we need is to proactively purge cached content every time we merge something into master. Can we do that instead?
We already do that: the job on Travis purges the cache when something is committed to master. However, that is the cache at Cloudflare. I'm talking about the local cache in the browser. The only way to purge that is to use a different URL, as is done when the CSS and JavaScript files change. Other than that, the only option would be to set a lower expiration time. This is of course only a problem for returning visitors.
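For reference, that kind of purge can be a single call to Cloudflare's `purge_cache` API. A hedged sketch of what such a Travis step might look like; the zone id and token are placeholders, and the command is echoed rather than executed so the sketch is safe to run as-is:

```shell
# Hypothetical purge step for a CI job. The endpoint is Cloudflare's
# documented purge_cache API; the credentials below are placeholders
# you would replace with real values (e.g. from Travis secrets).
CF_ZONE_ID="${CF_ZONE_ID:-example-zone-id}"
CF_API_TOKEN="${CF_API_TOKEN:-example-api-token}"

purge_url="https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/purge_cache"

# echo'd rather than executed so nothing is actually sent; remove the
# leading echo to perform the real purge.
echo curl -s -X POST "$purge_url" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```

Note this only clears the Cloudflare edge cache, which is exactly the limitation discussed above: it does nothing for copies already sitting in a visitor's browser.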
Can we just disable the browser cache? I thought having to Ctrl-Shift-R was dead.
Okay, browser cache disabled...
Actually, I think browser cache should be enabled for the /assets and /images folders. For /images it could probably be two hours, just so images load faster when someone is browsing between pages. For /assets it could be much longer, since we have another way of purging those; it could be a week like it is now, or even a month.
Then we can add a simple rule to cache these paths for X days / weeks / months. Just let me know whatever you decide :-)
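In `Cache-Control` terms, the suggestions above translate to `max-age` values in seconds. A quick sanity check of the arithmetic (the header lines are illustrative, not copied from the site's actual configuration):

```shell
# max-age is expressed in seconds: 2 hours for /images, 1 week or
# ~1 month (30 days) for /assets, per the discussion above.
two_hours=$((2 * 60 * 60))        # 7200
one_week=$((7 * 24 * 60 * 60))    # 604800
one_month=$((30 * 24 * 60 * 60))  # 2592000
echo "Cache-Control: max-age=${two_hours}   # /images"
echo "Cache-Control: max-age=${one_week}    # /assets (one week)"
echo "Cache-Control: max-age=${one_month}   # /assets (one month)"
```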
I would say: 1 day - /images
2 months? I am completely against telling the browser what to do with its cache, but if we are going to do it, let's do it in a sane manner and not have more than 24 hours. 2 months for
The caching is done by URL, and the code appends an id to the files under the assets directory, like this:

```html
<link rel="stylesheet" href="/assets/plugins/bootstrap/css/bootstrap.min.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/css/napalm.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/plugins/animate.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/plugins/line-icons/line-icons.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/plugins/font-awesome/css/font-awesome.min.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/plugins/owl-carousel/owl-carousel/owl.carousel.css?cache_id=201707131435">
<link rel="stylesheet" href="/assets/plugins/layer-slider/layerslider/css/layerslider.css?cache_id=201707131435">
```

Currently the id comes from the https://github.com/napalm-automation/napalm-automation.github.io/blob/master/_data/cache.yml file, so it would require that the id is changed. This could be handled automatically in the future with jekyll-assets, but currently GitHub Pages doesn't support that plugin. So even if the cache were set to 2 months, it would be easy to purge if there was a need. See Google's PageSpeed Insights documentation: https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
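The `cache_id` in those links looks like a `YYYYMMDDHHMM` timestamp; that format is an inference from the example values, and the real site may build the id differently. Generating one and appending it to a URL is a one-liner:

```shell
# Build a cache-busting URL in the same shape as the links above.
# The YYYYMMDDHHMM format is assumed from the example cache_id values.
cache_id=$(date -u +%Y%m%d%H%M)
echo "/assets/css/napalm.css?cache_id=${cache_id}"
```

Because the browser treats each distinct query string as a distinct URL, bumping the id forces a fresh fetch even when a long `max-age` is set.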
I guess this comes down to taste or politics, but I only see it as a way of speeding up the user experience.
Ok, makes sense. That means we can set the expiration time by URL on the CDN side as well; we don't have to limit it to the local browser cache. We could even have Travis update that value if there is a change inside any of those folders. This should work:
Not quite sure what you mean by this? :)
Would this involve updating the cache.yml file and pushing to master / sending a PR, or are you thinking about something else?
I meant this: "I have configured Browser cache and Edge cache to 8 days". If we can do versioning and purge content, we can have higher times; we don't have to limit ourselves to the local cache of the browser.
Yeah, it could be a simple shell script that calculates a new timestamp and pushes the change automatically. No need to create a PR. We just have to make sure we don't start an endless loop of "push cache_id", "trigger CI", "push cache_id", "trigger CI"...
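A minimal sketch of that script, under stated assumptions: the file name comes from the thread, but the commit flow and the use of `[ci skip]` to break the push/build loop are suggestions, not the site's actual setup. It is demonstrated against a throwaway repo so it runs anywhere:

```shell
#!/bin/sh
# Sketch of the proposed cache-bump automation (assumed flow).
set -e

# Scaffolding so the sketch is self-contained: a disposable git repo
# with a placeholder identity. A real CI job would already be inside
# the checked-out site repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
mkdir -p _data

# The actual step: write a fresh YYYYMMDDHHMM id into _data/cache.yml
# and commit it. "[ci skip]" in the message tells Travis not to build
# this commit, which avoids the endless push/build loop noted above.
new_id=$(date -u +%Y%m%d%H%M)
printf 'cache_id: %s\n' "$new_id" > _data/cache.yml
git add _data/cache.yml
git commit -q -m "Bump cache_id to ${new_id} [ci skip]"
git log -1 --pretty=%s
```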
So should we just raise the max age for /images and /assets again and leave the rest of the site as is? Then we can handle that shell script as another issue.
LGTM |
The caching on the site is set quite high; it looks like all content is stored in the browser for over a week. I don't know what would be optimal. Currently the pages need a hard reload for users to see new content.
I think at least for the root https://napalm-automation.net and https://napalm-automation.net/news it could be set a lot lower. Users shouldn't need to use Command (or Control) + R to see new posts. For things under /assets it could be set a lot higher, especially for JavaScript and CSS, as there's some cache busting when needed for those (https://github.com/napalm-automation/napalm-automation.github.io/blob/master/_data/cache.yml)
What options do we have to set different expirations for different sections of the site?