In the vast, multi-layered subject area of web performance, server response time is an important metric. From a CMS standpoint, however, it is one of the most significant. Best practice recommends a time-to-first-byte (TTFB) of 200ms or lower. For medium- to high-traffic sites, server load is another vital statistic. In this talk we’ll look at micro-caching as a performance strategy to handle loads of 10 to 100 requests per second while still maintaining sub-200ms TTFBs.
We’ll go over the caching holy grail – keeping caches warm, busting them when they go stale, and automating that cycle. We will also cover the nuances and edge cases that must be handled in a robust, production-ready system.
This will be a subset of a talk I originally delivered at Dot All 2019 in Montreal. I will adapt the content to be CMS-independent and focus on implementing micro-caching using Nginx.
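As a taste of the approach, here is a minimal sketch of an Nginx micro-caching configuration. All names are illustrative (the `microcache` zone, the cache path, the upstream address, and the `X-Cache-Refresh` bust header are assumptions, not part of the talk itself):

```nginx
# Define a small shared cache zone; path and sizes are illustrative.
proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=microcache:10m
                 max_size=100m inactive=60s;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # upstream CMS (assumed address)

        proxy_cache microcache;
        proxy_cache_key $scheme$request_method$host$request_uri;

        # Micro-caching: cache successful responses for just one second.
        proxy_cache_valid 200 1s;

        # Collapse concurrent misses into a single upstream request, and
        # serve a stale entry while the cache refreshes in the background.
        proxy_cache_lock on;
        proxy_cache_use_stale updating error timeout;
        proxy_cache_background_update on;

        # Hypothetical bust mechanism: a request carrying this header
        # skips the cache and re-populates the entry.
        proxy_cache_bypass $http_x_cache_refresh;

        # Expose HIT/MISS/STALE status for debugging.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The key idea is the one-second validity window: even under 100 requests per second, each URL reaches the upstream CMS at most about once per second, while everyone else is served from the cache.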