Layers of Cache
Data travels over fiber at roughly 200,000 km/s, about two-thirds of the speed of light in vacuum.
So the initial SYN for the TCP handshake takes 68ms, and then the client requests an object, which takes another 68ms, for a total of 136ms before the first few bytes of the object start arriving at the client (the browser). Depending on the size of the object, many more round trips may be needed to get the rest of it down to the browser.
Goal I: Reduce the total round-trip time of 136ms as much as possible.
The browser cache is the fastest way we can deliver an object to customers, because the object is cached on the hard disk of the device they are using. The browser can get it in microseconds because the request doesn't need to go to the network at all. This is done by setting the max-age or Expires header on the objects; modern browsers will honour these headers and cache the objects when possible. For browsers that support HTML5, we can also use the manifest file – a text file where you specify which pages can be used for offline access. Simply include the manifest attribute in the html tag of the document you want cached.
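As a minimal sketch of the header-setting side (the helper name is my own, not from the text), a server could emit both the modern max-age directive and the legacy Expires header like this:

```python
from datetime import datetime, timedelta, timezone

def cache_headers(max_age_seconds):
    """Hypothetical helper: build response headers that let a browser cache an object."""
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age_seconds)
    return {
        # Cache-Control max-age is the header modern browsers honour.
        "Cache-Control": f"public, max-age={max_age_seconds}",
        # Expires is the older equivalent, kept for legacy clients.
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }
```

Any web framework's response object can carry these headers; the values are what matter, not the helper.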
This sounds great, as network latency is eliminated, but the catch is that the browser cache is limited and application developers have no control over its size. There is also competition for cache space: the end user uses the same browser to visit other apps/webpages, so the chances of a given page staying cached can be low.
So to get an object it has to come all the way from New York City, taking the full round-trip time of 68ms. If we bring a server in between to shorten the round trip – a caching/edge server sitting much closer to Seattle – the round trip drops to, say, 10ms, and the client can get a cached object in 20ms (one round trip for the handshake, one for the request). CloudFront maintains persistent connections between the edge node and the origin server, so the 68ms of SYN traffic to the origin is avoided. Even if the object is not cached in CloudFront, the edge node downloads it from the origin without a TCP handshake and the client then downloads it from the edge node: a total of 88ms, still less than 136ms. [CloudFront has 46 edge locations around the globe]
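The arithmetic above can be written out as a back-of-the-envelope model (all numbers in milliseconds, taken from the text: 68ms client↔origin round trip, 10ms client↔edge round trip):

```python
# Latency assumptions from the text (all in ms).
RTT_ORIGIN_MS = 68   # client <-> origin (Seattle <-> New York)
RTT_EDGE_MS = 10     # client <-> nearby edge node

# No CDN: one round trip for the TCP handshake, one for request/response.
direct_ms = RTT_ORIGIN_MS + RTT_ORIGIN_MS

# Edge hit: both round trips happen against the nearby edge node.
edge_hit_ms = RTT_EDGE_MS + RTT_EDGE_MS

# Edge miss: the two short client<->edge round trips, plus one fetch over the
# already-open persistent edge<->origin connection (no extra handshake there).
edge_miss_ms = RTT_EDGE_MS + RTT_EDGE_MS + RTT_ORIGIN_MS
```

Even the worst case (a miss at the edge) beats fetching directly from the origin.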
Customized content can also be cached, perhaps for a shorter period of time – not long-lived, but a few minutes or seconds. This helps scale the delivery of the object without going back to the origin server. You typically customize content on a site using either cookies or query-string parameters; CloudFront can be configured to forward both cookies and query strings back to the origin server, and those values can then be used at the edge servers to key the customized cached version of the page.
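A rough sketch of the idea, assuming only a whitelist of query parameters is forwarded and used in the cache key (the parameter names here are invented for the example, not CloudFront's configuration syntax):

```python
from urllib.parse import urlparse, parse_qsl

# Assumption for the example: only these query parameters customize the page,
# so only they should participate in the cache key.
FORWARDED_PARAMS = {"q", "page"}

def cache_key(url):
    """Build a normalized cache key from the path plus forwarded query params."""
    parsed = urlparse(url)
    kept = sorted((k, v) for k, v in parse_qsl(parsed.query)
                  if k in FORWARDED_PARAMS)
    return parsed.path + "?" + "&".join(f"{k}={v}" for k, v in kept)
```

Ignoring non-forwarded parameters (tracking tags, for instance) keeps the number of distinct cached variants small, which is exactly what makes the customized page scalable.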
We have to pay attention to the metrics of whatever caching tool is being used (hit ratio, miss ratio, eviction rate). If the cache is too small for the content, causing very high eviction rates, we have to increase the size of the instances or the number of instances in the pool. Also pay attention to the TTLs applied to cached objects: too many evictions or too many misses can mean objects are being cached for too long (a big TTL), and with a big TTL, after a new code deploy customers may not be seeing the most recent data.
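These counters are usually exposed as raw totals (memcached, for example, reports get_hits, get_misses, and evictions in its stats), so a small hypothetical helper to turn them into ratios might look like:

```python
def cache_health(hits, misses, evictions):
    """Hypothetical monitoring helper: summarize cache effectiveness from raw counters."""
    total = hits + misses
    hit_ratio = hits / total if total else 0.0
    return {
        "hit_ratio": hit_ratio,
        "miss_ratio": 1.0 - hit_ratio if total else 0.0,
        # Evictions per request; a persistently high value suggests the cache
        # is undersized for the working set or TTLs are too generous.
        "eviction_rate": evictions / total if total else 0.0,
    }
```

The exact thresholds that count as "too high" are workload-dependent; the point is to track the trend, not a magic number.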
What is eviction? The cache removes items to free memory for new items.
High misses and high evictions together usually indicate that the application is expecting data to be in the cache when it's not, and that would be a problem. If the application is receiving the data it's asking for, the cache is working correctly.
MySQL 5.6 now has support for Memcached, which is exciting because the two have traditionally been paired up for around 10 years now. What 5.6 does is let us run memcached inside of MySQL: the in-memory space is backed by the InnoDB engine on the host. So we get the best of both worlds – a high-performing in-memory key-value store, plus the strong persistence you get from InnoDB.
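As a rough sketch of the setup (paths vary by platform; consult the MySQL 5.6 documentation for the exact steps), the plugin is enabled from within the server:

```sql
-- Load the InnoDB/memcached mapping tables that ship with MySQL 5.6,
-- then start the memcached daemon plugin inside the server.
source /usr/share/mysql/innodb_memcached_config.sql;
INSTALL PLUGIN daemon_memcached SONAME "libmemcached.so";
```

After that, any standard memcached client can talk to the server on the usual memcached port, with the data persisted in InnoDB underneath.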
Reliability? Caching introduces a level of redundancy at various tiers, so if we have an issue at one of those tiers, the tier in front of it can still serve. If we have any anomalies at the backend, caching can save us from end-user-impacting events. The further back into the infrastructure a request travels, the more it costs: a DB query > an app query > a web-proxy query > a CDN query (costlier on the left)!
TorontoStar – Although CloudFront can cache based on cookies, that capability was not used to maximize cacheability. Query strings are accepted only when absolutely necessary – an example is the search page, where each individual query string is cached separately. They use 19 different rules, with different TTLs and different accepted parameters for different asset classes. We can also control caching based on time, setting different expiry rules depending on how old the content is: say, an article from 2007 can be cached longer than an article from 2013. Memcached is their preferred choice for storing sessions.
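The age-based expiry idea could be sketched as a simple tiering function (the tiers and TTL values here are invented for illustration, not Toronto Star's actual rules):

```python
def ttl_for_article(published_year, current_year=2013):
    """Hypothetical TTL tiers: older articles change less, so cache them longer."""
    age = current_year - published_year
    if age >= 5:
        return 30 * 24 * 3600   # archive content: a month
    if age >= 1:
        return 24 * 3600        # recent years: a day
    return 300                  # current news: five minutes
```

An edge rule or origin response header would then carry this TTL per asset, instead of one global expiry for the whole site.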
To know more, read the link: