Supercharge Application I/O Performance with SSD Caching
Submitted by Sumit Kumar (@sumitk) on Sunday, 15 June 2014
Section: Full talk
Technical level: Intermediate
Storage I/O performance plays a significant role in determining overall application response times and perceived end-user latency. How can you leverage solid state drives (SSDs) to boost OLTP application I/O performance (e.g., MySQL, MongoDB) in a holistic, non-disruptive and cost-effective manner, without throwing away your hard disks but keeping them for capacity? In this talk, I’ll share my insights on boosting application I/O performance by deploying an SSD as a caching device together with CacheBox’s server-side SSD caching software, CacheAdvance. Its next-generation ‘Application Acceleration’ technology is optimized to deliver much higher performance gains than generic block or file caching solutions available today.
Typically, the best application performance and end-user response times are achieved when the application’s entire active working set fits within the server’s available main memory (RAM) and is readily accessible when the application demands it. Satisfying most of an application’s I/O needs by keeping more of its working set in RAM is one of the top performance-tuning considerations for system administrators. However, achieving this is a real challenge as the number of applications rises, Big Data grows, and multiple applications with widely varying working-set sizes are consolidated on the same server. This drives up the total amount of RAM needed to cache the applications’ working sets.
Most enterprises today deploy backup and replication solutions as part of their data protection functions. Besides accessing vast amounts of data, backup and replication can pollute and inflate an application’s working-set footprint, evicting the application’s hot data from main memory. Subsequent application I/O accesses then incur a heavy performance penalty as the data is fetched back from secondary storage. If the application’s access pattern is random I/O, as in many OLTP workloads, then even a small miss rate in main memory can cause severe response-time latency.
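One common mitigation for this cache pollution is to have the backup process advise the kernel to drop its own pages as it streams data. A minimal sketch on Linux, using `posix_fadvise(POSIX_FADV_DONTNEED)` (the function and chunk size here are illustrative, not part of any particular backup product):

```python
import os

def stream_backup(src_path, dst_path, chunk_size=1 << 20):
    """Copy src to dst while telling the kernel to drop the copied pages
    from the page cache (Linux), so the backup pass does not evict the
    application's hot working set."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        copied = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            dst.write(data)
            copied += len(data)
            # Advise the kernel: we will not re-read these source pages.
            os.posix_fadvise(src.fileno(), 0, copied, os.POSIX_FADV_DONTNEED)
        # Flush dirty destination pages first, then drop them too
        # (dirty pages cannot be discarded until they are written back).
        dst.flush()
        os.fsync(dst.fileno())
        os.posix_fadvise(dst.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
```

This keeps the backup’s sequential scan from competing with the OLTP working set for page-cache space, though it does nothing for replication traffic generated inside the database engine itself.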
Using separate disks and spindles for different applications or application components is one way to reduce the latency of data fetches from the hard disk. Another commonly used option is to create a RAID set by bundling multiple disks together. But these options add management overhead and waste storage capacity through overprovisioning. Moreover, keeping application components on separate spindles adds little value for OLTP workloads with a high percentage of random I/O.
Another approach to application I/O performance is to use faster all-flash storage instead of much slower spinning hard disk drives. This is disruptive and expensive, as it requires all data to be migrated to flash. A cost-effective alternative is to use a small amount of flash as a cache device, which is far cheaper than growing the server’s main memory or moving to all-flash storage. A case in point is MongoDB ETL jobs, where most clusters are built from commodity hardware: it may be neither acceptable cost-wise nor feasible capacity-wise to hold the entire MongoDB data store on SSD storage attached to each node.
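The economics of a small SSD cache follow from simple average-latency arithmetic. A sketch, with illustrative latency figures (roughly 100 µs for an SSD random read and 8 ms for an HDD seek; these are assumptions for the example, not measurements):

```python
def effective_latency_us(hit_ratio, ssd_latency_us=100.0, hdd_latency_us=8000.0):
    """Average random-read latency with an SSD cache in front of an HDD.
    Hits are served from SSD; misses pay the full HDD seek cost.
    The default latency figures are illustrative assumptions."""
    return hit_ratio * ssd_latency_us + (1.0 - hit_ratio) * hdd_latency_us

# A cache large enough to absorb 90% of reads cuts average latency
# from 8000 us to 0.9 * 100 + 0.1 * 8000 = 890 us, roughly a 9x gain,
# while only a fraction of the data set needs to live on flash.
print(effective_latency_us(0.9))
```

The same arithmetic also shows why misses are so punishing: even a 10% miss rate leaves the HDD term dominating the average.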
Generic server-side SSD caching solutions available today address I/O performance cost-effectively but are limited in scope: their focus stops at the server or VM level, and they are not geared for application-level I/O optimizations. CacheBox’s CacheAdvance stands out among server-side SSD caching solutions because its application-specific acceleration technology delivers a much higher performance boost with less SSD capacity used as cache.
Unlike other solutions, CacheAdvance’s impact spans all components of a data center, not just storage.
- At the lowest component level, CacheAdvance can harness SSDs from any vendor and tune its caching algorithms to extract the best SSD performance.
- At the storage level, it delivers the benefit of flash performance without migrating the entire data set to flash storage.
- At the server level, its fine-grained approach accelerates only the business-critical applications, only selected VMs, or only those application I/Os that need acceleration, reducing the need for additional servers to scale up the performance of applications running on that server.
- At the application level, CacheAdvance provides per-application add-on modules tuned for application-specific I/O signatures. Beyond block-level caching, this brings the advantages of predictive caching and the flexibility to accelerate some or all components of an application for more precise and efficient I/O acceleration. For example, the CacheAdvance MySQL Application Acceleration Module (AAM) can detect MySQL components such as databases, tables, ibdata and log files, and accelerate one or more of them.
- At the network level, server-side SSD caching can significantly reduce network traffic (SAN, NAS), bringing additional gains to overall datacenter performance.
Thus, CacheAdvance provides a holistic, datacenter-level application performance boost by using server-side SSD resources extremely efficiently.
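To make the application-level idea concrete, an application-aware module can classify I/O by which database component a file belongs to and cache only the chosen components. The sketch below is purely illustrative: the file-name patterns and policy names are assumptions about how such a module might work, not a description of CacheAdvance’s actual implementation.

```python
import os

# Hypothetical classification of MySQL/InnoDB files by on-disk naming
# conventions (ibdata* system tablespace, ib_logfile* redo logs,
# *.ibd per-table data files).
def classify_mysql_file(path):
    name = os.path.basename(path)
    if name.startswith("ibdata"):
        return "system-tablespace"
    if name.startswith("ib_logfile"):
        return "redo-log"
    if name.endswith(".ibd"):
        return "table-data"
    return "other"

def should_cache(path, accelerated=("table-data", "system-tablespace")):
    """Admit a file's blocks to the SSD cache only if the administrator
    chose to accelerate its component type. In this sketch, sequential
    redo-log writes are deliberately left to the HDD."""
    return classify_mysql_file(path) in accelerated
```

Filtering at this granularity is what lets a small SSD hold the random-access hot set (table data) instead of being churned by sequential log traffic.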
I work at CacheBox India Pvt. Ltd. as a Principal Software Engineer and have deep expertise in system storage software, SSD caching, and tuning MySQL and MongoDB application performance in enterprise deployments. I have played a key role in the design and development of the patent-pending CacheAdvance software (Linux). My prior experience includes six years at Symantec, where I worked on the FileStore NAS appliance product.