The Fifth Elephant 2014

A conference on big data and analytics

In 2014, infrastructure components such as Hadoop, Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus of this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.

Talks will cover various forms of analytics, including real-time and opportunity analytics, as well as the technologies and models used for analyzing data.

Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions and retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully and assistance provided to ensure that the material is communicated to the audience in the most precise and effective manner.

Tickets: http://fifthel.doattend.com

Website: https://fifthelephant.in/2014

For queries about proposals / submissions, write to info@hasgeek.com

Theme

  1. Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.

  2. Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcache, Cassandra, Redis, etc.).

  3. Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.

  4. Real-time analytics

  5. Opportunity analytics

  6. Big data and security

  7. Big data and internet of things

  8. Data Usage and BI (Business Intelligence) in different sectors.

Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be on the technologies per se, but on how these have been used and implemented in various sectors, enterprises and contexts.

Hosted by

The Fifth Elephant - known as one of the best data science and machine learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Sumit Kumar

@sumitk

Supercharge Application I/O Performance with SSD caching

Submitted Jun 15, 2014

Storage I/O performance plays a significant role in determining overall application end-user response times and perceived user latency. How can you leverage solid state drives (SSDs) to boost OLTP application I/O performance (e.g., MySQL, MongoDB) in a holistic, non-disruptive and cost-effective manner, without throwing away your hard disk but utilizing it for capacity? Through this talk, I’ll share my insights on boosting application I/O performance by deploying an SSD as a caching device along with CacheBox’s server-side SSD caching software, CacheAdvance. Its unique, next-generation ‘Application Acceleration’ technology is optimized to give much higher performance gains than any generic block or file caching solution available today.

Outline

Typically, the best application performance and end-user response times are achieved when the application’s entire active working data set fits within the server’s available main memory (RAM) and is readily accessible when the application demands it. The ability to satisfy most of the application’s I/O needs by keeping more of the working set in RAM is one of the top performance tuning considerations for system administrators. However, achieving this goal is a real challenge as the number of applications rises, data volumes grow with Big Data, and applications with widely varying working set sizes are consolidated on the same server. This drives up the total amount of RAM needed to cache the application working sets.
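
To make the working-set argument concrete, here is a minimal sketch (assuming MySQL as the application and the pymysql driver; the host and credentials are placeholders) that samples the standard InnoDB buffer-pool counters to estimate how much of the active working set is actually being served from RAM:

```python
# Minimal sketch: sample InnoDB buffer-pool counters to estimate how well the
# working set fits in RAM. Assumes the pymysql driver; host/user/password are
# placeholders to adjust for your environment.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
    status = {name: int(value) for name, value in cur.fetchall()}
conn.close()

logical_reads = status["Innodb_buffer_pool_read_requests"]  # all read requests
disk_reads = status["Innodb_buffer_pool_reads"]             # requests that missed RAM
hit_ratio = 1.0 - disk_reads / max(logical_reads, 1)
print(f"Buffer pool hit ratio: {hit_ratio:.4%}")
# A ratio well below ~99.9% on an OLTP server usually means the active working
# set no longer fits in the buffer pool (RAM).
```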

Most enterprises today deploy backup and replication solution as part of their data protection storage functions. Besides accessing vast amounts of data, backup and replication can pollute and inflate application’s working set size requirements resulting in thrashing out of application’s working data from main memory. Subsequent application I/O accesses incur heavy performance penalty as the data is fetched from secondary storage. If the application data access pattern happens to be random I/O, such as many OLTP workloads, then even a small fraction of data miss in main memory could result in severe application response time latency issues.
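
The penalty of even a small miss rate can be shown with a back-of-envelope calculation; the latency figures below are assumed ballpark values (RAM ~100 ns, SSD ~100 µs, HDD ~10 ms), not measurements from this talk:

```python
# Back-of-envelope sketch: even a small in-memory miss rate dominates average
# latency once misses go to a spinning disk. Latencies are illustrative
# assumptions: RAM ~100 ns, SSD ~100 us, HDD ~10 ms.
RAM_NS, SSD_NS, HDD_NS = 100, 100_000, 10_000_000

def avg_latency_ns(miss_rate, miss_device_ns):
    """Average per-access latency given a main-memory miss rate."""
    return (1 - miss_rate) * RAM_NS + miss_rate * miss_device_ns

for miss_rate in (0.001, 0.01, 0.05):
    hdd = avg_latency_ns(miss_rate, HDD_NS)
    ssd = avg_latency_ns(miss_rate, SSD_NS)
    print(f"miss rate {miss_rate:>5.1%}: HDD-backed {hdd/1000:8.1f} us, "
          f"SSD-cached {ssd/1000:6.1f} us")
```

With these assumptions, a 1% miss rate served from an HDD pushes the average access to roughly 100 µs, while serving the same misses from an SSD cache keeps it near 1 µs.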

Using separate disks and spindles for different applications or application components is one way to reduce the latency impact of data fetches from the hard disk. Another commonly used option is to create a RAID array by bundling multiple disks together. But these options add extra management overhead and waste storage capacity through overprovisioning. Keeping application components on separate disk spindles also adds little value for OLTP workloads with a high percentage of random I/O.

Another approach to addressing application I/O performance is to use faster all-flash storage instead of much slower spinning hard disk drives. This is disruptive and expensive, as it requires all data to be migrated to all-flash storage. A cost-effective alternative is to use a small capacity of flash as a cache device, which is much less expensive than growing the server’s main memory or moving to all-flash storage. A case in point is MongoDB ETL jobs, where most clusters are made up of commodity hardware: it may not be acceptable cost-wise, or feasible capacity-wise, to hold the entire Mongo data store within SSD storage attached to each node.
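
A purely illustrative sketch (not the vendor’s algorithm) of why a small flash cache works: simulating an LRU cache over a skewed, Zipf-like block access stream shows that a cache holding only a few percent of the data set can absorb the bulk of random OLTP reads. The sizes and skew below are assumptions chosen for illustration.

```python
# Illustrative sketch: a small LRU cache over a skewed block access stream.
import random
from collections import OrderedDict

random.seed(42)
TOTAL_BLOCKS = 1_000_000     # blocks on the hard disk
CACHE_BLOCKS = 50_000        # SSD cache holds 5% of the data set
ACCESSES = 200_000

def next_block():
    """Skewed access pattern: 90% of reads hit the hottest 2% of blocks."""
    if random.random() < 0.9:
        return random.randrange(TOTAL_BLOCKS // 50)
    return random.randrange(TOTAL_BLOCKS)

cache, hits = OrderedDict(), 0
for _ in range(ACCESSES):
    block = next_block()
    if block in cache:
        hits += 1
        cache.move_to_end(block)          # refresh LRU position
    else:
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)     # evict the least-recently-used block

print(f"Cache hit rate: {hits / ACCESSES:.1%} "
      f"with {CACHE_BLOCKS / TOTAL_BLOCKS:.0%} of the data cached")
```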

Generic server-side SSD caching solutions available today address I/O performance cost-effectively, but their scope is limited to storage at the server or VM level; they are not geared for application-level I/O optimizations. CacheBox’s CacheAdvance stands out among server-side SSD caching solutions because it uses application-specific acceleration technology and provides a much higher performance boost with a smaller amount of SSD capacity used as cache.

Unlike other solutions, CacheAdvance’s impact spans all components of a data center, not just storage. At the lowest level, CacheAdvance can harness SSDs from any vendor and optimize its caching algorithms to bring out the best SSD performance. At the storage level, it gives the benefit of flash performance without migrating the entire data set to flash storage. At the server level, its fine-grained approach helps to accelerate precisely only the business-critical applications, only selected VMs on a server, or only those application I/Os that need to be accelerated, reducing the need for additional servers to scale up the performance of applications running on that server.

At the application level, CacheAdvance provides per-application add-on modules that are tuned for application-specific I/O signatures. Beyond block-level caching, this brings the advantages of predictive caching and the flexibility to choose some or all components at the application level for more precise and efficient I/O acceleration. For example, the CacheAdvance MySQL Application Acceleration Module (AAM) can detect all the MySQL components – databases, tables, ibdata, log files – and allow acceleration of one or more of these components. At the network level, server-side SSD caching can significantly reduce network traffic (SAN, NAS), bringing additional gains in overall datacenter performance. Thus, CacheAdvance provides a holistic, datacenter-level application performance boost by utilizing server-side SSD resources extremely efficiently.
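
As a hypothetical illustration of the component detection described above (this is not the CacheAdvance API), the sketch below walks a MySQL data directory and classifies files by component using standard MySQL file naming – the kind of metadata an application-aware cache would need before selectively accelerating, say, only hot tables and redo logs. The data directory path is an assumed default.

```python
# Hypothetical illustration, not the product's code: classify MySQL data files
# by component so a caching policy could target specific components.
from pathlib import Path

DATADIR = Path("/var/lib/mysql")   # assumed default MySQL data directory

def classify(path: Path) -> str:
    name = path.name
    if name.startswith("ibdata"):
        return "system tablespace"
    if name.startswith("ib_logfile"):
        return "redo log"
    if name.endswith(".ibd"):
        return f"table data ({path.parent.name}.{path.stem})"
    return "other"

# Group files so a policy could, for example, cache only table data and redo logs.
for f in sorted(DATADIR.rglob("*")):
    if f.is_file():
        print(f"{classify(f):<40} {f}")
```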

Speaker bio

I work at Cachebox India Pvt. Limited as a Principal Software Engineer and have deep expertise in system storage software, SSD caching, and tuning MySQL and MongoDB application performance in enterprise deployments. I have played a key role in the design and development of the patent-pending CacheAdvance software (Linux). My prior experience includes 6 years at Symantec, where I worked on the NAS appliance product FileStore.
