Common Availability/Scalability Mistakes and How to Avoid Them
An overview of the common impediments to true application availability and uptime, and how to fix them.
It's very common to develop a false sense of security that you have the correct infrastructure and architecture. When the crunch comes, whether from viral traffic or an ad campaign that brings in a surge of visitors, no amount of additional hardware or resources can help you tide over it. In this session, attendees will learn about the common errors and issues people overlook when designing their hardware deployments, software stacks, and recovery (RTO/RPO) objectives, and basically anything else that stops a site from achieving 100% application/service availability.
Mistakes include incorrect sizing decisions; complicated data flows (yes, shared storage included) that end up choking the network; incorrect capacity planning; non-redundant deployments with single points of failure (SPOFs); and failure to evaluate recovery point objectives, resulting in sites being down for hours. Attendees will take away best practices for avoiding these common issues and planning an effective deployment that takes care of concurrency, caching, and infinite connections :)
I'm a firm believer that the simplest and most efficient solutions come from common sense and open source; the two complement each other. I have helped sites grow from a single box to hundreds. I am also a Co-Founder and CTO of E2ENetworks.