How to build a blazingly fast distributed computing framework like Apache Spark in-house?
At ClustrData we build extremely large-scale analytics solutions for highly cost-sensitive end users. Cost efficiency is of utmost importance to us, and ease of use is the ultimate goal: whatever we build needs to be super-efficient in terms of cost and performance. Keeping this design philosophy and cost sensitivity in mind, we began exploring various processing frameworks for our workloads. We realised that current state-of-the-art technologies like Spark do solve our processing requirements but do not fit our cost limitations. In short, if we have
X amount of data and need to run a Spark compute cluster with 10 instances continuously for 6 hours to complete the computation, what if our cost limitations only allow us to run 5 instances for 3 hours? How can we build a framework that can do that? We need to think outside the box to build something like that, and we would like to share the story of our journey and learnings. And yes, it is possible.
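To make the cost gap in the example above concrete, here is a back-of-the-envelope sketch in Python. The instance_hours helper and the resulting 4x figure are purely our illustration of the constraint (compute billed per instance-hour), not benchmark results from the talk.

```python
def instance_hours(instances: int, hours: float) -> float:
    """Total compute consumed, in instance-hours (the unit most clouds bill by)."""
    return instances * hours

# The Spark baseline from the example: 10 instances for 6 hours.
spark_compute = instance_hours(instances=10, hours=6)   # 60 instance-hours

# The cost budget from the example: 5 instances for 3 hours.
budget = instance_hours(instances=5, hours=3)           # 15 instance-hours

# The framework must therefore finish the same job with 4x less compute.
print(spark_compute / budget)  # 4.0
```

In other words, the constraint is not a modest tuning target but a 4x reduction in total compute, which is why incremental Spark tuning was not enough.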
In this talk we will address the questions below, followed by a demo of the system we built.
What is the business motivation to build a Spark-like (or better) distributed processing framework in-house?
Why will distributed frameworks like Spark not work for us in the long run, and why do we need something else?
What are the basic design layers, data structures and algorithms required to build such a system?
What are the benchmark results, and how does it work better than Spark for us?
A demo run of the framework.
Upendra Singh: Full-Stack Data Scientist, 11 years of experience in distributed algorithm development, distributed computing and ML.
Lallit Parsai: Data Engineer-2, 6 years of experience in Data Engineering and Distributed Systems.