Spark on Kubernetes
Typical data processing and machine learning workloads involve heavy components such as the Hadoop stack, Kafka, NoSQL databases, application APIs, and so on. Traditionally, these workloads run on dedicated setups, which adds overhead for IT teams and developers who must manage multiple clusters. What is needed is a unified solution that manages all of these workloads on a single control plane, and with containerization and Kubernetes we can achieve exactly that.
Apache Spark is an essential tool for data engineers and data scientists, offering a robust platform for applications ranging from large-scale data transformation to analytics and machine learning. Having already adopted containers to improve our workflows, with benefits such as dependency packaging and reproducible artifacts, it makes sense to run Spark alongside the rest of our stack on Kubernetes. Thanks to the efforts of the Apache Spark and Kubernetes contributors, Apache Spark 2.3 ships with native Kubernetes support.
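As a sketch of what native support looks like in practice: since Spark 2.3, spark-submit can talk directly to the Kubernetes API server, which schedules the Spark driver and executors as pods. The API server address, container image, and jar path below are illustrative placeholders, not values from this talk:

```shell
# Submit the bundled SparkPi example to a Kubernetes cluster (Spark 2.3+).
# <k8s-apiserver> and <your-spark-image> are placeholders you must fill in.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

Kubernetes launches the driver pod, the driver then requests its executor pods, and the executors are torn down when the application completes.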
Why is it a big deal?
- Unified platform for entire complicated data pipelines: simplifies cluster management.
- Better utilization of resources.
- Kubernetes features such as pluggable authorization and logging are readily available to Spark.
- Better allocation of resources across multiple Spark applications, thanks to Kubernetes namespaces and resource quotas.
- Future opportunities: managing streaming workloads and leveraging service meshes like Istio.
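The namespace and resource-quota point above can be sketched with plain kubectl; the namespace name and limits here are illustrative assumptions, not values from the talk:

```shell
# Create a dedicated namespace for Spark jobs (name and limits are illustrative).
kubectl create namespace spark-jobs

# Cap the total CPU and memory that pods in this namespace may consume,
# so one Spark application cannot starve the others.
kubectl create quota spark-quota \
  --hard=requests.cpu=8,requests.memory=16Gi,limits.cpu=16,limits.memory=32Gi \
  -n spark-jobs

# A Spark application is then pointed at the namespace via configuration:
#   --conf spark.kubernetes.namespace=spark-jobs
```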
What this talk covers:
- How Spark works on Kubernetes: architecture and internals.
- Why it is important for present-day applications.
- Use cases.
- Spark's future roadmap for Kubernetes.
- Quick demo.
Prerequisites: basic knowledge of Kubernetes and Spark.
Shrashti is a Google Cloud certified Data Engineer currently associated with Publicis Sapient. She has worked on multiple client engagements in the automobile and telecom domains. In her current role, she is building a hyper-personalized, machine-learning-driven recommendation system for the automobile industry, where she is responsible for both real-time and batch data processing pipelines and works extensively with Kubernetes to manage the overall infrastructure.