The eighth edition of The Fifth Elephant will be held on 25 and 26 July at the NIMHANS Convention Centre, Bangalore. A thousand data scientists, ML engineers, data engineers and analysts will gather to discuss:
- Model management, including data cleaning, instrumentation and productionizing data science.
- Bad data and case studies of failure in building data products.
- Identifying and handling fraud, and data security at scale.
- Applications of data science in agriculture, media and marketing, supply chain, geo-location, SaaS and e-commerce.
- Feature engineering and ML platforms.
- What it takes to create data-driven cultures in organizations of different scales.
1. Meet Peter Wang, co-founder of Anaconda Inc, and learn about why data privacy is the first step towards robust data management; the journey of building Anaconda; and Anaconda in enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their work with platform engineering where ground truths are the source of data.
3. Attend tutorials on Deep Learning with RedisAI, and on TransmogrifAI, Salesforce’s open source AutoML library.
4. Discuss interesting problems to solve with data science in agriculture, SaaS perspective on multi-tenancy in Machine Learning (with the Freshworks team), bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
Why you should attend:
- Network with peers and practitioners from the data ecosystem
- Share approaches to solving expensive problems such as training data cleanliness, model management and data versioning
- Demo your ideas in the demo session
- Join Birds of a Feather (BOF) sessions for productive discussions on focused topics, or start your own BOF session.
Full schedule published here: https://hasgeek.com/fifthelephant/2019/schedule
For more information about The Fifth Elephant or sponsorships, call +91-7676332020 or email email@example.com.
Spark on Kubernetes
Session type: Full talk of 40 mins
Typical data processing and machine learning workloads include heavy setups such as the Hadoop stack, Kafka, NoSQL databases, application APIs and so on. Traditionally, these workloads run on dedicated clusters, which adds overhead for the IT teams and developers who must manage multiple clusters. There is a pressing need for a unified solution that manages all of these workloads on a single control plane, and containerization with Kubernetes makes this achievable.
Apache Spark is an essential tool for data engineers and data scientists, offering a robust platform for applications ranging from large-scale data transformation to analytics and machine learning. Many of us are already convinced about containers and have adopted them to improve our workflows, realizing benefits such as packaged dependencies and reproducible artifacts. It therefore makes complete sense to run Spark alongside the rest of a solution that already runs on Kubernetes. Thanks to the Apache Spark and Kubernetes contributors who put in a great deal of effort to ship Apache Spark 2.3 with native Kubernetes support.
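With the native integration, submitting a job is a single `spark-submit` invocation against the cluster's API server. The following is a minimal sketch using the bundled SparkPi example; the API server address, port and container image name are placeholders you would fill in for your cluster:

```shell
# Submit the bundled SparkPi example to a Kubernetes cluster.
# Spark 2.3+ supports Kubernetes natively via the k8s:// master URL.
# <k8s-apiserver-host>, <port> and <spark-image> are placeholders.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

Note that `local://` here refers to a path inside the container image: the driver runs as a pod on the cluster and spawns executor pods itself, rather than relying on a standalone Spark cluster.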
Why it’s a big deal:
- A unified platform for entire, complicated data pipelines, which simplifies cluster management.
- Better utilization of resources.
- All the good things of Kubernetes are readily available to be used with Spark such as Pluggable Authorization and Logging.
- Better allocation of resources across multiple Spark applications, thanks to Kubernetes namespaces and resource quotas.
- Future opportunities: managing streaming workloads and leveraging service meshes like Istio.
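The namespace-plus-quota isolation mentioned above can be sketched with standard `kubectl` commands; the namespace and quota names below are hypothetical, and the limits are illustrative:

```shell
# Create a dedicated namespace for Spark jobs (name is hypothetical).
kubectl create namespace spark-jobs

# Cap the total resources that driver and executor pods in this
# namespace may request, so one Spark application cannot starve others.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: spark-quota
  namespace: spark-jobs
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
EOF
```

A submission is then pointed at the namespace with `--conf spark.kubernetes.namespace=spark-jobs`, and Kubernetes rejects pods that would exceed the quota.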
- How Spark works on Kubernetes: architecture and internals.
- Why it’s important for present-day applications.
- Use cases.
- Spark future roadmap for Kubernetes.
- Quick demo.
Basic knowledge of Kubernetes and Spark is assumed.
Shrashti is a Google Cloud certified Data Engineer currently associated with Publicis Sapient. She has worked on multiple engagements with clients in the automobile and telecom domains. In her current role, she is building a hyper-personalized, machine-learning-driven recommendation system for the automobile industry, where she is responsible for both real-time and batch data processing pipelines and has worked extensively with Kubernetes to manage the overall infrastructure.