The eighth edition of The Fifth Elephant will be held on 25 and 26 July at the NIMHANS Convention Centre in Bangalore. A thousand data scientists, ML engineers, data engineers and analysts will gather to discuss:
- Model management, including data cleaning, instrumentation and productionizing data science.
- Bad data and case studies of failure in building data products.
- Identifying and handling fraud, and data security at scale.
- Applications of data science in agriculture, media and marketing, supply chain, geo-location, SaaS and e-commerce.
- Feature engineering and ML platforms.
- What it takes to create data-driven cultures in organizations of different scales.
1. Meet Peter Wang, co-founder of Anaconda Inc, and learn why data privacy is the first step towards robust data management; about the journey of building Anaconda; and about Anaconda in the enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their work with platform engineering where ground truths are the source of data.
3. Attend tutorials on deep learning with RedisAI, and on TransmogrifAI, Salesforce’s open source AutoML library.
4. Discuss interesting problems to solve with data science in agriculture, SaaS perspective on multi-tenancy in Machine Learning (with the Freshworks team), bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
Why you should attend:
- Network with peers and practitioners from the data ecosystem
- Share approaches to solving expensive problems such as cleaning training data, model management and data versioning
- Demo your ideas in the demo session
- Join Birds of a Feather (BOF) sessions for productive discussions on focused topics, or start your own BOF session.
Full schedule published here: https://hasgeek.com/fifthelephant/2019/schedule
For more information about The Fifth Elephant, sponsorships, or any other information call +91-7676332020 or email firstname.lastname@example.org
Building Robust, Reliable Data Pipelines
Session type: Short talk of 20 mins
This talk shares our learnings and the best practices we have built over years of working with massive volumes of data and an ever-changing schema.
What we are not going to discuss are the specific technology choices we made, how we scaled our system 10x year on year, or how we halved the latency of our data processing.
Zapr has profiled millions of users for TV consumption, and along the way we had to build our data processing pipeline from scratch. Initially we did it all the wrong way, adding fields ad hoc over time, until it became impossible to manage or keep track of the fields present in the data. The core concept of this talk is how to model the data flowing through pipelines, and the advantages this gives from both a business and a technical perspective.
This talk should help anyone new to building data processing pipelines in their organization to be future-proof and wary of the pitfalls of evolving data schemas.
Folks who already do this and have built expertise around it will be able to relate, and will get another perspective on how to manage the data flowing through their pipelines.
The flow would look like this:
- The need for a message bus when building a data processing pipeline.
- For the events published to the message bus, the need for a contract for data control (with examples of how we messed up and what we learnt from it):
  - what a contract is, in detail
  - how it can be implemented
  - starting with hierarchical modeling of data and the relations between objects
  - the tools out there for storing these complex relationships between entities
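As an illustration of the kind of data contract described above, here is a minimal hand-rolled sketch in Python. The field names and the nested `device` sub-record are hypothetical examples, not Zapr's actual schema, and a production pipeline would typically express such a contract with a schema system like Avro or Protobuf rather than ad-hoc validation:

```python
# A minimal "data contract": every event entering the message bus must
# conform to a declared schema instead of growing fields ad hoc.
# Field names here are illustrative, not an actual production schema.

EVENT_SCHEMA = {
    "event_id":  {"type": str, "required": True},
    "timestamp": {"type": int, "required": True},
    "channel":   {"type": str, "required": False, "default": "unknown"},
    # Hierarchical modeling: a nested sub-record with its own contract.
    "device": {
        "type": dict,
        "required": False,
        "default": {},
        "schema": {
            "os":    {"type": str, "required": False, "default": "unknown"},
            "model": {"type": str, "required": False, "default": "unknown"},
        },
    },
}

def validate(record, schema=EVENT_SCHEMA):
    """Return a normalized record, or raise if the contract is violated."""
    out = {}
    for name, spec in schema.items():
        if name not in record:
            if spec["required"]:
                raise ValueError(f"missing required field: {name}")
            out[name] = spec.get("default")
            continue
        value = record[name]
        if not isinstance(value, spec["type"]):
            raise TypeError(f"field {name}: expected {spec['type'].__name__}")
        # Recurse into nested sub-records so the contract covers hierarchy.
        out[name] = validate(value, spec["schema"]) if "schema" in spec else value
    return out
```

With a contract like this sitting in front of the message bus, a producer can no longer silently introduce an untracked field: anything outside the schema is either rejected or normalized before downstream consumers see it.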
Discuss the gains from implementing contract control for all data flowing through the pipeline:
- From a business perspective: improving business logic, and joining with other data sets.
- From a technical perspective:
  - schema extensibility of fields in the data
  - predictability of development
  - back-dated processing: backward and forward compatibility
  - the ability to break the pipeline down by responsibility, so teams can work on different components
- Implementing the above for multi-step data processing (enrichment):
  * cost
  * data cleaning
  * data consistency
  * a linear pipeline
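To make the backward/forward-compatibility point concrete, here is a small Python sketch (schema and field names are illustrative) of the rule that schema systems such as Avro enforce: fields added in a new schema version must carry defaults, so a new reader can still process old data (backward compatibility), while an old reader simply ignores fields it does not know about (forward compatibility):

```python
# Illustrative sketch of schema-evolution compatibility rules.
# v1 and v2 of a hypothetical event schema; v2 adds "region" with a default.
# A value of None marks a required field with no default.

SCHEMA_V1 = {"event_id": None, "timestamp": None}
SCHEMA_V2 = {"event_id": None, "timestamp": None, "region": "IN"}

def read(record, schema):
    """Decode a record under a given schema version.

    - Missing fields that have a default are filled in (backward
      compatibility: a v2 reader can process old v1 data).
    - Unknown fields are ignored (forward compatibility: a v1 reader
      can process new v2 data without breaking).
    """
    out = {}
    for field, default in schema.items():
        if field in record:
            out[field] = record[field]
        elif default is not None:
            out[field] = default
        else:
            raise ValueError(f"missing required field: {field}")
    return out

old_record = {"event_id": "e1", "timestamp": 1}
new_record = {"event_id": "e2", "timestamp": 2, "region": "US"}

# v2 reader, v1 data: backward compatible, the default fills the gap.
assert read(old_record, SCHEMA_V2)["region"] == "IN"
# v1 reader, v2 data: forward compatible, the extra field is dropped.
assert "region" not in read(new_record, SCHEMA_V1)
```

This is what makes back-dated reprocessing cheap: the same pipeline code can be replayed over months of historical events, with each record decoded against the current contract rather than whatever ad-hoc fields existed when it was written.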
I work as a Tech Architect at Zapr, working closely with data engineering teams, where I specifically drive initiatives to improve the quality of our data. In my spare time I like to read about how different organizations are solving new types of problems, listen to podcasts, and watch football.