The eighth edition of The Fifth Elephant will be held on 25 and 26 July at the NIMHANS Convention Centre in Bangalore. A thousand data scientists, ML engineers, data engineers and analysts will gather to discuss:
- Model management, including data cleaning, instrumentation and productionizing data science.
- Bad data and case studies of failure in building data products.
- Identifying and handling fraud, and data security at scale.
- Applications of data science in agriculture, media and marketing, supply chain, geo-location, SaaS and e-commerce.
- Feature engineering and ML platforms.
- What it takes to create data-driven cultures in organizations of different scales.
1. Meet Peter Wang, co-founder of Anaconda Inc, and learn why data privacy is the first step towards robust data management; the journey of building Anaconda; and Anaconda in the enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their work with platform engineering where ground truths are the source of data.
3. Attend tutorials on deep learning with RedisAI, and on TransmogrifAI, Salesforce’s open source AutoML library.
4. Discuss interesting problems to solve with data science in agriculture; a SaaS perspective on multi-tenancy in machine learning (with the Freshworks team); and bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
Why should you attend?
- Network with peers and practitioners from the data ecosystem
- Share approaches to solving expensive problems such as cleaning training data, model management and data versioning
- Demo your ideas in the demo session
- Join Birds of a Feather (BOF) sessions for productive discussions on focused topics, or start your own BOF session.
Full schedule published here: https://hasgeek.com/fifthelephant/2019/schedule
For more information about The Fifth Elephant or sponsorships, call +91-7676332020 or email email@example.com
The final stage of grief (about bad data) is acceptance
Session type: Full talk of 40 mins
Over the course of my career I’ve gone through the many stages of grief: I’ve become angry at the poor quality of my data, I’ve attempted to bargain with engineering, PMs and others for better data, and I’ve become depressed over the issue. Now I’ve reached the final stage: I accept that my data is bad. Given that my data is bad, I attempt to model its badness, and use that model to correct for the biases introduced.
In this talk I’ll discuss how I approach bad data; I accept that I cannot fix it and instead try to model where it came from. This usually involves getting a more detailed grasp of the data generating process and writing down a formal model.
In many cases this enables me to use the data model to correct and enhance my predictive model, as well as provide useful measurements and insights for improving and repairing the data collection process.
This talk is about bad data, and how to deal with it. It is NOT about improving data collection, correcting broken data, monitoring, etc.
To start with I’ll discuss the problem of data that is collected incorrectly, and focus on a couple of examples:
- Data that is randomly missing, perhaps due to a malfunctioning tracking pixel.
- Data that is mislabelled, perhaps due to data collection partners who use slightly different processes to collect it.
I’ll discuss particular fixes for these, both to correct for biases introduced by incorrect data, and to understand how bad the data collection is.
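The randomly-missing case admits a particularly simple fix, which can be sketched as follows. This is a minimal illustration rather than the talk’s exact technique, and the drop rate and counts are invented: if each event is dropped independently with a known (or separately estimated) probability, dividing the observed count by the probability of an event being recorded recovers an unbiased estimate of the true count.

```python
import random

random.seed(0)

# Hypothetical setup: a tracking pixel should fire on every click, but
# malfunctions and silently drops each event with probability DROP_RATE.
# Because events are missing completely at random, the observed count
# can be corrected by dividing by the recording probability.
DROP_RATE = 0.3          # assumed known (or estimated from a reliable subsample)
true_clicks = 10_000

observed = sum(1 for _ in range(true_clicks) if random.random() > DROP_RATE)

# Inverse-probability correction: E[observed] = true_clicks * (1 - DROP_RATE)
corrected = observed / (1 - DROP_RATE)

print(f"observed:  {observed}")
print(f"corrected: {corrected:.0f}  (true value: {true_clicks})")
```

The same comparison (raw vs. corrected count) also gives a running measurement of how lossy the collection pipeline is.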
Then I’ll move on to data which is fundamentally bad.
The first example I’ll cover is delayed reactions. When measuring ad clicks, the time between display and click is nearly instantaneous (minutes at most). When measuring clicks on links contained in an email, the time can be quite significant (days). The same is true for many relevant scenarios, including debt collection (e.g. at Simpl it takes 30 days to know if a user is delinquent).
I’ll discuss a technique for modeling and correcting for the bias introduced by delays, which comes by modeling the delay via survival models.
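One version of that correction can be sketched in a few lines. This is an illustrative simplification, not necessarily the exact model from the talk: assume conversion delays are exponentially distributed with a known mean (in practice the delay distribution would be fitted from the data), and divide the observed conversions by each user’s probability of having a visible conversion by now. All rates and counts below are invented.

```python
import math
import random

random.seed(1)

# Hypothetical setup: users signed up at some point in the last 30 days.
# Each converts with probability TRUE_RATE, after an exponentially
# distributed delay (mean MEAN_DELAY days). We only see conversions that
# have already happened, so recent signups are right-censored.
TRUE_RATE = 0.2
MEAN_DELAY = 10.0
N = 50_000

ages = [random.uniform(0, 30) for _ in range(N)]      # days since signup
observed = 0
for age in ages:
    if random.random() < TRUE_RATE:                   # will eventually convert
        delay = random.expovariate(1 / MEAN_DELAY)    # conversion delay
        if delay <= age:                              # conversion already visible
            observed += 1

naive_rate = observed / N                             # biased low

# Survival-model correction: for a user of age a, the chance their
# conversion is visible (given that they convert) is 1 - exp(-a / MEAN_DELAY).
# Solving E[observed] = rate * sum_i P(visible_i) for the rate:
expected_visible = sum(1 - math.exp(-a / MEAN_DELAY) for a in ages)
corrected_rate = observed / expected_visible

print(f"naive rate:     {naive_rate:.3f}")
print(f"corrected rate: {corrected_rate:.3f}  (true rate: {TRUE_RATE})")
```

The naive rate understates the truth because recent signups haven’t had time to convert; the survival-weighted estimate removes that bias without waiting out the full delay window.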
The second example I’ll cover is selection bias caused by using your model. In particular, I’ll discuss why height appears to be uncorrelated with player performance in the National Basketball Association, and why GRE scores do not seem to predict academic performance among students admitted to graduate school.
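The NBA effect can be reproduced with a toy collider-bias simulation (all numbers here are illustrative, not from the talk): make height and skill independent, let the league select on their sum, and the height–performance correlation among selected players is strongly attenuated relative to the full population.

```python
import math
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation, pure stdlib."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy model: ability = height + skill (independent, standardized);
# observed performance is ability plus game-to-game noise.
N = 200_000
height = [random.gauss(0, 1) for _ in range(N)]
skill = [random.gauss(0, 1) for _ in range(N)]
perf = [h + s + random.gauss(0, 1) for h, s in zip(height, skill)]

# The league only admits players whose ability clears a high bar,
# so the data we can study is selected on (height + skill).
admitted = [i for i in range(N) if height[i] + skill[i] > 2.0]

corr_all = pearson(height, perf)
corr_admitted = pearson([height[i] for i in admitted],
                        [perf[i] for i in admitted])

print(f"corr(height, performance), full population: {corr_all:.2f}")
print(f"corr(height, performance), admitted only:   {corr_admitted:.2f}")
```

Conditioning on admission makes height and skill negatively correlated among admitted players (a collider effect), which masks height’s real contribution to performance.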
The conclusion I want everyone to take away from this is that bad data is not a show stopper. It’s also not something you’re helpless to do anything about. Rather, it’s an obstacle, but one that can be overcome (or at least mitigated) with careful modeling.
It would be useful to be familiar with Bayes rule and a bit of linear algebra.
Chris is currently the head of data science at Simpl, India’s top Pay Later platform. In past lives he’s been a physicist, a high frequency stock trader, an automated marketer, a bodyguard and a nootropic drug courier. He’s a strong believer in correct statistics, clean code, and putting skin in the game to demonstrate your beliefs.
- AI Ethics, Impossibility Theorems and Tradeoffs - CrunchConf 2018 - https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf
- Bayesian Linear Regression and Generalized Linear Models - Fifth Elephant 2018 - https://www.chrisstucchio.com/pubs/slides/fifth_elephant_2018/slides2.html#1
- How to Change Your Opinion with Bayes Rule - PyDelhi 2017 - https://www.chrisstucchio.com/pubs/slides/pydelhi_2017/slides.html#1