Event dates: Thu 25 Jul 2019, 09:15 AM – 05:45 PM IST and Fri 26 Jul 2019, 09:20 AM – 05:30 PM IST
Accepting submissions till 15 Jun 2019, 01:00 PM
The eighth edition of The Fifth Elephant will be held in Bangalore on 25 and 26 July. A thousand data scientists, ML engineers, data engineers and analysts will gather at the NIMHANS Convention Centre to discuss data science, machine learning and data engineering in practice.
##Highlights:
1. Meet Peter Wang, co-founder of Anaconda Inc, and learn about why data privacy is the first step towards robust data management; the journey of building Anaconda; and Anaconda in enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their platform engineering work, where ground truths are the source of data.
3. Attend tutorials on deep learning with RedisAI, and on TransmogrifAI, Salesforce’s open-source AutoML library.
4. Discuss interesting problems to solve: data science in agriculture, a SaaS perspective on multi-tenancy in machine learning (with the Freshworks team), and bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
##Why you should attend
##Full schedule published here: https://hasgeek.com/fifthelephant/2019/schedule
##Contact details:
For more information about The Fifth Elephant or sponsorships, call +91-7676332020 or email info@hasgeek.com
#Sponsors:
Sponsorship Deck.
Email sales@hasgeek.com for bulk ticket purchases and for sponsoring the 2019 edition of The Fifth Elephant.
#Platinum Sponsor
#Community Sponsors
#Exhibition Sponsors
#Bronze Sponsor
Jaidev Deshpande
@jaidevd
Submitted Apr 21, 2019
As technology gets cheaper and more available, we start taking it for granted. It is easier than ever to perform fairly exciting AI tasks with as little as tens of lines of code. As data grows, our approach to ML problems often, and understandably, becomes haphazard. As GPUs become more widely available, we subconsciously assume that throwing enough artificial neurons at a problem will eventually solve it. Whether or not this actually solves the problem, it is not uncommon for data scientists to realize, unfortunately only in hindsight, that most of the iterations required to build a successful model were unnecessary. Ironically, these ‘missteps’ are essential to finding the right solution. Solving an ML problem is like traversing a minefield where the safest path can only be determined by blowing up a lot of mines: we can only find the solution by making mistakes. This ‘see what sticks’ approach cannot be completely avoided, but it can be curbed significantly with a structured approach to running machine learning experiments. This structured approach, its theory and practice, its design principles and its software implementation, is what this talk is about.
This talk is inspired, in part, by Peter Bull & Isaac Slavitt’s SciPy 2016 tutorial on Developer Lifehacks for the Jupyter Data Scientist. Bull & Slavitt’s tutorial is the result of their effort to bring best practices from software engineering to data science. In the same spirit, this talk expands on that idea.
Most ML problems are, by design, highly iterative. Therefore they can, at least in theory, be automated. However, the lack of a structured workflow prevents us from exploiting the redundancies in ML practice. The ideal way of managing machine learning experiments is with a lab journal. Each machine learning experiment can be reasonably characterized by a hypothesis, a procedure, and the inferences drawn from its results. A well-kept journal helps practitioners avoid repeating mistakes and narrow down to the right approach.
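As a rough illustration of the lab-journal idea (this is not the talk's tooling, and the `Experiment` and `ExperimentJournal` names below are hypothetical), a minimal sketch in Python might record each experiment as a hypothesis, a procedure and its resulting metrics, appended to a JSON-lines file:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class Experiment:
    """One journal entry: what we expected, what we did, what we got."""
    hypothesis: str                                 # e.g. "L2 regularisation improves val accuracy"
    procedure: dict                                 # model, hyperparameters, data version
    results: dict = field(default_factory=dict)     # metrics filled in after the run
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class ExperimentJournal:
    """Append-only JSON-lines journal of experiments (hypothetical helper)."""

    def __init__(self, path="journal.jsonl"):
        self.path = Path(path)

    def log(self, experiment: Experiment):
        # Append one experiment as a single JSON line.
        with self.path.open("a") as f:
            f.write(json.dumps(asdict(experiment)) + "\n")

    def entries(self):
        # Read back every recorded experiment for review or search.
        if not self.path.exists():
            return []
        with self.path.open() as f:
            return [json.loads(line) for line in f]


# Usage: record the hypothesis and procedure up front, add results after the run.
journal = ExperimentJournal()
exp = Experiment(
    hypothesis="L2 regularisation improves validation accuracy",
    procedure={"model": "LogisticRegression", "C": 0.1, "data": "train-v2"},
)
exp.results = {"val_accuracy": 0.87}   # taken from the actual training run
journal.log(exp)
```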
This talk will introduce Kepler, a fully open-source framework for managing ML experiments. Kepler is written in Python and optimized for deep learning experiments. It runs sanity checks on models and data, enforcing the idea that training should not begin until the model is sane enough and the data is properly prepared. It enforces the DRY principle by keeping track of performance metrics across multiple experiments. It allows users to log experiments carried out on sklearn estimators and keras models. It also behaves like a hyperparameter grid manager, alerting the user if they accidentally re-run the same experiment on the same data with the same parameters. It has some meta-learning features which allow for an end-to-end approach to machine learning problems. Ultimately, it provides a searchable interface for all projects, models and experiments under its umbrella, facilitating the design and automation of efficient ML workflows.
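To make the duplicate-run check concrete, here is an assumed sketch (it does not use Kepler's actual API; the `experiment_key` and `run_once` helpers are invented for illustration) of how an experiment can be fingerprinted from the estimator's class, its hyperparameters and a hash of the data, so that an identical re-run is refused:

```python
import hashlib
import json

import numpy as np
from sklearn.linear_model import LogisticRegression


def experiment_key(estimator, X, y):
    """Fingerprint an experiment: estimator class + hyperparameters + data hash.

    Illustrative stand-in for this kind of bookkeeping; not Kepler's API.
    """
    params = json.dumps(estimator.get_params(), sort_keys=True, default=str)
    data_hash = hashlib.sha256(
        np.ascontiguousarray(X).tobytes() + np.ascontiguousarray(y).tobytes()
    ).hexdigest()
    payload = estimator.__class__.__name__ + params + data_hash
    return hashlib.sha256(payload.encode()).hexdigest()


seen = set()


def run_once(estimator, X, y):
    """Fit the estimator only if this exact experiment has not been run before."""
    key = experiment_key(estimator, X, y)
    if key in seen:
        print("Skipping: same estimator, parameters and data already run.")
        return None
    seen.add(key)
    return estimator.fit(X, y)


# Usage: the second call is detected as a duplicate and skipped.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
run_once(LogisticRegression(C=1.0), X, y)   # runs
run_once(LogisticRegression(C=1.0), X, y)   # skipped
```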
Jaidev is a senior data scientist at Gramener, where he works on building products for other data scientists. His interests lie at the intersection of data science, software engineering and continuous integration. He’s an active contributor to the scientific Python stack, and loves to apply machine learning and analytics to personal productivity. You’re likely to run into him at FOSS community events.