1. Meet Peter Wang, co-founder of Anaconda Inc, and learn about why data privacy is the first step towards robust data management; the journey of building Anaconda; and Anaconda in enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their work with platform engineering where ground truths are the source of data.
3. Attend tutorials on Deep Learning with RedisAI, and on TransmogrifAI, Salesforce's open source AutoML library.
4. Discuss interesting problems to solve with data science in agriculture, SaaS perspective on multi-tenancy in Machine Learning (with the Freshworks team), bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
Shashank Jaiswal
@shashank94
Submitted Apr 8, 2019
Deep Learning based models have achieved high accuracy on Named Entity Recognition (NER) tasks for natural language datasets. However, their efficacy on practical domain-specific data, such as product titles, is often subpar due to several challenges: 1) labelled data is scarce or unavailable; 2) noise in the form of spelling errors, missing tokens, abbreviations, etc.; 3) variance in structure (product titles are not natural language, so there is no grammar); 4) manual labelling is costly. In this talk, I will describe how, at Clustr, we leveraged an existing sparse Knowledge Graph to generate a set of weakly labelled seed data and used it to bootstrap a deep Recurrent Neural Network based sequence labelling model. Further, we built upon the concepts of Active Learning to iteratively train our model with a minimal amount of manual labelling. The key takeaways of the talk will be: 1) how to deal with similar problems given limited availability of training data (even with different categories of data, e.g. images, sensor data, etc.); 2) understanding why a deep neural network architecture can generalise very easily if used correctly; and 3) how Active Learning is a promising paradigm for building practical machine learning based solutions to domain-specific problems riddled with scarce labelled data.
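To make the bootstrapping step concrete, here is a minimal sketch of generating weakly labelled seed data by matching title tokens against a lexicon derived from a knowledge graph. The lexicon, attribute names and sample title are illustrative placeholders, not Clustr's actual data or code:

```python
# Minimal sketch of knowledge-graph-based weak labelling for product titles.
# ATTRIBUTE_LEXICON is a hypothetical stand-in for entity names pulled from
# the knowledge graph.

ATTRIBUTE_LEXICON = {
    "brand": {"samsung", "nike", "dell"},
    "category": {"tv", "shoes", "laptop"},
    "size": {"32gb", "55inch", "uk9"},
}

def weak_label(title: str):
    """Tag each token with a BIO label if it matches a lexicon entry."""
    labels = []
    for token in title.lower().split():
        tag = "O"  # default: not part of any entity
        for attr, values in ATTRIBUTE_LEXICON.items():
            if token in values:
                tag = f"B-{attr.upper()}"
                break
        labels.append(tag)
    return list(zip(title.split(), labels))

print(weak_label("Samsung 55inch TV"))
# [('Samsung', 'B-BRAND'), ('55inch', 'B-SIZE'), ('TV', 'B-CATEGORY')]
```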
We will present answers to the following:
Industry-grade, DNN-based Advanced Named Entity Recognition module with Active Learning.
Why would just any generic approach not have worked?
How did our data's source and structure leave us with no previously adopted, off-the-shelf choices?
And how did we tackle the problems we hit along the way?
The following are the primary product goals of the company that are relevant to the ADAM project:
1) Universal Product Catalog
2) Aggregation and Market Analysis
3) Self-evolving Knowledge Graph
Introduction: the structure of the dataset, and the good, the bad and the ugly of it.
Definition and use cases:
1) Enrichment of the Knowledge Graph
2) Analytics
1) Smart automatic training data generation
2) State-of-the-art sequence tagging model
3) Active Learning approach
1) Zero ground truth, and no training data available whatsoever.
2) Multiple independent sources of data generation, hence enormous variance.
3) Short representations and extremely noisy text (see the normalisation sketch after this list).
4) Prone to extreme human error (not bias, but error).
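On the noise problem in particular, here is a hedged sketch of one common mitigation: fuzzy-matching noisy tokens against known vocabulary with Python's stdlib difflib. The talk text does not specify Clustr's actual cleaning pipeline, so KNOWN_TOKENS and the cutoff are assumptions:

```python
# Illustrative token normalisation for spelling noise, using only the stdlib.
import difflib

KNOWN_TOKENS = ["samsung", "laptop", "keyboard"]  # hypothetical vocabulary

def normalise(token: str, cutoff: float = 0.8) -> str:
    """Map a noisy token to its closest known form, if it is close enough."""
    match = difflib.get_close_matches(token.lower(), KNOWN_TOKENS, n=1, cutoff=cutoff)
    return match[0] if match else token

print(normalise("samsng"))   # -> samsung (misspelling repaired)
print(normalise("xyz123"))   # -> xyz123 (no close match, left untouched)
```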
1) How we leveraged the structure of the dataset (Stock-item and Stock-group).
2) How we used the existing knowledge base (CREGS).
3) How we improvised using information from other sources, like Amazon and GS1.
4) Why we created our own word embeddings, and how they helped us (see the training sketch after this list).
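As an illustration of the custom embeddings point, here is a minimal sketch of training word vectors on tokenised product titles with gensim's Word2Vec (gensim >= 4.0 API). The talk does not specify which embedding algorithm or hyperparameters Clustr used, so the corpus and settings below are placeholders:

```python
# Toy corpus of tokenised product titles; real training would use millions.
from gensim.models import Word2Vec

titles = [
    ["samsung", "55inch", "led", "tv"],
    ["dell", "inspiron", "laptop", "8gb"],
    ["nike", "running", "shoes", "uk9"],
]

model = Word2Vec(
    sentences=titles,
    vector_size=50,   # small dimension for a toy corpus
    window=3,         # short titles -> narrow context window
    min_count=1,      # keep rare tokens; product data is sparse
    epochs=50,
)

vec = model.wv["samsung"]   # 50-dim embedding for a token
print(vec.shape)            # (50,)
```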
1) Why BiLSTM and CRF were used, and why they are state of the art.
2) Why this specific architecture was needed, and why anything else wouldn't work (see the model sketch after this list).
3) What the accuracy was, and how well the model performed.
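For readers unfamiliar with the architecture, here is a hedged sketch of a BiLSTM-CRF tagger in PyTorch, using the third-party pytorch-crf package for the CRF layer. Dimensions, vocabulary size and tag count are placeholders, not the talk's actual configuration:

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional LSTM reads each title left-to-right and right-to-left.
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)
        # CRF layer models tag-transition constraints (e.g. I- must follow B-).
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        feats = self.emissions(self.lstm(self.embedding(tokens))[0])
        return -self.crf(feats, tags, mask=mask)  # negative log-likelihood

    def predict(self, tokens, mask):
        feats = self.emissions(self.lstm(self.embedding(tokens))[0])
        return self.crf.decode(feats, mask=mask)  # best tag sequence per title


model = BiLSTMCRF(vocab_size=5000, num_tags=7)
tokens = torch.randint(1, 5000, (2, 6))        # batch of 2 titles, 6 tokens each
mask = torch.ones(2, 6, dtype=torch.bool)
print(model.predict(tokens, mask))             # e.g. [[0, 3, 1, ...], [...]]
```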
1) Why Active Learning, when we can generate labels automatically?
2) How we designed and integrated the manual annotation module ourselves.
3) How quickly we reached maturity, with the minimum number of manually labelled data points.
4) Why extrinsic or intrinsic sampling was used (see the sampling sketch after this list).
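A minimal sketch of the uncertainty-sampling step of an active learning loop: rank unlabelled titles by the model's confidence in its best prediction and send only the least confident ones for manual labelling. The confidence scores below are toy values; in practice they would come from the trained tagger (e.g. the CRF best-path probability):

```python
def select_for_annotation(unlabelled, confidence, budget=100):
    """Pick the `budget` examples the current model is least sure about."""
    ranked = sorted(unlabelled, key=confidence)  # ascending confidence
    return ranked[:budget]

# Toy usage with hypothetical per-title confidences.
pool = ["samsng 55in tv", "dell laptop 8gb", "nke shoes uk9"]
toy_conf = {"samsng 55in tv": 0.42, "dell laptop 8gb": 0.93, "nke shoes uk9": 0.38}
print(select_for_annotation(pool, toy_conf.get, budget=2))
# ['nke shoes uk9', 'samsng 55in tv'] -> only these go to human annotators
```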
1) How to tackle the noisy data problem in the case of textual data.
2) Why a deep NN model plays an important role in generalisation.
3) Why Active Learning is a really important concept for dealing with the problem of having no labelled data.
IIT Roorkee graduate (Batch of 2017).
Data Scientist (1.8 years of experience at Clustr, Tally Analytics Pvt. Ltd.).
I have been a part of the Data Science team at Clustr, where I have worked on innovative projects involving deep learning, machine learning, complex data structures and dynamic programming algorithms.
I have been the primary owner of the ADAM project and have successfully taken it from a problem statement to a working solution: I brainstormed, coded and tackled all the problems faced along the journey of ADAM.
https://drive.google.com/file/d/16RS2visltwQVUZsczqF796oxl0hs3dlZ/view?usp=sharing