ADAM - Bootstrapping a Deep Neural Network Sequence Labelling Model with Minimal Labelling
Deep Learning based models have achieved high accuracy on Named Entity Recognition tasks for natural language datasets. However, their efficacy on practical domain-specific data, like product titles, is often subpar due to several challenges: 1) labelled data is scarce or unavailable; 2) noise in the form of spelling errors, missing tokens, abbreviations, etc.; 3) variance in structure (product titles are not natural language, so there is no grammar to rely on); 4) manual labelling is costly. In this talk, I will describe how, at Clustr, we leveraged an existing sparse Knowledge Graph to generate a set of weakly labelled seed data and used it to bootstrap a deep Recurrent Neural Network based sequence labelling model. Further, we built upon the concepts of Active Learning to iteratively train our model with a minimal amount of manual labelling. The key takeaways of the talk are: 1) how to deal with similar training-data availability problems (even with different categories of data, e.g. images, sensor data, etc.); 2) why a deep neural network architecture can generalise well if used correctly; and 3) how Active Learning is a promising paradigm for building practical machine learning based solutions to domain-specific problems riddled with scarce labelled data.
We will present answers to the following…
An industry-grade, DNN-based advanced Named Entity Recognition module with Active Learning.
Why would just any generic approach not have worked?
How did our data source and structure leave us with no existing approaches to adapt?
And how did we tackle the many problems along the way?
Following are the primary product goals of the company that are relevant to the ADAM project:
1)Universal Product Catalog
2)Aggregation and Market Analysis
3)Self-evolving Knowledge Graph
Introduction, structure, and the good, the bad and the ugly of the dataset.
WHY ADAM (Automatic Detection and Annotation Module)… A deep NN model:
1) Enrichment of Knowledge graph
2) For Analytics
Components of ADAM:
1)Smart Automatic Training Data Generation
2)State of the Art Sequence Tagging Model
3)Active Learning approach
Why the above architecture was chosen:
1)Zero ground truth and no training data available whatsoever.
2)Multiple independent sources of data generation, hence huge variance.
3)Short representations and extremely noisy text.
4)Prone to extreme human error (not bias, but error!).
Finally, the details of the architecture components and why they were necessary:
1)Smart Automatic Training Data Generation:
<>How did we leverage the structure of the dataset (Stock-item and Stock-group)?
<>How did we use the existing knowledge base, AKA CREGS?
<>How did we improvise using information from other sources like Amazon and GS1?
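The weak-labelling idea behind automatic training data generation can be sketched as a simple gazetteer lookup: entity dictionaries are mined from the knowledge base, then matched against tokens in raw product titles. The dictionaries, entity types, and BIO-style tags below are illustrative assumptions, not Clustr's actual CREGS schema:

```python
# Sketch of weak label generation: tag tokens in a product title by
# looking them up in dictionaries derived from a knowledge base.
# BRANDS / CATEGORIES are hypothetical stand-ins for mined entity lists.

BRANDS = {"samsung", "nike"}           # e.g. mined from the Knowledge Graph
CATEGORIES = {"tv", "shoes", "phone"}  # e.g. mined from stock-group names

def weak_label(title):
    """Return (token, tag) pairs using dictionary matching."""
    labels = []
    for token in title.lower().split():
        if token in BRANDS:
            labels.append((token, "B-BRAND"))
        elif token in CATEGORIES:
            labels.append((token, "B-CATEGORY"))
        else:
            labels.append((token, "O"))  # unknown tokens stay unlabelled
    return labels

print(weak_label("Samsung 40 inch TV"))
# [('samsung', 'B-BRAND'), ('40', 'O'), ('inch', 'O'), ('tv', 'B-CATEGORY')]
```

Labels produced this way are noisy (ambiguous tokens, missing entries), which is exactly why the model is then refined with Active Learning rather than trusted as ground truth.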
2)State of the Art Sequence Tagging Model:
<>Why did we create our own word embeddings, and how did they help us?
<>Why were BiLSTM and CRF used, and why are they state of the art?
<>Why was this specific architecture needed, and why would anything else not work?
<>What was the accuracy, and how well did the model perform?
3)Active Learning approach:
<>Why Active Learning, when we can generate labels automatically?
<>How did we integrate and design the manual annotation module ourselves?
<>How quickly did we reach maturity, and with how few manually labelled data points?
<>Why were extrinsic and intrinsic sampling used?
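To give a flavour of the sampling step: a standard Active Learning strategy is least-confidence sampling, which sends the examples the model is least sure about to human annotators, so each manual label buys maximal information. This is a generic textbook sketch with made-up probabilities, not necessarily ADAM's exact sampling scheme:

```python
# Least-confidence sampling: route the titles whose predicted tag
# sequence has the lowest model confidence to human annotators.
# The probabilities below are invented for illustration.

def least_confident(predictions, k):
    """predictions: {title: model confidence}; return k least-confident titles."""
    return sorted(predictions, key=predictions.get)[:k]

preds = {
    "samsung 40in tv": 0.95,   # model is confident; no manual label needed
    "nk shoes sz9": 0.40,      # noisy title; model unsure -> annotate first
    "mobile cover red": 0.60,
}
print(least_confident(preds, 2))
# ['nk shoes sz9', 'mobile cover red']
```

Selected titles are labelled manually, added to the training set, and the model is retrained; the loop repeats until accuracy plateaus.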
Conclusions that we will showcase, as per 5th Elephant:
1)How to tackle the noisy-data problem in the case of textual data?
2)Why does a deep NN model play an important role in generalisation?
3)Why is Active Learning a really important concept for dealing with the problem of having no labelled data?
IIT Roorkee Grad. (Batch 2017)
Data Scientist (Exp: 1.8 yrs at Clustr, Tally Analytics pvt. ltd.)
I have been a part of the Data Science team at Clustr.
I have worked on some innovative projects involving Deep Learning, Machine Learning, complex data structures, and dynamic programming algorithms.
I have been the primary owner of the ADAM project and have successfully converted it from a problem statement into a working solution. I brainstormed, coded, and tackled all the problems faced during ADAM's journey.