iCASSTLE: Imbalanced Classification Algorithm for Semi Supervised Text Learning
Information in the form of text abounds on the web today and can be mined to solve multifarious problems. Customer reviews, for instance, flow in from multiple sources in the thousands per day and can be leveraged to obtain valuable insights. Our goal is to extract cases of a rare event, e.g., product recalls, ethics allegations, legal concerns, or threats to product safety, from this enormous amount of data. Manual identification of such cases is extremely labour-intensive as well as time-sensitive, and failure to report them can have a severe impact on the industry’s overall health and dependability; missing even a single case may lead to huge penalties in terms of customer experience, product liability and industry reputation. In this paper, we discuss text classification with Positive and Unlabeled data, i.e., PU classification, where the only class for which training instances are available is a rare event. In iCASSTLE, we propose a two-staged approach where Stage I leverages three unique components of text mining to procure representative training data containing instances of both classes in the right proportion, and Stage II uses the results of Stage I to run a semi-supervised classification. We applied this to multiple datasets differing in the nature of product safety as well as the nature of imbalance, and iCASSTLE is shown to outperform state-of-the-art methods on the relevant use-cases.
Keywords: Text Mining, PU Classification, Semi Supervised Text Classification, Sentiment Analysis, Latent Semantic Analysis, Word Frequency, Sparsity Treatment, GloVe, Class Imbalance, Recall Maximization, Data Prioritization
The session will kick off with the concept of rare events and how they differ from quintessential anomalies.
Discuss the concept of One-Class Classification in the PU set-up, and its challenges in the presence of class imbalance.
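To make the PU set-up concrete, here is a minimal baseline sketch (not the iCASSTLE method itself): treat every unlabeled document as tentatively negative, compensate for the imbalance with class weights, and rank the unlabeled pool by predicted positive probability. The toy reviews and threshold-free ranking are illustrative assumptions.

```python
# Minimal PU-learning baseline: unlabeled documents are treated as
# tentative negatives; class weights offset the imbalance.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

positives = ["battery caught fire", "product recall issued for defect"]
unlabeled = ["great value for money", "fast shipping, happy customer",
             "works as described", "fire hazard, unit overheated badly"]

docs = positives + unlabeled
y = np.array([1] * len(positives) + [0] * len(unlabeled))  # unlabeled ~ negative

X = TfidfVectorizer().fit_transform(docs)
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Score the unlabeled pool; the highest-scoring items become
# candidate positives for a human reviewer or a later stage.
scores = clf.predict_proba(X[len(positives):])[:, 1]
ranked = sorted(zip(scores, unlabeled), reverse=True)
```

In practice the top-ranked unlabeled documents would feed the next stage rather than being accepted blindly, since the "negative" labels are noisy by construction.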
Basics of Text Classification: Word2Vec, sparsity treatment, LSA, GloVe. This is where concepts of matrix factorization come in handy.
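As a small illustration of the matrix-factorization connection, LSA can be sketched as a truncated SVD of the TF-IDF term-document matrix: sparse, high-dimensional vectors become dense, low-rank "topic" vectors. The documents and component count below are illustrative assumptions.

```python
# LSA sketch: factorize the sparse TF-IDF matrix into a dense low-rank space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["the battery overheats and smells of burning",
        "burning smell reported, battery recall likely",
        "excellent product, five stars",
        "five stars, excellent quality product"]

X = TfidfVectorizer().fit_transform(docs)   # sparse, one row per document
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)                    # dense, shape (4, 2)
```

Documents that share vocabulary (the two safety complaints, the two praise reviews) end up close together in the reduced space, which is what makes LSA useful for sparsity treatment.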
Discuss basic Semi-Supervised Classification: the what and the why. This is where the concept of entropy comes in handy.
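One common way entropy enters semi-supervised classification is as a confidence filter in self-training: predictions with low entropy (the model is "sure") are promoted to pseudo-labels, while high-entropy predictions stay unlabeled. The probabilities and threshold below are illustrative assumptions, not values from the paper.

```python
# Entropy as a pseudo-labeling filter in a self-training loop.
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical class-probability outputs for three unlabeled documents.
probas = [np.array([0.98, 0.02]),   # low entropy  -> pseudo-label
          np.array([0.55, 0.45]),   # high entropy -> keep unlabeled
          np.array([0.05, 0.95])]   # low entropy  -> pseudo-label

THRESHOLD = 0.3  # max entropy (nats) allowed for a pseudo-label
pseudo = [int(p.argmax()) for p in probas if entropy(p) < THRESHOLD]
# pseudo == [0, 1]: only the two confident predictions are kept.
```

The pseudo-labeled documents are then added to the training set and the model is refit, repeating until no confident predictions remain.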
Discuss the detailed methodology of iCASSTLE in an illustrative problem premise.
Illustrate comparative performance on simulated as well as real datasets.
Illustrate metrics to gauge continued performance.
Impact and Next Steps
Discuss benefits and generalization, followed by areas of improvement.
It is advised that the audience be well versed in the basics of text classification; if not, we will cover them briefly. The audience should be comfortable with concepts of linear algebra, matrix factorization and regression.
Speaker : Mainak Mitra, Senior Data Scientist, Walmart Labs|Enterprise Data Science
- iCASSTLE on IEEE Xplore: https://ieeexplore.ieee.org/document/8614190