Anthill Inside 2019: Saturday, 23 November 2019, 08:30 AM – 05:30 PM IST
Accepting submissions till 01 Nov 2019, 04:20 PM
## About the 2019 edition:
The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule
The conference has three tracks:
# Who should attend Anthill Inside:
Anthill Inside is a platform for:
For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com
Submitted by Debanjana Banerjee (@debanjana) on Jun 15, 2019
Information in the form of text can be found in abundance on the web today, and it can be mined to solve a wide range of problems. Customer reviews, for instance, flow in from multiple sources in the thousands per day and can be leveraged to obtain many insights. Our goal is to extract cases of a rare event, e.g., product recalls, allegations of ethics or legal concerns, or threats to product safety, from this enormous amount of data. Manually identifying such cases is extremely labour-intensive as well as time-sensitive, and failure to do so can have a severe impact on the industry’s overall health and dependability; missing even a single case may lead to huge penalties in terms of customer experience, product liability and industry reputation. In this talk, we will discuss text classification through positive and unlabeled data, i.e., PU classification, where the only class for which training instances are available is a rare event. In iCASSTLE, we propose a two-staged approach where Stage I leverages three unique components of text mining to procure representative training data containing instances of both classes in the right proportion, and Stage II uses the results from Stage I to run a semi-supervised classification. We applied this to multiple datasets differing in the nature of product safety as well as the nature of imbalance, and iCASSTLE has been shown to perform better than state-of-the-art methods for the relevant use cases.
Keywords: Text Mining, PU Classification, Semi Supervised Text Classification, Sentiment Analysis, Latent Semantic Analysis, Word Frequency, Sparsity Treatment, GloVe, Class Imbalance, Recall Maximization, Data Prioritization
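To make the PU setting concrete, here is a minimal sketch (not the iCASSTLE pipeline) of the naive baseline that treats every unlabeled document as a negative. The toy texts, labels and scikit-learn choices are illustrative assumptions; the point is that positives hiding in the unlabeled pool bias such a baseline, which is the gap PU methods address.

```python
# Naive baseline for the PU setting: treat every unlabeled document as a
# negative and fit an ordinary text classifier. Positives hidden in the
# unlabeled pool bias this model, which is what PU methods try to correct.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 1 = manually confirmed rare-event case, 0 = unlabeled.
texts = ["battery caught fire after a week", "great value for money",
         "recalled units were never replaced", "arrived on time, works fine"]
pu_label = np.array([1, 0, 1, 0])

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
baseline.fit(texts, pu_label)

# Score of a new document for the rare-event class.
print(baseline.predict_proba(["customer reports a product safety threat"])[:, 1])
```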
Introduction
The session will kick off with the concept of rare events and how they differ from quintessential anomalies.
Discuss the concept of one-class classification in the PU setup, and its challenges in the presence of class imbalance.
Basics of text classification: Word2Vec, sparsity treatment, LSA, GloVe. This is where concepts of matrix factorization come in handy.
Discuss basic semi-supervised classification: the what and the why. This is where the concept of entropy comes in handy (both LSA and entropy are sketched in code after this list).
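As a rough illustration of two of the building blocks named above, the sketch below computes an LSA representation by factorizing a TF-IDF matrix with truncated SVD, and defines the entropy of a predicted class distribution, the usual uncertainty measure that self-training-style semi-supervised methods threshold on. The documents and component count are made-up examples, not the talk's data.

```python
# LSA via low-rank factorization of a TF-IDF matrix, plus prediction entropy
# as a measure of classifier uncertainty for semi-supervised self-training.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["product recall notice issued", "fast shipping and great quality",
        "possible safety hazard reported", "works exactly as described"]

tfidf = TfidfVectorizer().fit_transform(docs)       # sparse term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # truncated SVD = LSA
doc_vectors = lsa.fit_transform(tfidf)              # dense topic-space embeddings

def prediction_entropy(probs):
    """Entropy of a predicted class distribution; high means uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

print(doc_vectors.shape)                                        # (4, 2)
print(prediction_entropy(np.array([[0.9, 0.1], [0.5, 0.5]])))   # low, then high
```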
Methodology
Discuss the detailed methodology of iCASSTLE in an illustrative problem setting.
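The specific Stage I components of iCASSTLE are the subject of the talk itself. Purely as a hedged illustration of the two-stage shape described in the abstract, and not the authors' actual method, a common generic PU pattern first mines "reliable negatives" from the unlabeled pool and then retrains on positives plus those negatives:

```python
# Generic two-stage PU sketch (NOT the iCASSTLE Stage I components):
# Stage I  - score the pool with a rough model of positives vs. unlabeled and
#            take the lowest-scoring unlabeled documents as reliable negatives.
# Stage II - retrain on positives + reliable negatives and score everything.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def two_stage_pu(texts, pu_label, neg_fraction=0.2):
    pu_label = np.asarray(pu_label)
    X = TfidfVectorizer().fit_transform(texts)
    pos = pu_label == 1
    unl_idx = np.where(~pos)[0]

    # Stage I: rough scorer trained on positives vs. everything unlabeled.
    stage1 = LogisticRegression(max_iter=1000).fit(X, pos.astype(int))
    scores = stage1.predict_proba(X)[:, 1]
    n_neg = max(1, int(neg_fraction * len(unl_idx)))
    reliable_neg = unl_idx[np.argsort(scores[unl_idx])[:n_neg]]

    # Stage II: retrain on positives + reliable negatives only.
    train_idx = np.concatenate([np.where(pos)[0], reliable_neg])
    y_train = np.concatenate([np.ones(pos.sum()), np.zeros(n_neg)])
    stage2 = LogisticRegression(max_iter=1000).fit(X[train_idx], y_train)
    return stage2.predict_proba(X)[:, 1]   # rare-event score for every document
```

In practice, Stage II could equally be a semi-supervised learner that keeps the rest of the pool as unlabeled data instead of discarding it.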
Experiment
Illustrate comparative performance on simulated as well as real datasets.
Illustrate metrics to gauge continued performance (see the sketch below).
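For the evaluation itself, the quantities that typically matter in a rare-event, recall-maximization setting are recall on the positive class and precision at the top of the ranked list. The sketch below, with made-up labels and scores, shows how both can be computed.

```python
# Metrics for rare-event extraction: recall on the positive class (missed
# cases are the expensive error) and precision among the k highest-scored
# documents (how much manual review effort the ranking actually saves).
import numpy as np
from sklearn.metrics import recall_score, precision_score

def precision_at_k(y_true, scores, k):
    """Fraction of true positives among the k highest-scored documents."""
    top_k = np.argsort(scores)[::-1][:k]
    return float(np.mean(np.asarray(y_true)[top_k]))

y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1])            # hypothetical held-out labels
scores = np.array([.9, .2, .4, .7, .1, .3, .6, .8])    # hypothetical model scores
y_pred = (scores >= 0.5).astype(int)

print(recall_score(y_true, y_pred))       # 1.0: no rare case missed
print(precision_score(y_true, y_pred))    # 0.75: one false alarm at this cutoff
print(precision_at_k(y_true, scores, 3))  # 1.0: top-3 review queue is all hits
```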
Impact and Next Steps
Discuss benefits and generalization, followed by areas of improvement.
The audience is advised to be well versed in the basics of text classification; if not, we will try to cover them. The audience should be comfortable with the concepts of linear algebra, matrix factorization and regression.
Speaker: Mainak Mitra, Senior Data Scientist, Walmart Labs | Enterprise Data Science