GuidedLDA: A Python Package using Semi-Supervised Topic Modelling by Incorporating Lexical Priors
Topic models have great potential for helping users understand document corpora. This potential is impeded by their purely unsupervised nature, which often leads to topics that are neither entirely meaningful nor effective in extrinsic tasks. In this talk, I plan to explain how we wrote our own form of Latent Dirichlet Allocation (LDA) in order to guide topic models to learn topics of specific interest to a user. I will also talk about why we proposed a simple and effective solution known as the Semi-Supervised Guided Topic Model (GuidedLDA), and the process of open-sourcing everything on GitHub.
[0-15mins]: Introduction to topic modeling and an intuition for LDA (Latent Dirichlet Allocation), with some business use cases and an easy-to-understand news-article example.
[15-20mins]: What is GuidedLDA, and why did we choose it? Understanding the problems with regular, unsupervised LDA and the shift to semi-supervised GuidedLDA.
[20-30mins]: How does generic LDA work? An overview of the workings of generic LDA in terms of Bayesian probability, with pertinent examples.
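As a reference point for this section: the Bayesian view outlined above is usually realised via collapsed Gibbs sampling, whose standard update for the topic assignment $z_i$ of token $i$ (word $w$, document $d$) is:

```latex
P(z_i = k \mid z_{-i}, w, d) \;\propto\;
\frac{n_{d,k}^{-i} + \alpha}{\sum_{k'} \left( n_{d,k'}^{-i} + \alpha \right)}
\cdot
\frac{n_{k,w}^{-i} + \beta}{\sum_{w'} \left( n_{k,w'}^{-i} + \beta \right)}
```

Here $n_{d,k}$ counts tokens in document $d$ assigned to topic $k$, $n_{k,w}$ counts assignments of word $w$ to topic $k$, the superscript $-i$ excludes the current token, and $\alpha$, $\beta$ are the Dirichlet priors.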
[30-35mins]: What happens when we seed the documents? A detailed explanation of how GuidedLDA works in terms of Bayesian probability, with relevant examples, and how it improves on generic LDA.
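To make the seeding idea concrete, here is a minimal toy sketch (not the talk's actual code) of collapsed Gibbs sampling in which seed words are biased toward their seed topic at initialisation — one simple way to realise the "guidance". The corpus, seed lists, and hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: word ids per document (vocabulary of 6 words, 2 topics).
docs = [[0, 1, 0, 2], [3, 4, 5, 4], [0, 2, 1, 1], [4, 5, 3, 5]]
V, K = 6, 2
alpha, beta = 0.1, 0.01

# The "guided" part: seed words start in their seed topic with high
# probability instead of being assigned uniformly at random.
seed_topics = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}  # word id -> topic
seed_confidence = 0.9

ndk = np.zeros((len(docs), K))   # doc-topic counts
nkw = np.zeros((K, V))           # topic-word counts
nk = np.zeros(K)                 # per-topic token counts
z = []                           # topic assignment per token

for d, doc in enumerate(docs):
    zd = []
    for w in doc:
        if w in seed_topics and rng.random() < seed_confidence:
            t = seed_topics[w]          # biased (seeded) initialisation
        else:
            t = int(rng.integers(K))    # uniform initialisation
        zd.append(t)
        ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    z.append(zd)

# Standard collapsed Gibbs sweeps; the seeded initialisation is the only
# difference from plain LDA in this toy version.
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            t = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = t
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1

# Smoothed topic-word distribution; each row sums to 1.
topic_word = (nkw + beta) / (nkw.sum(axis=1, keepdims=True) + V * beta)
print(topic_word.round(2))
```

Because the seed anchors words 0-2 to topic 0 and words 3-5 to topic 1, the learned topics end up aligned with the seeds rather than with an arbitrary permutation — which is exactly the benefit over generic LDA.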
[35-37mins]: Using GuidedLDA. How to use the GuidedLDA Python package available on GitHub, with sample code for demonstration.
[37-40mins]: Conclude with GuidedLDA stats and key takeaways, and motivate the audience: a small idea can emerge from anywhere, even a small startup in Bangalore.
[40-90mins]: Show a small application where we first clean up a publicly available dataset and then perform topic modeling using both regular LDA and GuidedLDA.
Participants must clone or download the following Jupyter notebook: https://github.com/NThakur20/topic-modeling
Participants must bring their own laptops and should have a basic idea of how to run virtual environments and Jupyter notebooks.
I am a quick, perpetual learner, keen to explore the realm of data analytics and data science. I am deeply excited about the times we live in, the rate at which data is being generated, and how it is being transformed into an asset. I am well versed in Natural Language Processing, Machine Learning, and Signal Processing, and I have a keen interest in interdisciplinary concepts involving Machine Learning.