Jul 2019
Thu 25, 09:15 AM – 05:45 PM IST
Fri 26, 09:20 AM – 05:30 PM IST
Amit Kapoor
Struggling to unpack the plethora of learning paradigms in ML? Let us have a dialogue to understand them better and build a clearer mental model for explaining them to everyone.
It all starts simply. You predict house prices in the Boston housing data and understand that this is Supervised Learning. You reduce the dimensions of the Iris flower data and understand that this is Unsupervised Learning. Then you want to know how the Chess & Go data was used and understand that there is Reinforcement Learning.
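The first two paradigms above can be sketched in a few lines of scikit-learn. This is an illustrative sketch only; the Iris data stands in for both tasks here, since the Boston housing dataset has been removed from recent scikit-learn releases:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Supervised learning: fit a model that maps features X to known labels y
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))

# Unsupervised learning: reduce the 4 feature dimensions to 2,
# using no labels at all
X2 = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X2.shape)
```

The contrast is in what each `fit` call consumes: the supervised model needs both `X` and `y`, while PCA never sees the labels.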
You move to time-series data and now you have something like Auto-Regressive learning. Or you dabble in text data, and try to get your head around all these word vectors and language models. Soon you are reading about an alphabet soup of suffix-paradigm-Learning – Semi-Supervised, Self-Supervised, Weakly-Supervised ... and now you are struggling to make sense of it all. Throw in a bit of the statistical modelling literature – Generative Learning vs. Discriminative Learning, Frequentist Learning vs. Bayesian Learning – and it no longer looks so simple.
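The auto-regressive idea is worth pausing on, because it blurs the supervised/unsupervised line: the "label" for each time step is simply the next value of the series, so supervision comes for free. A minimal sketch, fitting an AR(1) model by least squares on a synthetic series (the coefficient 0.8 is an assumed value for illustration):

```python
import numpy as np

# Synthetic AR(1) series: x[t] = 0.8 * x[t-1] + noise
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)

# Auto-regressive learning: predict each value from the previous one.
# The target y is just the series shifted by one step.
X, y = x[:-1].reshape(-1, 1), x[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated AR coefficient:", coef[0])
```

The estimated coefficient recovers something close to the generating value, which is the whole trick behind self-supervised setups like language models: the data supplies its own targets.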
This BoF session is an open dialogue on these Learning Paradigms, to start unpacking them and build a better mental model around them. Some of the questions we would be keen to unpack are:
If you have been doing Machine Learning for a while, and have been struggling with a mental model to understand and explain the Machine Learning paradigms, then this is a great session to come, participate, and share your challenges and perspectives.
However, if you have just started your learning journey in Machine Learning, then it is possible the session would leave you with more questions than answers. But then again it may serve as good initial weights for your own learning model!