Debugging deep nets
Deep learning models are typically large neural networks with complex architectures containing millions of neurons. The number of parameters to be learned in such networks is huge, and finding the right set of parameters is a non-trivial task that requires a good amount of experience. You can run into all sorts of problems, such as exploding gradients, infinite losses, and overfitting. In this talk, I’ll address these issues and how to deal with them. I’ll also cover a few tools that can help you properly tune the parameters of a deep neural network.
1) The importance of data used in training and testing the system
2) The effect of weight initialization and a few tricks for faster convergence
3) Handling infinite losses and vanishing gradients
4) Handling overfitting
5) Useful tools for debugging
6) Some best practices for training deep nets
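As a small taste of items 2 and 3 above, here is a minimal NumPy sketch (illustrative only; the function names, the He-initialization choice, and the thresholds are my own, not taken from the talk) of a variance-scaled weight initializer, a global gradient-norm clip against exploding gradients, and a finite-loss guard:

```python
import numpy as np

def he_init(fan_in, fan_out, seed=0):
    """He initialization: weights drawn with variance 2/fan_in,
    a common choice for ReLU layers that tends to converge faster
    than naive small-random initialization."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def clip_gradients(grads, max_norm=5.0):
    """Rescale all gradients when their global L2 norm exceeds
    max_norm -- a standard remedy for exploding gradients.
    The default of 5.0 is a hypothetical value; tune per model."""
    total = np.sqrt(sum(np.sum(g * g) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / (total + 1e-12)) for g in grads]
    return grads

def loss_is_finite(loss):
    """Check the loss before applying an update, so a NaN/inf loss
    skips the step instead of silently corrupting the weights."""
    return bool(np.isfinite(loss))
```

In a training loop one would typically call `clip_gradients` on the backpropagated gradients each step and skip the parameter update whenever `loss_is_finite` returns False.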
A basic understanding of how deep neural networks work; familiarity with just the feed-forward part will do.
I’m a co-founder and Head of Research at Snapshopr. You can learn more about me and Snapshopr at https://in.linkedin.com/in/vivek-gandhi-565b747a, https://www.linkedin.com/company/aincubate and http://snapshopr.co/