Neural Stack: Augmenting Recurrent Neural Networks with Memory
SATYAM SAXENA
@sam89
Recurrent neural networks (RNNs) offer a compelling tool for processing natural language input in a straightforward sequential manner. However, they suffer from limitations that prevent them from easily modelling even simple transduction tasks. In this talk, we will discuss new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as stacks, queues, and deques.
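To give a feel for what "continuously differentiable stack" means, here is a minimal NumPy sketch of the push/pop/read scheme roughly following Grefenstette et al. (2015), "Learning to Transduce with Unbounded Memory": each stack entry carries a strength in [0, 1], and push/pop amounts are real-valued signals rather than discrete operations. The class and method names below are illustrative only and are not the API of the linked repository.

```python
import numpy as np

class NeuralStack:
    """Continuous-stack sketch: strengths in [0, 1] replace discrete
    membership, so push, pop and read are (piecewise) differentiable
    in the push/pop signals."""

    def __init__(self, dim):
        self.dim = dim
        self.values = []      # value vectors, oldest first
        self.strengths = []   # one scalar strength per value

    def step(self, value, push, pop):
        """Pop `pop` units of strength, push `value` with strength `push`,
        then return the read vector (the top ~1.0 of strength, blended)."""
        # Pop: remove strength from the top of the stack downwards.
        remaining = pop
        for i in reversed(range(len(self.strengths))):
            removed = min(self.strengths[i], remaining)
            self.strengths[i] -= removed
            remaining -= removed
        # Push the new value with the given strength.
        self.values.append(np.asarray(value, dtype=float))
        self.strengths.append(float(push))
        # Read: strength-weighted sum of at most 1.0 total strength from the top.
        read = np.zeros(self.dim)
        budget = 1.0
        for v, s in zip(reversed(self.values), reversed(self.strengths)):
            take = min(s, budget)
            read += take * v
            budget -= take
            if budget <= 0.0:
                break
        return read

# Usage: a "half-push" blends the top two entries in the read vector.
stack = NeuralStack(dim=2)
stack.step([1.0, 0.0], push=1.0, pop=0.0)      # push a with full strength
r = stack.step([0.0, 1.0], push=0.5, pop=0.0)  # push b with strength 0.5
# r == 0.5 * b + 0.5 * a
```

Because every operation is built from sums, min and max, gradients can flow through the push/pop signals, which is what lets an RNN controller learn when to push and pop.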
Outline
NLP Applications
- Language modelling
- Machine Translation
- Parsing
Introduction to RNN [5 mins]
- RNN architecture
Limitations of a simple RNN model [8 mins]
RNN Equivalence to Finite State Machine [10 mins]
Stack [2 mins]
- Stack API
Neural stack architecture [10 mins]
- Architecture (see the controller sketch after this outline)
- Operation
Results [5 mins]
- Performance
Closing remarks [3 mins]
- To facilitate a better understanding of the talk, I will share a GitHub repo as a takeaway, so that the audience can go back, download the code, and play with it.
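For those who want a preview of how the stack plugs into a recurrent controller, here is a rough sketch reusing the NeuralStack class above. All weight names and shapes are assumptions for illustration, not the design in the linked repository: a plain RNN cell emits a push strength, a pop strength, and a value vector at each step, and the stack's read vector is fed back into the next hidden state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StackRNN:
    """Hypothetical controller sketch: an RNN cell driving the NeuralStack
    defined earlier. Names and dimensions are illustrative only."""

    def __init__(self, in_dim, hid_dim, stack_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(scale=0.1, size=(hid_dim, in_dim))
        self.W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
        self.W_r = rng.normal(scale=0.1, size=(hid_dim, stack_dim))
        self.w_push = rng.normal(scale=0.1, size=hid_dim)
        self.w_pop = rng.normal(scale=0.1, size=hid_dim)
        self.W_val = rng.normal(scale=0.1, size=(stack_dim, hid_dim))
        self.stack = NeuralStack(stack_dim)
        self.h = np.zeros(hid_dim)
        self.read = np.zeros(stack_dim)

    def step(self, x):
        # The hidden state sees the current input and the previous stack read.
        self.h = np.tanh(self.W_x @ x + self.W_h @ self.h + self.W_r @ self.read)
        push = sigmoid(self.w_push @ self.h)   # how much to push
        pop = sigmoid(self.w_pop @ self.h)     # how much to pop
        value = np.tanh(self.W_val @ self.h)   # what to push
        self.read = self.stack.step(value, push, pop)
        return self.h, self.read
```

In training, the push/pop strengths and value vectors are learned end to end from the task loss; the sketch above only shows the forward pass.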
Speaker bio
Satyam Saxena is an ML researcher at Freshdesk. An IIT alumnus, his interests lie in NLP, machine learning, and deep learning. Prior to this, he was part of the ML group at Cisco. He was a visiting researcher at Vision Labs, IIIT Hyderabad, where he used computer vision and deep learning to build applications assisting visually impaired people. He presented some of this work at ICAT 2014 in Turkey. https://www.linkedin.com/in/sam-iitj/
Links
- Code associated with this talk: https://github.com/sam-iitj/NeuralStack
Slides
https://docs.google.com/presentation/d/17UOEgy7dq6rdJZ0c63dDo6Fj148d-iYk5ghvXDj3Sqk/edit?usp=sharing