Anthill Inside 2017

On theory and concepts in Machine Learning, Deep Learning and Artificial Intelligence. Formerly Deep Learning Conf.



Neural Stack: Augmenting Recurrent Neural Networks with Memory

Submitted Jun 11, 2017

Recurrent neural networks (RNNs) offer a compelling tool for processing natural language input in a straightforward sequential manner. However, they suffer from limitations that prevent them from easily modelling even simple transduction tasks. In this talk, we will discuss new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as stacks, queues, and deques (double-ended queues).


NLP Applications

  • Language modelling
  • Machine Translation
  • Parsing

Introduction to RNN [5 mins]

  • RNN architecture
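
To ground this section: a minimal numpy sketch of a vanilla (Elman) RNN cell, where the hidden state is updated as h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h). All names and dimensions below are illustrative, not taken from the talk materials.

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        """One step of a vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_prev + b_h)."""
        return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

    # Illustrative dimensions; all names here are hypothetical.
    input_dim, hidden_dim = 4, 8
    rng = np.random.default_rng(0)
    W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
    W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    b_h = np.zeros(hidden_dim)

    h = np.zeros(hidden_dim)                    # initial hidden state
    for x in rng.normal(size=(5, input_dim)):   # a length-5 input sequence
        h = rnn_step(x, h, W_xh, W_hh, b_h)     # h summarises the prefix so far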

Limitations of a simple RNN model [8 mins]

RNN Equivalence to Finite State Machine [10 mins]
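
One direction of this equivalence can be made concrete by hand-wiring an RNN with hard-threshold units to simulate a particular finite state machine. The sketch below is my own illustrative example, not material from the talk: it implements the two-state parity automaton, whose transition function (XOR of state and input) is computed by a tiny two-unit hidden layer.

    import numpy as np

    def step(z):
        """Hard threshold: the saturating limit of a sigmoid/tanh unit."""
        return (z > 0).astype(float)

    def parity_rnn(bits):
        """Hand-wired RNN simulating the two-state parity automaton.
        The hidden layer computes OR and AND of (state, input); their
        difference is XOR, exactly the automaton's transition function."""
        h = 0.0  # state 0: an even number of 1s seen so far
        for x in bits:
            s = h + x
            hidden = step(np.array([s - 0.5, s - 1.5]))  # [OR, AND]
            h = hidden[0] - hidden[1]                    # XOR
        return int(h)  # 1 iff the bit string has odd parity

    assert parity_rnn([1, 0, 1, 1]) == 1
    assert parity_rnn([1, 1]) == 0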

Stack [2 mins]

  • Stack API
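
As a baseline for the continuous version later in the talk, here is a minimal sketch of the discrete stack API (push, pop, top); the naming is illustrative.

    class Stack:
        """The classic discrete stack: last in, first out."""

        def __init__(self):
            self._items = []

        def push(self, value):
            self._items.append(value)

        def pop(self):
            return self._items.pop()

        def top(self):
            return self._items[-1]

    s = Stack()
    s.push('(')
    s.push('[')
    assert s.pop() == '['   # the most recently pushed item comes out first
    assert s.top() == '('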

Neural stack architecture [10 mins]

  • Architecture
  • Operation
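
Below is a minimal numpy sketch of a continuous stack in the style of Grefenstette et al. (2015), "Learning to Transduce with Unbounded Memory", which the talk's framing appears to follow. Each timestep pops strength u off the top, pushes a value vector v with strength d in [0, 1], and reads back a strength-weighted mixture of the topmost values. The controller network that emits (v, d, u) is omitted, and in a trained model these operations would run on autograd tensors so gradients flow through push and pop.

    import numpy as np

    class NeuralStack:
        """Continuous stack: values[i] is a stored vector, strengths[i]
        the degree to which it is still 'on' the stack."""

        def __init__(self):
            self.values = []
            self.strengths = []

        def step(self, v, d, u):
            """Pop total strength u off the top, then push v with strength d."""
            prev = list(self.strengths)          # strengths before this step
            for i in range(len(prev)):
                above = sum(prev[i + 1:])        # strength sitting above slot i
                self.strengths[i] = max(0.0, prev[i] - max(0.0, u - above))
            self.values.append(v)
            self.strengths.append(d)
            return self.read()

        def read(self):
            """Expected top of stack: read at most total weight 1, top down."""
            r = np.zeros_like(self.values[0])
            for i in range(len(self.values)):
                above = sum(self.strengths[i + 1:])
                r += min(self.strengths[i], max(0.0, 1.0 - above)) * self.values[i]
            return r

    stack = NeuralStack()
    stack.step(np.array([1.0, 0.0]), d=1.0, u=0.0)      # full push of [1, 0]
    r = stack.step(np.array([0.0, 1.0]), d=0.5, u=0.0)  # half push of [0, 1]
    # r == [0.5, 0.5]: half the read comes from the new value, half from below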

Results [5 mins]

  • Performance

Closing remarks [3 mins]

  • To facilitate better understanding of the talk, I will share a GitHub repo as a takeaway, so that the audience can go back, download the code, and play with it.

Speaker bio

Satyam Saxena is an ML researcher at Freshdesk. An IIT alumnus, his interests lie in NLP, machine learning, and deep learning. Prior to this, he was part of the ML group at Cisco. He was a visiting researcher at Vision Labs, IIIT Hyderabad, where he used computer vision and deep learning to build applications assisting visually impaired people. He presented some of this work at ICAT 2014 in Turkey.



