Keeping Moore's law alive: Neuromorphic computing
Submitted by Anand Chandrasekaran (@madstreetden) on Monday, 15 June 2015
Section: Full Talk
Technical level: Beginner
This talk explores the implications of Neuromorphic Engineering, or ‘building brains in silicon’, for the development of massively parallel compute techniques such as deep learning.
The term ‘Moore’s law’ was coined by Carver Mead, the Caltech professor who is also the father of Neuromorphic Engineering. It refers to the observation, now more hope than reality, that advances in silicon technology will double compute capability roughly every 18 months. Recent advances in highly parallel compute methods, loosely based on the neural systems in our brains, are changing how computation is done. These techniques, collectively termed deep learning networks, burst onto the world stage for one reason: the ability to perform vast numbers of parallel computations on graphics cards. However, it is in truly custom hardware, such as that pioneered by the Neuromorphic community, that we will find the salvation of Moore’s law. When we blend powerful compute techniques with custom silicon architectures, we can keep alive the hope of continuing to double the world’s compute capability.
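As a minimal sketch of the arithmetic behind the doubling claim above (the function name and 18-month period are assumptions for illustration, not part of the talk):

```python
def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor in compute capability after `years` years,
    assuming a doubling every 18 months (1.5 years)."""
    return 2 ** (years / doubling_period_years)

# Over 15 years that is 10 doublings, i.e. a ~1024x increase.
print(moores_law_factor(15))  # 1024.0
```

Even small deviations from the 18-month cadence compound dramatically over a decade, which is why slowing transistor scaling pushes the field toward architectural gains instead.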
If you work in deep learning, or have heard how GPUs have revolutionized high-performance computing, this talk will take you to the bleeding edge of that world.
None; I will keep transistor physics out of this.
The speaker was one of the creators of Neurogrid, a system built at Stanford that was, until recently, the largest Neuromorphic system in the world. He is also the founder and CTO of Mad Street Den, a computer vision and AI startup based in Chennai.