Unsupervised Thinking
a podcast about neuroscience, artificial intelligence and science more broadly

Tuesday, February 26, 2019

Episode 42: Learning Rules, Biological vs. Artificial

For decades, neuroscientists have explored how neurons update and control the strength of their connections. For slightly fewer decades, machine learning researchers have been developing ways to train the connections between the artificial neurons in their networks. The former endeavour shows us what happens in the brain, and the latter shows us what's actually needed to make a system that works. Unfortunately, these two research directions have not settled on the same rules of learning. In this episode we talk about attempts to make artificial learning rules more biologically plausible, in order to understand how the brain is capable of such powerful learning. In particular, we focus on different models of biologically plausible backpropagation, the standard method of training artificial neural networks. We start by explaining both backpropagation and biological learning rules (such as spike-timing-dependent plasticity) and the ways in which the two differ. We then describe four different models that tackle how backpropagation could be done by the brain. Throughout, we talk dendrites, cell types, and the role of other biological bits and bobs, and ask "should we actually expect to see backprop in the brain?" We end by discussing which of the four options we liked most and why!
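To make the contrast concrete, here's a minimal sketch in Python (our own toy illustration with made-up numbers, not code from the papers we read). The backprop update needs an error signal carried backwards through the network's own weights, while a pairwise STDP rule uses only the relative spike timing of the two neurons a synapse connects:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Backpropagation: weights change based on a globally propagated error ---
x = rng.normal(size=3)            # input activity
W1 = rng.normal(size=(4, 3))      # input -> hidden weights
W2 = rng.normal(size=(2, 4))      # hidden -> output weights
target = np.array([1.0, 0.0])

h = np.tanh(W1 @ x)               # hidden activity
y = W2 @ h                        # output (linear readout)
err = y - target                  # output error

# The error is sent backwards through the transpose of the forward weights:
delta_h = (W2.T @ err) * (1 - h**2)      # chain rule through tanh
lr = 0.1
W2 -= lr * np.outer(err, h)              # update needs the downstream error...
W1 -= lr * np.outer(delta_h, x)          # ...propagated back through W2.T

# --- STDP: weights change based only on local pre/post spike timing ---
t_pre, t_post = 10.0, 14.0        # spike times (ms) of a connected pair
A_plus, A_minus, tau = 0.01, 0.012, 20.0  # toy plasticity parameters
dt = t_post - t_pre
if dt > 0:      # pre fires before post -> potentiate
    dw = A_plus * np.exp(-dt / tau)
else:           # post fires before pre -> depress
    dw = -A_minus * np.exp(dt / tau)
print(dw)       # no error signal, no knowledge of other layers' weights
```

The biological puzzle is visible in the `W2.T` line: for a synapse to update this way, it would somehow need access to errors and weights elsewhere in the network, which is the kind of problem the models we discuss try to get around.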

We read:
Theories of Error Back-Propagation in the Brain
Dendritic solutions to the credit assignment problem
Control of synaptic plasticity in deep cortical networks (we didn't discuss this one)

And mentioned several topics covered in previous episodes:
Reinforcement Learning
Predictive Coding
The Cerebellum
Neuromorphic Computing
Deep Learning



To listen to (or download) this episode, (right) click here



As always, our jazzy theme music "Quirky Dog" is courtesy of Kevin MacLeod (incompetech.com)

1 comment:

Wow, that was super cool. One thing that comes across is that biological learning might be just crazy complex, with dozens of interacting neuromodulators, dendrite types, architectures, and so forth - almost like evolution is a billion year hack of optimization on top of optimization... well, I guess that's why there's so many neuroscience postdocs ;)
