Unsupervised Thinking
a podcast about neuroscience, artificial intelligence and science more broadly

Monday, September 30, 2019

Episode 49: How Important is Learning?

The age-old debate of nature versus nurture is now being played out between artificial intelligence and neuroscience. The dominant approach in AI, machine learning, puts an emphasis on adapting processing to fit the data at hand. Animals, on the other hand, seem to have a lot of built-in structure and tendencies that let them function well right out of the womb. So are most of our abilities the result of genetically encoded instructions, honed over generations of evolution? Or are our interactions with the environment key? We discuss the research that has been done on human brain development to try to get at the answers to these questions. We talk about the compromise position that says animals may be "born to learn"---that is, innate tendencies help make sure the right training data is encountered and used efficiently during development. We also get into what all this means for AI and whether machine learning researchers should be learning less. Throughout, we ask if humans are special, argue that development can happen without learning, and discuss the special place of the octopus in the animal kingdom.

Follow special guest Alex Antrobus on Twitter

We read:
Functional Brain Development in Humans
A critique of pure learning and what artificial neural networks can learn from animal brains
Weight Agnostic Neural Networks

And we mentioned previous episodes/topics:
Global Science
Training and Diversity in Computational Neuroscience
Studying the Brain in Light of Evolution

To listen to (or download) this episode, (right) click here or use the player below

As always, our jazzy theme music "Quirky Dog" is courtesy of Kevin MacLeod (incompetech.com)

Wednesday, August 28, 2019

Episode 48: Studying the Brain in Light of Evolution

The brain is the result of evolution. A lot of evolution. Most neuroscientists don't really think about this fact. Should we? On this episode we talk about two papers---one focused on brains and the other on AI---that argue that following evolution is the path to success. As part of this argument, they make the point that, in evolution, each stage along the way needs to be fully functional, which impacts the shape and role of the brain. As a result, the system is best thought of as a whole---not chunked into perception, cognition and action, as many psychologists and neuroscientists are wont to do. In discussing these arguments, we talk about the role of representations in intelligence, go through a bit of the evolution of the nervous system, and remind ourselves that evolution does not necessarily optimize. Throughout, we ask how this take on neuroscience impacts our own work and try to avoid saying "represents".

We read:
Resynthesizing behavior through phylogenetic refinement
Intelligence without Representation

And we mentioned previous episode topics:
The Concept of Coding
Reinforcement Learning, Biological and Artificial

To listen to (or download) this episode, (right) click here or use the player below


Tuesday, July 30, 2019

Episode 47: Deep Learning to Understand the Brain

The recent advances in deep learning have done more than just make money for startups and tech companies. They've also infiltrated neuroscience! Deep neural networks---models originally inspired by the basics of the nervous system---are finding ever more applications in the quest to understand the brain. We talk about many of those uses in this episode. After first describing more traditional approaches to modeling behavior, we talk about how neuroscientists compare deep net models to real brains using both performance and neural activity. We then get into the attempts by the field of machine learning to understand its own models and how ML and neuroscience can share methods (and maybe certain cultural tendencies). Finally, we talk about the use of deep nets to generate stimuli specifically tailored to drive real neurons to their extremes. Throughout, we notice how deep learning is "complicating the narrative", ask "are deep nets normative models?", and struggle to talk about a topic we actually know about.

We read:
Deep neural network models of sensory systems: windows onto the role of task constraints
Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
Neural population control via deep image synthesis
Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences

And we mentioned previous episodes:
Deep Learning
"Just-So" Stories of Bayesian Modeling in Psychology
Learning Rules, Biological vs. Artificial
Grace has also written a blog on comparing CNNs to the visual system:
Deep Convolutional Neural Networks as Models of the Visual System: Q&A

Finally, for those who get to the end of the episode, these are the images we're talking about (you decide which ones are pleasant and which are creepy AF...):

To listen to (or download) this episode, (right) click here
