Machine learning has been making big strides on a lot of straightforward tasks, such as taking an image and labeling the objects in it. But what if you want an algorithm that can, for example,
generate an image of an object? That's a much vaguer and more difficult request. And it's where generative models come in! We discuss the motivation for building generative models (in addition to making cool images) and how they help us understand the core components of our data. We also get into the specific types of generative models and how they can be trained to create images, text, sound, and more. We then move on to the practical concerns that would arise in a world with good generative models: fake videos of politicians, AI assistants making our phone calls, and computer-generated novels. Finally, we connect these ideas to neuroscience, asking both how neuroscientists can make use of generative models and whether the brain itself is one.
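For the curious: in its most stripped-down form, a generative model just learns a probability distribution over data and then samples from it. The toy Python sketch below (our own illustration, not something from the episode) fits a one-dimensional Gaussian to some "observed" data and draws brand-new samples from the fitted distribution — the same learn-then-sample recipe that fancier models like GANs and VAEs scale up to images and text.

```python
import random
import statistics

# Toy generative model: fit a Gaussian to observed data,
# then sample new "fake" data points from the learned distribution.

random.seed(0)

# Pretend these are measurements we observed in the world.
observed = [random.gauss(5.0, 2.0) for _ in range(1000)]

# "Training": estimate the parameters of the data distribution.
mu = statistics.mean(observed)
sigma = statistics.stdev(observed)

# "Generation": draw brand-new samples from the learned model.
generated = [random.gauss(mu, sigma) for _ in range(5)]

print(f"learned mean={mu:.2f}, std={sigma:.2f}")
print("generated samples:", [round(x, 2) for x in generated])
```

Real generative models replace the Gaussian with far more expressive distributions (typically parameterized by deep networks), but the core idea — learn the distribution of the data, then sample from it — is the same.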
We read:
OpenAI blog
MIT Tech Review Article
(accidentally referred to as Wired article...oops!)
And skimmed/mentioned:
Episode 4 - Deep Learning
Another good overview of some generative models
Google Assistant making phone call
Uses of generative models in neuroscience
Episode 33 - Predictive Coding
And our special guest was
Yann Sweeney!
To listen to (or download) this episode, (right) click
here
As always, our jazzy theme music "Quirky Dog" is courtesy of Kevin MacLeod (incompetech.com)