In Season 2, Episode 22 of the podcast, Geoff Hinton, a pioneer of deep learning, discusses the transformative potential of artificial intelligence, his journey through academia, and his vision for the future of neural networks. Hinton is credited with many of the breakthroughs that drive contemporary AI in fields such as computer vision, speech recognition, machine translation, robotics, medicine, and computational biology.
Hinton's work on neural networks dates back half a century, but the field reached a decisive turning point with the 2012 ImageNet moment, when Hinton and his students showed that deep learning outperformed every other computer vision approach to image recognition. His brain-inspired research played a crucial role in the development of AlexNet, a convolutional neural network that built on Yann LeCun's foundational work.
Hinton explains that while today's engineered approaches to AI, including convolutional networks and transformers, are effective, they may not be biologically plausible. Instead, he suggests that distilling knowledge from one place to another could be a more brain-like alternative. However, how the neocortex uses spiking neurons remains poorly understood, and unlocking that mystery could lead to far more energy-efficient hardware.
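For readers who want a concrete picture of distillation in its standard form, here is a minimal sketch of the usual recipe: a "student" network is trained to match the softened output distribution of a "teacher" while still fitting the true labels. The function name and the hyperparameters T and alpha below are illustrative choices, not the specific brain-inspired variant Hinton describes in the episode.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-target term that pulls the
    student's softened predictions toward the teacher's (illustrative sketch)."""
    # Soften both distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between the softened distributions; the T*T factor keeps
    # the gradient scale of this term comparable as the temperature changes.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Ordinary supervised loss on the true labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```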
During the podcast, Hinton delves into his transition from academia to Google, prompted by his frustration with how his university handled the financial arrangements around his Coursera course. He also shares his thoughts on the future of neural networks, envisioning "mortal computers": devices that are grown, learn on their own, use very little energy, and pass along their knowledge before they expire.
Hinton believes that systems with many parameters, tuned by gradient descent on a sensible objective function, can exhibit remarkable emergent properties, as programs like GPT-3 show. He also discusses his paper "GLOM," which proposes a way to represent part-whole hierarchies in neural networks and to connect them with symbolic computation, and he argues that GPT-3's behavior reflects a degree of understanding that goes beyond simple symbol-string processing. Additionally, Hinton touches on the idea of "Student Beats Teacher" and on t-SNE, a technique he co-developed for visualizing high-dimensional data.
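To illustrate what t-SNE is typically used for, here is a short sketch using scikit-learn's implementation; the dataset and parameter settings are illustrative, not taken from the episode.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# A small high-dimensional dataset: 8x8 digit images, 64 features each.
digits = load_digits()

# Project the 64-dimensional points into 2-D for visualization.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(digits.data)

# Points from the same digit class tend to form visible clusters.
plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, cmap="tab10", s=8)
plt.title("t-SNE embedding of the digits dataset")
plt.show()
```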
Overall, this podcast offers an engaging and insightful exploration of the past, present, and future of artificial intelligence and deep learning, as seen through the eyes of one of its most influential pioneers, Geoff Hinton.