
London Futurists
Building brain-like AIs, with Alexander Ororbia
Dec 9, 2024
In this engaging discussion, Alexander Ororbia, a professor at the Rochester Institute of Technology and head of the Neural Adaptive Computing Laboratory, shares fresh insights on enhancing AI capabilities beyond mere scaling. He contrasts the efficiencies of biological systems with current AI, proposing biomimetic techniques to address challenges like sparse rewards and catastrophic forgetting. Delving into concepts like 'mortal computation' and the intricacies of consciousness, Ororbia advocates for neuromorphic computing, inviting new perspectives on artificial sentience and cognition.
48:18
Podcast summary created with Snipd AI
Quick takeaways
- Alexander Ororbia emphasizes that enhancing AI capabilities may require exploring dynamic computational architectures rather than just scaling existing models and data.
- The podcast discusses how neuromorphic computing can provide energy-efficient solutions by mimicking the structure and functions of biological neural networks.
Deep dives
The Quest for Improved AI
Improving AI systems is often assumed to depend solely on scaling existing technologies, such as increasing training data and model sizes. However, some researchers, including Alexander Ororbia, argue that alternative methodologies are worth exploring. Ororbia's background in complex creative systems and self-organizing dynamics has led him to investigate how dynamic and stateful computational architectures can enhance AI capabilities. This approach emphasizes drawing on the biological principles that govern memory and identity, suggesting that AI can benefit from a more nuanced framework rather than merely expanding current models.