Sonia Joseph, a graduate student at MILA who specializes in applying machine learning to neuroscience, dives into the vibrant worlds of NFTs and Web3. She discusses the hurdles of Ethereum's gas fees while highlighting platforms like Polygon. Sonia shares her insights on AI's influence on identity and memory, and critiques the orthogonality thesis and its implications for AI safety. The conversation veers into philosophical territory as she reflects on the intersection of technology and human aspiration, pondering the very meaning of life.
Podcast summary created with Snipd AI
Quick takeaways
Sonia Joseph highlights the innovative potential of NFTs as tools for community engagement and funding within creative projects in Web3.
She discusses the asymmetric upside risk of founding companies in the Web3 space, emphasizing the remarkable opportunities amidst high uncertainty and low competition.
Sonia advocates for a diverse discourse in AI safety, suggesting that incorporating varied perspectives can lead to better innovation and understanding of AI risks.
Deep dives
Sonia's Background and Current Projects
Sonia is a graduate student studying multi-agent reinforcement learning and its connections to the brain, alongside her involvement in the Web3 space, where she is in the process of founding a company. Her interests converge in her writing, where she synthesizes thoughts on AI safety and the role of mathematics in understanding the mind. Sonia emphasizes the innovative potential of NFTs in this context, exploring how they can crowdsource funding and enhance engagement within creative projects.
Experiments with NFTs and the Web3 Ecosystem
Sonia discusses her experiments with NFTs and how different platforms, such as mirror.xyz, facilitate their integration into creative works. She acknowledges the challenges posed by gas fees when interacting with Ethereum but expresses an interest in exploring the potential impact of NFTs beyond traditional art sales. Sonia believes that the narratives surrounding NFTs often miss their fundamental technological capabilities, which she views as powerful tools for self-expression and community building. This experimentation provides valuable insights into the new economic structures emerging within the Web3 landscape.
Asymmetric Upside Risk in Web3 Ventures
Sonia shares her perspective on the asymmetric upside risk of founding companies in the Web3 space, highlighting the rewards that can arise from emerging sectors. She sees an opportunity for innovation due to the high uncertainty and limited competition in this rapidly evolving field. The conversation touches on the dynamics of business risk: depending on one's life stage, the consequences of failure can be recoverable, while success can be groundbreaking. This weighing of risk against reward is crucial for understanding the motivations behind her entrepreneurial pursuits.
AI and Human Intelligence
Sonia articulates her vision of creating human-level general intelligence through the lens of reinforcement learning, emphasizing how studying the brain's mechanisms can inform AI advancements. She delves into the nuances of intelligence, contemplating how goal selection factors into what it means to be intelligent. The discussion contrasts traditional views on safe AI development with her own hypothesis that understanding human behavior can lead to a more beneficial AI landscape. Her exploration leans toward a convergence of human cognitive functions with advanced AI capabilities, raising questions about both safety and agency in AI development.
Cultural Shifts in AI Safety Narratives
Sonia expresses a desire for a more ideologically diverse discourse within the AI safety community, emphasizing the need for varied perspectives. She critiques the existing paradigm as too narrow and disconnected from wider academic and societal conversations, proposing that incorporating insights from the financial and governmental sectors could lead to a richer understanding of AI safety. Sonia advocates for an expansive approach to AI safety research, suggesting that a multitude of viewpoints can drive innovation and better inform safety protocols. This cultural shift could foster a more nuanced understanding of both the risks and potentials of AI.
Sonia is a graduate student applying ML to neuroscience at MILA. She previously applied deep learning to neural data at Janelia, worked as an NLP research engineer at a startup, and graduated in computational neuroscience from Princeton University.