Machine Learning Street Talk (MLST)

Gary Marcus' keynote at AGI-24

Aug 17, 2024
In this discussion, Gary Marcus, a renowned professor and AI expert, critiques the current state of large language models and generative AI, highlighting their unreliability and tendency to hallucinate. He argues that merely scaling data won't lead us to AGI and proposes a hybrid AI approach that integrates deep learning with symbolic reasoning. Marcus voices concerns about the ethical implications of AI deployment and predicts a potential 'AI winter' due to overhyped technologies and inadequate regulation, emphasizing the necessity for deeper conceptual understanding in AI.
30:38

Podcast summary created with Snipd AI

Quick takeaways

  • Gary Marcus critiques the unreliability of large language models, emphasizing their inability to understand fundamental concepts like space and time.
  • The conversation highlights the financial struggles faced by AI companies, which must establish viable business models to sustain their operations amid inflated valuations.

Deep dives

Diverse Risks of AI Management

The conversation underscores the multifaceted risks associated with artificial intelligence, emphasizing that no single solution addresses them all. It advocates an agile approach to managing AI as the field evolves, since breakthroughs can arrive at any time even when current technology appears to have stagnated. The need for transparency is also highlighted, particularly around the training data used for AI models: without clear accountability, bias and errors cannot be meaningfully mitigated. As AI technology progresses, continuous adaptability and oversight are therefore necessary to navigate its growing complexity.
