In a captivating discussion with Jad Tarifi, founder of Integral AI and former chief of Google's first Generative AI team, we explore Japan's surprising advantages in the AI race. Jad shares insights on why robotics is crucial for AI advancement, critiques the humanoid robot trend, and offers strategies for startups to effectively compete against major players. He also delves into the philosophical aspects of AGI development, the alignment problem, and the importance of ensuring AI aligns with human values for a benevolent future.
AI Snips
INSIGHT
Integral AI's Approach
Integral AI's approach to generative AI models is inspired by the neocortex's hierarchical design.
Different layers produce abstractions that feed into higher layers, which create abstractions of abstractions, increasing efficiency.
INSIGHT
Test-Time Scaling
LLMs face scaling issues where increased resources yield diminishing returns.
Jad Tarifi suggests exploring "test-time scaling," where models generate their own high-quality data through reasoning and planning.
ADVICE
Finding Talent
Be open-minded about location when seeking engineering talent.
Jad Tarifi found underappreciated talent in Tokyo and emphasizes the value of high energy and integrity over experience in startups.
Japan is lagging behind in AI, but that might not be the case for long.
Today we sit down with Jad Tarifi, current founder of Integral AI and previously, founder of Google’s first Generative AI team, and we talk about some of Japan's potential advantages in AI, the most likely path to AGI, and how small AI startups can compete against the over-funded AI giants.
It's a great conversation, and I think you'll enjoy it.
Show Notes
Why Jad felt Google was not pursuing the best path toward AGI
The fundamental AI scaling problem and likely solutions
Why robotics is critical for the advancement of AI (and not the other way around)
Why Japan is the ideal place to build a new AI startup
The reason it is so difficult for robotics startups to make money
Why humanoid robots are a dead-end
How AI startups can compete with the foundation-model companies
How we get to AGI from our current AI
Solutions to the alignment problem
The challenge of making AI fundamentally benevolent
The biggest challenge in AI development is not technological
Links from our Guest
Everything you ever wanted to know about Integral AI
Stream product announcement
Follow Jad on X @jad_tarifi
Friend him on Facebook
Connect on LinkedIn
Check out Jad's new book The Rise of Superintelligence
... and the companion Freedom Series website
Transcript
Welcome to Disrupting Japan, Straight Talk from Japan's most innovative founders and VCs.
I'm Tim Romero and thanks for joining me.
Japan is lagging behind in AI, but that was not always the case. And it won't necessarily be the case in the future.
Today we sit down with Jad Tarifi, current founder of Integral AI, and previously founder of Google's first generative AI team. We talk about his decision to leave Google after over a decade of groundbreaking research to focus on what he sees as a better, faster path to AGI, or artificial general intelligence, and then to superintelligence.
It's a fascinating discussion that begins very practically and gets more and more philosophical as we go on.
We talk about the key role robotics has to play in reaching AGI, how to leverage the overlooked AI development talent here in Japan, how small startups can compete against today's AI giants, and then how we can live with AI and keep our interests aligned.
And at the end, one important thing Elon Musk shows us about our relationship to AI. And I guarantee it's not what you think it is, and certainly not what Elon thinks it is.
But you know, Jad tells that story much better than I can. So, let's get right to the interview.
Interview
Tim: I am sitting here with Jad Tarifi, founder of Integral AI, so thanks for sitting down with me.
Jad: Thank you.
Tim: Integral AI, you guys are "unlocking scalable, robust general intelligence." Now that's a pretty big claim, so let's break that down. What exactly are you guys doing?
Jad: So, when we look at generative AI models right now, they usually operate as a black box. And because they have minimal assumptions on the data, they have to do a lot of work and they tend to be inefficient in terms of the amount of data they need and the amount of compute. We're taking a different approach that's inspired by the architecture of the neocortex, which roughly speaking follows a hierarchical design where different layers produce abstractions and then feed into higher layers that create abstractions of abstractions and so on.
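[Editor's note: To make the "abstractions of abstractions" idea concrete, here is a minimal, hypothetical sketch. It is not Integral AI's actual architecture; each layer is just a random linear projection standing in for any learned encoder, such as a transformer or diffusion model.]

```python
# Hypothetical sketch of a hierarchical "abstractions of abstractions" stack.
# Illustrative only -- NOT Integral AI's architecture. Each layer compresses
# the representation from the layer below, so higher layers operate on
# progressively more abstract, lower-dimensional summaries.

import numpy as np


class AbstractionLayer:
    def __init__(self, in_dim: int, out_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stand-in "encoder": project to a smaller space, i.e. abstract away detail.
        self.weights = rng.normal(size=(in_dim, out_dim)) / np.sqrt(in_dim)

    def encode(self, x: np.ndarray) -> np.ndarray:
        return np.tanh(x @ self.weights)


class HierarchicalModel:
    def __init__(self, dims: list[int]):
        # dims = [raw_input_dim, abstraction_1_dim, abstraction_2_dim, ...]
        self.layers = [
            AbstractionLayer(d_in, d_out, seed=i)
            for i, (d_in, d_out) in enumerate(zip(dims, dims[1:]))
        ]

    def abstractions(self, x: np.ndarray) -> list[np.ndarray]:
        """Return the representation at every level of the hierarchy."""
        levels = [x]
        for layer in self.layers:
            levels.append(layer.encode(levels[-1]))
        return levels


if __name__ == "__main__":
    # Raw input (e.g. a flattened image patch) is progressively compressed.
    model = HierarchicalModel([1024, 256, 64, 16])
    raw = np.random.default_rng(42).normal(size=1024)
    for i, level in enumerate(model.abstractions(raw)):
        print(f"level {i}: dimension {level.shape[0]}")
```

The efficiency argument in this picture is that each level sees only the compressed output of the level below it, so higher layers never re-process raw data.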
Tim: Okay, so this is not an LLM architecture or is this a kind of LLM architecture?
Jad: When people talk about LLMs, usually they talk about autoregressive transformer networks. So this would be a different type of architecture than that. However, we can use transformers or other models like diffusion models as building blocks within that overall architecture.
Tim: It's interesting that you took a different path than LLMs because you're not new to AI.