
How AI startups can compete with the AI giants
Disrupting Japan
Innovating AI: Neocortex-Inspired Approaches
This chapter explores a startup's novel methods in generative AI, inspired by the structure of the neocortex, aiming to enhance efficiency and real-world applications. It discusses the evolution of generative AI, addressing challenges in computational needs and ethical data use while highlighting the potential for self-improvement in AI models.
Episode notes
Japan is lagging behind in AI, but that might not be the case for long.
Today we sit down with Jad Tarifi, current founder of Integral AI and previously founder of Google's first Generative AI team, and we talk about some of Japan's potential advantages in AI, the most likely path to AGI, and how small AI startups can compete against the over-funded AI giants.
It's a great conversation, and I think you'll enjoy it.
Show Notes
Why Jad felt Google was not pursuing the best path toward AGI
The fundamental AI scaling problem and likely solutions
Why robotics is critical for the advancement of AI (and not the other way around)
Why Japan is the ideal place to build a new AI startup
The reason it is so difficult for robotics startups to make money
Why humanoid robots are a dead-end
How AI startups can compete with the foundation-model companies
How we get to AGI from our current AI
Solutions to the alignment problem
The challenge of making AI fundamentally benevolent
The biggest challenge in AI development is not technological
Links from our Guest
Everything you ever wanted to know about Integral AI
Stream product announcement
Follow Jad on X @jad_tarifi
Friend him on Facebook
Connect on LinkedIn
Check out Jad's new book The Rise of Superintelligence
... and the companion Freedom Series website
Transcript
Welcome to Disrupting Japan, Straight Talk from Japan's most innovative founders and VCs.
I'm Tim Romero and thanks for joining me.
Japan is lagging behind in AI, but that was not always the case. And it won't necessarily be the case in the future.
Today we sit down with Jad Tarifi, current founder of Integral AI, and previously founder of Google's first generative AI team. We talk about his decision to leave Google after over a decade of groundbreaking research to focus on what he sees as a better, faster path to AGI, or artificial general intelligence, and then to superintelligence.
It's a fascinating discussion that begins very practically and gets more and more philosophical as we go on.
We talk about the key role robotics has to play in reaching AGI, how to leverage the overlooked AI development talent here in Japan, how small startups can compete against today's AI giants, and then how we can live with AI and keep our interests aligned.
And at the end, one important thing Elon Musk shows us about our relationship to AI. And I guarantee it's not what you think it is, and certainly not what Elon thinks it is.
But you know, Jad tells that story much better than I can. So, let's get right to the interview.
Interview
Tim: I am sitting here with Jad Tarifi, founder of Integral AI, so thanks for sitting down with me.
Jad: Thank you.
Tim: Integral AI, you guys are “unlocking scalable, robust general intelligence.” Now that's a pretty big claim, so let's break that down. What exactly are you guys doing?
Jad: So, when we look at generative AI models right now, they usually operate as a black box. And because they have minimal assumptions on the data, they have to do a lot of work and they tend to be inefficient in terms of the amount of data they need and the amount of compute. We're taking a different approach that's inspired by the architecture of the neocortex, which roughly speaking follows a hierarchical design where different layers produce abstractions and then feed into higher layers that create abstractions of abstractions and so on.
Tim: Okay, so this is not an LLM architecture or is this a kind of LLM architecture?
Jad: When people talk about LLMs, they usually mean autoregressive transformer networks. So this would be a different type of architecture than that. However, we can use transformers or other models, like diffusion models, as building blocks within that overall architecture.
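(Editor's note: a minimal sketch of the hierarchy Jad describes, assuming nothing about Integral AI's actual implementation. Each layer compresses several lower-level tokens into one higher-level abstraction, and the random linear "mixing" step is a placeholder for a real building block such as a transformer or diffusion module.)

# Illustrative sketch only -- not Integral AI's architecture. It shows the
# general idea of a hierarchy where each layer turns many lower-level tokens
# into fewer, higher-level "abstractions", which the next layer abstracts again.
import numpy as np

rng = np.random.default_rng(0)

class AbstractionLayer:
    """One level of the hierarchy: project tokens, then pool them into abstractions.

    The projection stands in for whatever building block sits at this level
    (a transformer block, a diffusion model, ...); here it is a random linear map.
    """
    def __init__(self, dim_in: int, dim_out: int, window: int):
        self.w = rng.standard_normal((dim_in, dim_out)) / np.sqrt(dim_in)
        self.window = window  # how many lower-level tokens fuse into one abstraction

    def __call__(self, x: np.ndarray) -> np.ndarray:
        h = np.tanh(x @ self.w)                        # (seq_len, dim_out)
        seq_len = (h.shape[0] // self.window) * self.window
        h = h[:seq_len].reshape(-1, self.window, h.shape[1])
        return h.mean(axis=1)                          # (seq_len // window, dim_out)

# Three levels: raw features -> abstractions -> abstractions of abstractions.
hierarchy = [
    AbstractionLayer(16, 32, window=4),
    AbstractionLayer(32, 64, window=4),
    AbstractionLayer(64, 128, window=4),
]

x = rng.standard_normal((64, 16))  # 64 low-level "tokens" (e.g. sensor frames)
for level, layer in enumerate(hierarchy, start=1):
    x = layer(x)
    print(f"level {level}: {x.shape[0]} tokens of dim {x.shape[1]}")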
Tim: It's interesting that you took a different path than LLMs because you're not new to AI.