

Building Open Infrastructure for AI with Illia Polosukhin
Jul 17, 2025
Illia Polosukhin, a veteran AI researcher and co-author of the Transformer paper, discusses his journey toward open-source AI at NEAR AI. He delves into the proliferation of user-owned AI and the ethical dimensions of combining AI with blockchain technology. Polosukhin highlights the importance of decentralized marketplaces, trusted execution environments, and secure GPU inference for maintaining privacy. He also critiques conventional AI hosting models, advocating for community-driven solutions to enhance transparency and security in AI infrastructure.
Origins of the Transformer Model
- Illia Polosukhin explained how the Transformer idea arose from the need to speed up NLP models by processing an entire text in parallel rather than word by word.
- He described the bottleneck of slow recurrent neural networks and the motivation for creating the Transformer architecture.
AI Governance and Safety Risks
- Illia Polosukhin emphasized the governance and safety risks in AI, including the game-theoretic incentives for labs to sabotage each other.
- He also highlighted the risk of data biases and malicious data poisoning in model training.
Concept of User-Owned AI
- Illia introduced the concept of user-owned AI, which centers the model's meta-objective on optimizing for users' privacy and success.
- This approach uses blockchain to coordinate participants, make data biases transparent, and preserve user privacy.