Vasek Mlejnsky from E2B joins us today to talk about sandboxes for AI agents. In the last 2 years, E2B has grown from a handful of developers building on it to being used by ~50% of the Fortune 500, spinning up millions of sandboxes each week for their customers. As the “death of chat completions” approaches, LLM workflows and agents are relying more and more on tool usage and multi-modality.
The most common use cases for their sandboxes:
- Running data analysis and charting (like Perplexity)
- Executing arbitrary code generated by the model (like Manus does; see the sketch after this list)
- Running evals on code generation (see LMArena Web)
- Doing reinforcement learning for code capabilities (like Hugging Face)
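As a rough illustration of the second use case, here is a minimal sketch of executing model-generated code inside an E2B sandbox with the e2b-code-interpreter Python SDK. The `Sandbox`, `run_code`, and `kill` names follow one version of the SDK's documented interface and may differ in the release you use, and the "LLM-generated" snippet is a hard-coded stand-in; treat this as an assumption-laden sketch, not official E2B sample code.

```python
# pip install e2b-code-interpreter  (expects E2B_API_KEY in the environment)
from e2b_code_interpreter import Sandbox

# Stand-in for code returned by an LLM; in practice this would come
# straight out of a model response.
llm_generated_code = """
import math
print(sum(math.sqrt(n) for n in range(10)))
"""

# Each Sandbox() provisions an isolated microVM, so arbitrary model
# output never runs on the host machine.
sandbox = Sandbox()
try:
    execution = sandbox.run_code(llm_generated_code)  # assumed run_code API
    print(execution.logs.stdout)  # stdout captured from the sandboxed run
finally:
    sandbox.kill()  # tear the sandbox down when done
```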
Full Video Episode
Timestamps
00:00:00 Introductions
00:00:37 Origin of DevBook -> E2B
00:02:35 Early Experiments with GPT-3.5 and Building AI Agents
00:05:19 Building an Agent Cloud
00:07:27 Challenges of Building with Early LLMs
00:10:35 E2B Use Cases
00:13:52 E2B Growth vs Model Capabilities
00:15:03 The LLM Operating System (LLMOS) Landscape
00:20:12 Breakdown of JavaScript vs Python Usage on E2B
00:21:50 AI VMs vs Traditional Cloud
00:26:28 Technical Specifications of E2B Sandboxes
00:29:43 Usage-based billing infrastructure
00:34:08 Pricing AI on Value Delivered vs Token Usage
00:36:24 Forking, Checkpoints, and Parallel Execution in Sandboxes
00:39:18 Future Plans for Toolkit and Higher-Level Agent Frameworks
00:42:35 Limitations of Chat-Based Interfaces and the Future of Agents
00:44:00 MCPs and Remote Agent Capabilities
00:49:22 LLMs.txt, scrapers, and bad AI bots
00:53:00 Manus and Computer Use on E2B
00:55:03 E2B for RL with Hugging Face
00:56:58 E2B for Agent Evaluation on LMArena
00:58:12 Long-Term Vision: E2B as Full Lifecycle Infrastructure for LLMs
01:00:45 Future Plans for Hosting and Deployment of LLM-Generated Apps
01:01:15 Why E2B Moved to San Francisco
01:05:49 Open Roles and Hiring Plans at E2B
Get full access to Latent.Space at
www.latent.space/subscribe