We've spent decades teaching ourselves to communicate with computers via text and clicks. Now, computers are learning to perceive the world the way we do: through sight and sound. What happens when software needs to sense, interpret, and act in real time using voice and vision?
This week, Andrew sits down with Russ d'Sa, Co-founder and CEO of LiveKit, whose technology provides the infrastructure that lets machines interact through real-time voice and vision, powering everything from ChatGPT's voice mode to critical 911 response systems.
Explore the shift from text-based protocols to rich, real-time data streams. Russ discusses LiveKit's role in this evolution, the implications of AI gaining sensory input, the trajectory from co-pilots to autonomous agents, and the unique hurdles engineers face when building for a world beyond simple text transfer.
OFFERS
- Start Free Trial: Get started with LinearB's AI productivity platform for free.
- Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.
LEARN ABOUT LINEARB
- AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
- AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
- AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
- MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.