Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

Latest episodes

Feb 1, 2024 • 58min

Why StackOverflow usage is down 50% — with David Hsu of Retool

David Hsu, co-founder of Retool, discusses the history of Retool, its growth, and how the company reached $1M ARR with three employees. They delve into incorporating AI into internal software, building and partnering in the software development space, AI survey results, the current adoption rate of AI, and analogies between AI and the concept of AGI.
Jan 25, 2024 • 1h 8min

The Four Wars of the AI Stack (Dec 2023 Audio Recap)

The Four Wars of the AI stack: data quality, GPU rich vs poor, multimodality, and RAG/Ops war. Selection process for the four wars and notable mentions. The end of low background tokens and the impact on data engineering. The Quality Data Wars; synthetic data. The GPU Rich/Poors War; Anyscale benchmark drama. Transformer alternatives and why they matter. The Multimodality Wars; Multiverse vs Metaverse. The RAG/Ops Wars; will frameworks expand up or will cloud providers expand down? Syntax to Semantics. Outer Loop vs Inner Loop.
Jan 19, 2024 • 1h 12min

How to train your own Large Multimodal Model — with Hugo Laurençon & Leo Tronchon of HuggingFace M4

Hugo Laurençon and Leo Tronchon of HuggingFace M4 discuss training large multimodal models, the challenges of working with video data, image resolution in OCR tasks, and the importance of creating deduplication rules for the industry.
Jan 11, 2024 • 1h 26min

RLHF 201 - with Nathan Lambert of AI2 and Interconnects

The podcast episode features Dr. Nathan Lambert, an expert in robotics and reinforcement learning. Topics include popular AI article topics, the evolution of preference modeling in language models, instruction tuning in RLHF, synthetic data and human labeling preferences, the release of toxicity data, transitioning to release models and AI policy, the comparison of GPT models, and retraining models and evaluation tools.
Jan 5, 2024 • 1h 4min

The Accidental AI Canvas - with Steve Ruiz of tldraw

In this episode, the host interviews Steve Ruiz, founder of tldraw, discussing his transition from fine art to product design. They cover topics like the evolution of tldraw, parallel prompting techniques, OCR text extraction, viral trends, and the potential of AI-assisted virtual systems.
Dec 30, 2023 • 2h 42min

NeurIPS 2023 Recap — Top Startups

Startups featured in the podcast discuss topics such as AI models, NeurIPS conference, reverse engineering, training models, synthetic data generation, Perplexity's success, hard work and tenacity, building communities in AI, networking at conferences, large Arabic language models, Vauxhall's toolkit for AI engineers, and collaborations with Google Cloud.
Dec 23, 2023 • 3h 20min

NeurIPS 2023 Recap — Best Papers

Hosts recap the NeurIPS 2023 conference, discussing best papers and influential topics such as direct preference optimization for language models, scaling data-constrained language models, developing a visual intelligent assistant, understanding bounding boxes with GPT-4, and using Toolformer to improve language models. They also explore using GPT-4 to play Minecraft, evaluating cognitive capacities through diverse tasks, analyzing language models' performance in planning tasks, and the impact of foundation models on AI systems.
Dec 20, 2023 • 59min

The AI-First Graphics Editor - with Suhail Doshi of Playground AI

Suhail Doshi, Co-founder of Playground AI, discusses their image editor reimagined with AI in mind, featuring real-time preview rendering, style filtering, and prompt tuning. The podcast also explores topics like networking and AI in web browsers, AI artists and model evaluation, graphics tools for art generation, challenges in scaling and model optimization, modalities in AI, and the value of hands-on learning in AI projects.
Dec 14, 2023 • 1h 20min

The "Normsky" architecture for AI coding agents — with Beyang Liu + Steve Yegge of Sourcegraph

Beyang Liu and Steve Yegge from Sourcegraph discuss code indexing, retrieval interfaces, and their SOTA 30% completion acceptance rate in Cody. They talk about the history of code search, RAG, fine tuning, data privacy, DSLs and LLMs, and the challenges and potential of AI coding agents.
Dec 8, 2023 • 1h 4min

The Busy Person's Intro to Finetuning & Open Source AI - Wing Lian, Axolotl

Wing Lian, maintainer of Axolotl, talks about fine-tuning open source AI models and its purposes. They discuss evaluating different AI models, the challenges of larger models, and whether to fine-tune before or after RLHF. The importance of rules, documentation, and optimizing training data is emphasized, along with Axolotl's roadmap and vision for becoming a developer-first platform.
