
Machine Learning Street Talk (MLST)

What’s the Magic Word? A Control Theory of LLM Prompting.

Jun 5, 2024
01:17:07
Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto discuss their groundbreaking paper on controlling language models. They explore how prompts can significantly influence model outputs, highlighting the importance of prompt engineering. Their work suggests that control theory concepts could lead to more reliable and capable language models.

Podcast summary created with Snipd AI

Quick takeaways

  • Prompt engineering significantly impacts Language Model outputs.
  • Applying control theory to LLMs enhances model reliability.

Deep dives

Adversarial Inputs and Adversarial Examples in Language Models

Adversarial inputs for humans differ from those for language models (LLMs), suggesting that what the two treat as meaningful diverges. This observation comes from treating LLMs as dynamical systems and analyzing them with control theory, which exposes how large the space of reachable outputs is. Despite the common assumption that techniques like fine-tuning shrink this reachable set, their studies show it is more extensive than anticipated, and seemingly chaotic adversarial prompts are one way this manifests.
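
The reachability framing can be made concrete with a toy experiment: fix a context and a target next token, then ask whether any short control prompt steers the model's greedy prediction to that target. The sketch below assumes a small HuggingFace causal LM (gpt2 here) and a crude random search; the model name, prompt length, search budget, and helper functions are illustrative assumptions, not the authors' actual method.

```python
# Toy reachability check: does some short control prompt, prepended to a
# fixed context, make the target token the model's greedy next prediction?
# Random search over random token sequences is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"      # assumed small causal LM
PROMPT_LEN = 3           # length k of the control prompt (assumption)
NUM_CANDIDATES = 200     # random-search budget (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def greedy_next_token(input_ids: torch.Tensor) -> int:
    """Return the argmax next-token id for the given input ids."""
    with torch.no_grad():
        logits = model(input_ids).logits
    return int(logits[0, -1].argmax())

def is_reachable(context: str, target_token: str) -> bool:
    """Crude check: does any random control prompt of PROMPT_LEN tokens,
    prepended to the context, steer the greedy next token to the target?
    Only the first token of target_token's encoding is used."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    target_id = tokenizer(target_token, add_special_tokens=False).input_ids[0]
    vocab_size = model.config.vocab_size
    for _ in range(NUM_CANDIDATES):
        control = torch.randint(0, vocab_size, (1, PROMPT_LEN))
        input_ids = torch.cat([control, context_ids], dim=1)
        if greedy_next_token(input_ids) == target_id:
            return True
    return False

if __name__ == "__main__":
    # Illustrative query: can a 3-token prompt push the continuation of
    # "The capital of France is" toward the token " Tokyo"?
    print(is_reachable("The capital of France is", " Tokyo"))
```

Finding that unlikely targets become reachable with surprisingly short prompts is the sense in which the reachable set is "more extensive than anticipated"; the paper replaces this random search with more systematic prompt optimization.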
