The Value of Formal Verification
"I want this to be like one tool that everyone has access to, like writing unit tests. And I'd like it to be no harder than writing unit tests," he says. "There's a lot of software where there can be really catastrophic like errors that can really harm like people."
In episode 74 of The Gradient Podcast, Daniel Bashir speaks to Professor Talia Ringer.
Professor Ringer is an Assistant Professor with the Programming Languages, Formal Methods, and Software Engineering group at the University of Illinois Urbana-Champaign. Their research leverages proof engineering to allow programmers to more easily build formally verified software systems.
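To make the unit-test analogy concrete (an illustrative sketch, not an example from the episode): a unit test checks a property on one input, while a formal proof establishes it for every input. In Lean 4, using the standard library lemma `List.reverse_reverse`:

```lean
-- A unit test exercises a single input; rfl checks it by computation:
example : ([1, 2, 3] : List Nat).reverse.reverse = [1, 2, 3] := rfl

-- A formal proof covers every input at once,
-- discharged here by the existing library lemma List.reverse_reverse:
example (xs : List Nat) : xs.reverse.reverse = xs := List.reverse_reverse xs
```

Here the proof is a one-liner because the lemma already exists; the accessibility problem Ringer describes is that most real proofs take far more effort than the tests they generalize.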
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Daniel’s long annoying intro
* (02:15) Origin Story
* (04:30) Why / when formal verification is important
* (06:40) Concerns about ChatGPT/AutoGPT et al. failures, systems for accountability
* (08:20) Difficulties in making formal verification accessible
* (11:45) Tactics and interactive theorem provers, interface issues
* (13:25) How Prof Ringer’s research first crossed paths with ML
* (16:00) Concrete problems in proof automation
* (16:15) How ML can help people verifying software systems
* (20:05) Using LLMs for understanding / reasoning about code
* (23:05) Going from tests / formal properties to code
* (31:30) Is deep learning the right paradigm for dealing with relations for theorem proving?
* (36:50) Architectural innovations, neuro-symbolic systems
* (40:00) Hazy definitions in ML
* (41:50) Baldur: Proof Generation & Repair with LLMs
* (45:55) In-context learning’s effectiveness for LLM-based theorem proving
* (47:12) LLMs without fine-tuning for proofs
* (48:45) Something ~ surprising ~ about Baldur results (maybe clickbait or maybe not)
* (49:32) Asking models to construct proofs with restrictions, translating proofs to formal proofs
* (52:07) Methods of proofs and relative difficulties
* (57:45) Verifying / providing formal guarantees on ML systems
* (1:01:15) Verifying input-output behavior and basic considerations, nature of guarantees
* (1:05:20) Certified/verified systems vs. certifying/verifying systems: getting LLMs to spit out proofs along with code
* (1:07:15) Interpretability and how much model internals matter, RLHF, mechanistic interpretability
* (1:13:50) Levels of verification for deploying ML systems, HCI problems
* (1:17:30) People (Talia) actually use Bard
* (1:20:00) Dual-use and “correct behavior”
* (1:24:30) Good uses of jailbreaking
* (1:26:30) Talia’s views on evil AI / AI safety concerns
* (1:32:00) Issues with talking about “intelligence,” assumptions about what “general intelligence” means
* (1:34:20) Difficulty in having grounded conversations about capabilities, transparency
* (1:39:20) Great quotation to steal for your next thinkpiece + intelligence as socially defined
* (1:42:45) Exciting research directions
* (1:44:48) Outro
Links:
* Talia’s Twitter and homepage
* Research
* Concrete Problems in Proof Automation
* Baldur: Whole-Proof Generation and Repair with LLMs