The AI Fundamentalists

Dr. Andrew Clark & Sid Mangalik
Jul 22, 2025 • 37min

AI governance: Building smarter AI agents from the fundamentals, part 4

Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.

Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a model that is 90% accurate per step is only about 66% accurate over four steps (0.9⁴ ≈ 0.656; see the sketch after these notes)
• Two-way information flow creates new security and confidentiality vulnerabilities. For example, targeted prompting to improve awareness comes at the cost of performance. (arXiv, May 24, 2025)
• Traditional governance approaches are insufficient for the complexity of agentic systems
• Organizations must implement granular monitoring, logging, and validation for each component
• Human-in-the-loop oversight is not a substitute for robust governance frameworks
• The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise

Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.

What we're reading:
We took a reading "break" this episode to celebrate Sid! This month, he successfully defended his Ph.D. thesis, "Psychological Health and Belief Measurement at Scale Through Language." Say congrats!

What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
• LinkedIn - Episode summaries, shares of cited articles, and more.
• YouTube - Was it something that we said? Good. Share your favorite quotes.
• Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
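The compounding figure in the notes is just independent probabilities multiplying. A minimal sketch of the arithmetic, assuming each step succeeds or fails independently (which real pipelines only approximate):

```python
# Per-step accuracy compounds multiplicatively across a multi-step
# agentic pipeline, assuming steps fail independently.

def end_to_end_accuracy(per_step_accuracy: float, n_steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return per_step_accuracy ** n_steps

for steps in range(1, 5):
    print(f"{steps} step(s): {end_to_end_accuracy(0.90, steps):.1%}")
# 1 step(s): 90.0%
# 2 step(s): 81.0%
# 3 step(s): 72.9%
# 4 step(s): 65.6%
```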
Jul 8, 2025 • 30min

Linear programming: Building smarter AI agents from the fundamentals, part 3

We continue our series about building agentic AI systems from the ground up and for desired accuracy. In this episode, we explore linear programming and optimization methods that enable reliable decision-making within constraints.

Show notes:
• Linear programming allows us to solve problems with multiple constraints, like finding optimal flights that meet budget requirements (see the sketch after these notes)
• The Lagrange multiplier method helps find optimal solutions within constraints by reformulating utility functions
• Combinatorial optimization handles discrete choices, like selecting specific flights, rather than continuous variables
• Dynamic programming techniques break complex problems into manageable subproblems to find solutions efficiently
• Mixed integer programming combines continuous variables (like budget) with discrete choices (like flights)
• Neurosymbolic approaches potentially offer conversational interfaces with the reliability of mathematical solvers
• Unlike pattern-matching LLMs, mathematical optimization guarantees solutions that respect user constraints

Make sure you check out Part 1: Mechanism design and Part 2: Utility functions. In the next episode, we'll pull together all of the components from these three episodes to demonstrate a complete travel agent AI implementation with code examples and governance considerations.

What we're reading:
• Burn Book - Kara Swisher, March 2025
• The Signal and the Noise - Nate Silver, 2012
• Leadership in Turbulent Times - Doris Kearns Goodwin, 2018
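The flight example maps naturally onto a mixed integer program. Here's a minimal sketch using SciPy's HiGHS-backed solver (assumes scipy >= 1.9); the flights, prices, and durations are made-up illustration data, not from the episode:

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([450.0, 320.0, 610.0])   # cost of each candidate flight
durations = np.array([11.0, 16.0, 8.0])    # travel time in hours

# Objective: minimize total price over binary choices x_i (book flight i or not).
c = prices

# Inequality constraint: total travel time <= 12 hours.
A_ub = durations.reshape(1, -1)
b_ub = np.array([12.0])

# Equality constraint: book exactly one flight.
A_eq = np.ones((1, 3))
b_eq = np.array([1.0])

res = linprog(
    c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
    bounds=[(0, 1)] * 3,
    integrality=[1, 1, 1],  # binary decisions make this a MIP, not a relaxed LP
    method="highs",
)
print(res.x, res.fun)  # [1. 0. 0.] 450.0 -> the 11-hour, $450 flight wins
```

Unlike an LLM asked to "find a cheap, fast flight," the solver either returns a provably optimal selection or reports that no flight satisfies the constraints.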
Jun 12, 2025 • 42min

Utility functions: Building smarter AI agents from the fundamentals, part 2

The hosts look at utility functions as the mathematical basis for decision-making in AI systems. They use the example of a travel agent that doesn't get tired and can be scaled indefinitely to meet growing customer demand. They also contrast this structured, economics-based approach with the problems of using large language models for multi-step tasks.

This episode is part 2 of our series about building smarter AI agents from the fundamentals. Listen to Part 1 about mechanism design HERE.

Show notes:
• Discussing the current AI landscape, where companies are discovering implementation is harder than anticipated
• Introducing the travel agent use case requiring ingestion, reasoning, execution, and feedback capabilities
• Explaining why LLMs aren't designed for optimization tasks despite their conversational abilities
• Breaking down utility functions from economic theory as a way to quantify user preferences
• Exploring concepts like indifference curves and marginal rates of substitution for preference modeling (see the sketch after these notes)
• Examining four cases of utility relationships: independent goods, substitutes, complements, and diminishing returns
• Highlighting how mathematical optimization provides explainability and guarantees that LLMs cannot
• Setting up for future episodes that will detail the technical implementation of utility-based agents

Subscribe so that you don't miss the next episode. In part 3, Andrew and Sid will explain linear programming and other optimization techniques to build upon these utility functions and create truly personalized travel experiences.
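To make the preference-modeling idea concrete, here's a minimal sketch using a Cobb-Douglas utility function, a standard textbook form with diminishing returns in each good. The goods ("hotel quality" and "flight comfort") and the weight alpha are illustrative assumptions, not the hosts' actual model:

```python
def cobb_douglas(hotel: float, flight: float, alpha: float = 0.6) -> float:
    """U(h, f) = h^alpha * f^(1 - alpha): more of either good always helps,
    but with diminishing marginal utility."""
    return hotel**alpha * flight**(1 - alpha)

def mrs(hotel: float, flight: float, alpha: float = 0.6) -> float:
    """Marginal rate of substitution: units of flight comfort the traveler
    would trade for one more unit of hotel quality, holding utility fixed.
    For Cobb-Douglas this is (alpha / (1 - alpha)) * (flight / hotel)."""
    return (alpha / (1 - alpha)) * (flight / hotel)

print(cobb_douglas(4, 9))  # ~5.53, the utility of this bundle
print(mrs(4, 9))           # ~3.38: hotel quality is scarce, so it's prized
print(mrs(9, 4))           # ~0.67: once hotel quality is plentiful, less so
```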
May 20, 2025 • 37min

Mechanism design: Building smarter AI agents from the fundamentals, Part 1

What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.

This episode kicks off an exciting series where we're building AI agents "the hard way," using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents produce optimal outcomes.

Drawing from our conversation with Dr. Michael Zargham (Episode 32), we break down why LLM-based agents struggle with transparency and governance. The "surface area" for errors expands dramatically when you can't explain how decisions are made across multiple steps. Instead, mechanism design creates clear states with defined optimization parameters at each stage, making the entire system more reliable and accountable.

We explore the famous Prisoner's Dilemma to illustrate how individual incentives can work against collective benefits without proper system design. Then we introduce the Vickrey-Clarke-Groves mechanism, which ensures AI agents truthfully reveal preferences and actively participate in multi-step processes: critical properties for enterprise applications. (A minimal auction sketch follows these notes.)

Beyond technical advantages, this approach offers something profound: a way to preserve humanity in increasingly automated systems. By explicitly designing for values, fairness, and social welfare, we're not just building better agents; we're ensuring AI serves human needs rather than replacing human thought.

Subscribe now to follow our journey as we build an agentic travel system from first principles, applying these concepts to real business challenges. Have questions about mechanism design for AI? Send them our way for future episodes!
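The simplest member of the Vickrey-Clarke-Groves family is the sealed-bid second-price auction: the winner pays the best losing bid, so reporting your true value is a dominant strategy. A minimal sketch with hypothetical agents and values:

```python
from typing import Dict, Tuple

def vickrey_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Sealed-bid second-price auction: the highest bidder wins but pays
    the second-highest bid, which removes any incentive to shade bids."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the externality the winner imposes on the runner-up
    return winner, price

# Hypothetical agents reporting their values for a scarce booking slot.
bids = {"agent_a": 120.0, "agent_b": 95.0, "agent_c": 80.0}
print(vickrey_auction(bids))  # ('agent_a', 95.0)
```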
May 8, 2025 • 46min

Principles, agents, and the chain of accountability in AI systems

Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between the principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.

Show highlights:
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by themselves are "high-dimensional word calculators," not agents; agents are more complex systems with LLMs as components
• Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds"); see the sketch after these notes
• Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
• Authority and accountability must align; people shouldn't be held responsible for systems they don't have authority to control
• The transition from static input-output to closed-loop dynamical systems represents the shift toward truly agentic behavior
• Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standards

Explore Dr. Zargham's work:
• Protocols and Institutions (Feb 27, 2025)
• Comments Submitted by BlockScience, University of Washington APL Information Risk and Synthetic Intelligence Research Initiative (IRSIRI), Cognitive Security and Education Forum (COGSEC), and the Active Inference Institute (AII) to the Networking and Information Technology Research and Development National Coordination Office's Request for Comment on The Creation of a National Digital Twins R&D Strategic Plan, NITRD-2024-13379 (Aug 8, 2024)
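The musts-versus-shoulds distinction is easy to show in code. A minimal sketch, in which the blocked pattern, checker, and guidance text are all hypothetical illustrations rather than any particular framework's API:

```python
import re

BLOCKED = re.compile(r"\b(ssn|social security number)\b", re.IGNORECASE)

def hard_guardrail(draft: str) -> str:
    """A deterministic 'must': output containing the blocked pattern is
    rejected outright, with no discretion left to the model."""
    if BLOCKED.search(draft):
        raise ValueError("guardrail violation: blocked content in output")
    return draft

# A constitutional-style 'should': steering text added to the prompt,
# with no execution-time guarantee that the model complies.
SOFT_GUIDANCE = "You should answer concisely and avoid revealing personal data."

print(hard_guardrail("Your itinerary is confirmed."))  # passes the must
# hard_guardrail("Customer SSN: 123-45-6789")          # would raise ValueError
```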
Mar 27, 2025 • 42min

Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2

Join Christoph Molnar and Timo Freiesleben, co-authors of 'Supervised Machine Learning for Science,' as they dive deep into practical machine learning applications in research. They discuss the significance of tailoring evaluation metrics to enhance model performance and the pivotal role of domain knowledge in data collection. The duo also highlights strategies for measuring causality and improving robustness against distribution shifts. Finally, they tackle the challenges of reproducibility in science versus machine learning, offering insightful solutions.
Mar 25, 2025 • 27min

Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1

Christoph Molnar, an expert in supervised machine learning, and Timo Freiesleben, a postdoctoral researcher in AI ethics, explore the intersection of machine learning and science. They discuss the skepticism scientists have towards predictive models and highlight the balance between accuracy and interpretability. The duo addresses the diverse levels of machine learning adoption across various scientific fields and the importance of domain knowledge. They also touch on how ML can enable scientists to test hypotheses and potentially discover new scientific laws.
Feb 25, 2025 • 34min

The future of AI: Exploring modeling paradigms

Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.

More AI agent disruptors (0:56)
• Proxy, from London start-up Convergence AI
• Another hit to OpenAI: this product is available for free, unlike OpenAI's Operator.

AI Paris Summit - What's next for regulation? (4:40)
• [Vice President] Vance tells Europeans that heavy regulation can kill AI
• The US federal administration is withdrawing from the previous trend of sweeping big-tech regulation of modeling systems
• The EU is pushing to reduce bureaucracy but not regulatory pressure

Modeling paradigms explained (10:33)
As companies look for an edge in high-stakes computations, we've seen best-in-class teams rediscovering expert-system-based techniques and, with modern computing power, breathing new life into them.
• Paradigm 1: Agents (11:23)
• Paradigm 2: Generative (14:26)
• Paradigm 3: Mathematical optimization (regression) (18:33)
• Paradigm 4: Predictive (classification) (23:19)
• Paradigm 5: Control theory (24:37)

The right modeling paradigm for the job? (28:05)
Feb 1, 2025 • 30min

Agentic AI: Here we go again

Agentic AI is the latest foray into big-bet promises for businesses and society at large. While promising autonomy and efficiency, AI agents raise fundamental questions about their accuracy, governance, and the potential pitfalls of over-reliance on automation. Does this story sound vaguely familiar? Hold that thought. This discussion about the over-under of certain promises is for you.

Show Notes:
The economics of LLMs and DeepSeek R1 (00:00:03)
• Reviewing recent developments in AI technologies and their implications
• Discussing the impact of DeepSeek's R1 model on the AI landscape, NVIDIA

The origins of agentic AI (00:07:12)
• Status quo of AI models to date: Is big tech backing away from the promise of generative AI?
• Agentic AI is designed to perceive, reason, act, and learn (a minimal loop sketch follows these notes)

Governance and agentic AI (00:13:12)
• Examining the tension between cost efficiency and performance risks [LangChain State of AI Agents Report]
• Highlighting governance concerns related to AI agents

Issues with agentic AI implementation (00:21:01)
• Considering the limitations of AI agents and their adoption in the workplace
• Analyzing real-world experiments with AI agent technologies, like Devin

What's next for complex and agentic AI systems (00:29:27)
• Offering insights on the cautious integration of these systems in business practices
• Encouraging a thoughtful approach to leveraging AI capabilities for measurable outcomes
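The perceive-reason-act-learn cycle is the skeleton of every agentic system discussed here. A minimal sketch in which every component is a hypothetical stub; a real agent would back each step with models, tools, and the monitoring hooks that governance demands:

```python
class TravelAgent:
    """Toy agent illustrating the perceive -> reason -> act -> learn loop."""

    def __init__(self):
        self.memory = []  # outcomes the agent can learn from later

    def perceive(self, environment: dict) -> dict:
        # Ingest raw signals: the user's request and current fares.
        return {"request": environment["request"], "fares": environment["fares"]}

    def reason(self, state: dict) -> str:
        # Stub policy: pick the cheapest fare that matches the request.
        return min(state["fares"], key=state["fares"].get)

    def act(self, choice: str) -> dict:
        # Stub execution: pretend the booking API succeeded.
        return {"booked": choice, "ok": True}

    def learn(self, outcome: dict) -> None:
        # Fold the outcome back into memory for future decisions.
        self.memory.append(outcome)

agent = TravelAgent()
env = {"request": "NYC->LHR", "fares": {"flight_1": 450, "flight_2": 320}}
agent.learn(agent.act(agent.reason(agent.perceive(env))))
print(agent.memory)  # [{'booked': 'flight_2', 'ok': True}]
```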
Jan 7, 2025 • 33min

Contextual integrity and differential privacy: Theory vs. application with Sebastian Benthall

What if privacy could be as dynamic and socially aware as the communities it aims to protect? Sebastian Benthall, a senior research fellow at NYU's Information Law Institute, shows us just how complex privacy is. He uses Helen Nissenbaum's work on contextual integrity and concepts from differential privacy to explain that complexity. Our talk covers how privacy is not just about protecting data but also about following social norms that vary by situation, from healthcare to education, and how those norms can reshape privacy regulation in big ways.

Show notes:
Intro: Sebastian Benthall (0:03)
Research:
• Designing Fiduciary Artificial Intelligence (Benthall, Shekman)
• Integrating Differential Privacy and Contextual Integrity (Benthall, Cummings)

Exploring differential privacy and contextual integrity (1:05)
• Discussion about the origins of each subject
• How are differential privacy and contextual integrity used to enforce each other?

Accepted context or legitimate context? (9:33)
• Does context develop from what society accepts over time?
• Approaches to determine situational context and legitimacy

Next steps in contextual integrity (13:35)
• Is privacy as we know it ending?
• Areas where integrated differential privacy and contextual integrity can help (Cummings)

Interpretations of differential privacy (14:30)
• Not a silver bullet (a minimal sketch of the core mechanism follows these notes)
• New questions posed from NIST about its application

Privacy determined by social norms (20:25)
• Game theory and its potential for understanding social norms

Agents and governance: what will ultimately decide privacy? (25:27)
• Voluntary disclosures and the biases they can present toward groups that are least concerned with privacy
• Avoiding self-fulfilling prophecy from data and context
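For readers new to differential privacy, the core construction is simpler than its reputation. A minimal sketch of the Laplace mechanism for an epsilon-differentially-private count; the dataset and epsilon values are illustrative assumptions, not from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(records: list, epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.
    Adding or removing one person changes a count by at most 1, so the
    query's sensitivity is 1; smaller epsilon means stronger privacy."""
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(records) + noise

patients = ["p1", "p2", "p3", "p4", "p5"]
print(dp_count(patients, epsilon=0.5))  # noisier release, around 5
print(dp_count(patients, epsilon=5.0))  # tighter release, around 5
```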
