

Will AI Fix Compensation — or Just Make It Worse, Faster? with Aubrey Blanche-Serellano, Founder at The MathPath
Description:
Welcome back to the FNDN Series, where we continue our deep dive into startup compensation with industry leaders from across the startup world. In our conversation with Aubrey Blanche-Serellano, Founder of The MathPath and former VP at Culture Amp, we explore the transformative potential of AI in compensation, along with its risks. We discuss how AI could revolutionize pay equity analysis, the dangers of embedding human bias into machine decisions, and what the future holds for compensation professionals. From automated pay audits to personalized benefit packages, discover how AI might reshape everything we know about fair pay. Keep watching to learn how to harness AI's power while avoiding its pitfalls in your compensation strategy.
Chapters:
00:00 Introduction to AI in Compensation
01:16 Guest Introduction: Aubrey Blanche-Serellano
02:37 AI's Role in Compensation: Pattern Matching and Analytics
03:52 The Dark Side: AI as a Supercharged Terrible Recruiter
04:49 Human Bias Becomes Machine Bias
07:11 The Amazon Resume Parser Cautionary Tale
08:54 AI Hallucinations and Critical Thinking
09:35 How AI Changes the Role of People Leaders
11:39 AI Literacy vs. Calculator Panic
12:07 AI-Powered Candidate Experience
14:59 Personalized Compensation Packages
17:26 Ethical Questions in Customizable Pay
18:27 What Will AI Kill in Compensation?
20:10 The Future: Automation vs. Human Expertise
Connect with Aubrey:
Visit: https://themathpath.com/
https://linkedin.com/in/aubreyblanche
Resources Mentioned:
Companies/Tools Mentioned:
Culture Amp - People and culture platform where Aubrey achieved notable pay equity results as a leader
Amazon - Cautionary example of a biased AI resume-screening system
Textio - Kieran Snyder's company, focused on inclusive language in hiring
The MathPath - Aubrey's consultancy for equitable people practices and responsible AI
Concepts Discussed:
RAG Models (Retrieval-Augmented Generation)
Responsible AI Framework
Pay Equity Analysis Tools
AI Hallucination in Large Language Models
TI-83 Calculator Analogy for AI Literacy
More FNDN Episodes:
Spotify: https://open.spotify.com/show/4GeBIeZOKrFxG1oiiPxmiM
Apple Podcast: https://podcasts.apple.com/us/podcast/fndn-series/id1794263484