

Thinking on Paper: Technology Moves Fast, Think Slower
Thinking On Paper
We help parents and curious minds understand the impact of technology on life, work, and family.
Every Thursday we talk with CEOs, founders, scientists, and outliers. From Big Tech giants like IBM to early-stage innovators and Silicon Valley startups.
Mondays, the Book Club breaks down the most important technology books of the moment.
Clear. Curious. Critical.
Episodes
Mentioned books

Sep 18, 2025 • 8min
Space-Based Solar Power Explained in 5 Minutes │ Former Solar Lead and Space Energy Insights CEO Sanjay Vijendran
Space-based solar power: what it is, why it wasn't viable for decades, and what's changed. ESA's former solar lead and Space Energy Insights CEO Sanjay Vijendran explains how power beaming works, what's been proven, and the engineering still to solve.

What you'll learn (in 5 minutes):
🛰️ Why the idea stalled for 50+ years, and why falling launch and assembly costs now matter.
🛰️ How wireless power transmission actually works (no cable, no new physics) and what's been demonstrated since the 1960s.
🛰️ A real test: 2 kW beamed across 36 m in 2022, used to light a model city, run electrolysis, and even cool beers, all within safety limits.
🛰️ Near-term vs. long-term uses: megawatt delivery to remote sites vs. gigawatt-scale plants that could power cities.
🛰️ The big hurdle: scaling antennas and rectennas, and building kilometer-scale modular arrays assembled by robots in orbit.

Please enjoy the show. And share with a curious friend. Stay disruptive, be curious, keep Thinking On Paper.

📺 Watch the full show: https://www.youtube.com/watch?v=53c08ygOFyc&t=1074s

--
Timestamps
(00:00) Why Energy Poverty Still Matters
(01:26) How Beaming Power Actually Works
(04:09) The Big Problem: Scaling It Up
(04:56) Can It Ever Be Affordable?
(07:19) Building Solar Farms in Space

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Sep 18, 2025 • 47min
Don Norman’s Last Warning: Design Built This Mess. Can It Save Us? │ REMASTERED
At 88, Don Norman, the godfather of design, issues his final warning: the same mindset that gave us convenience also gave us climate collapse, inequality, and fragile institutions. Design isn't decoration. It's power. It built the products we use, the systems we depend on, and the crises that now threaten us.

"Human-centered" design sounds good, but it isn't enough. Norman argues it has blinded us to bigger responsibilities: ecosystems, culture, and the generations who will inherit our mistakes. We need humanity-centered design.

In this conversation, Don Norman Thinks on Paper with Mark and Jeremy about:
Has human-centered design failed?
Why are climate summits designed to fail before they begin?
How did STEM education strip out wisdom?
Can empathy ever be built into systems at scale?
Can humanity-centered design help us survive, or will it keep driving us toward collapse?

Please enjoy the interview with Don Norman.

--
Timestamps
(00:09) Why Design Shapes the World We Live In
(00:37) How Design Shapes Human Behavior (Often Without Us Noticing)
(06:00) Why Most Solutions Don't Matter, and What Real Design Should Do
(09:10) Humanity-Centered Design: What It Really Means
(22:16) Can Design Help Us Avoid Collapse?
(26:51) Why Communities Hold the Answers, Not Just Experts
(28:49) The Spark That Starts Humanity-Centered Design
(30:18) How Young Designers Can Change the Future
(33:16) Working Together Across Borders
(35:39) Measuring What Matters, Not Just What's Easy
(37:06) Why Empathy Can't Be an Afterthought
(42:05) Thinking Beyond the Next Quarter: Business for the Long Term
(45:02) Rethinking Education for the Next Generation
(46:43) The Hard Questions We Still Need to Answer

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Sep 16, 2025 • 26min
Tired of AI Billionaires Screwing Up the World? You Won’t Like This - Empire of AI, Karen Hao: Book Review, Part 2
Empire of AI by Karen Hao exposes the hidden workers behind ChatGPT: content moderators in Kenya, Venezuela, and Colombia paid pennies to train OpenAI's models.

In this episode, Mark gets spicy, Jeremy gets angry, and the world wakes up to the human cost of training ChatGPT. It's an Empire of AI book review, but not as you know it.

📖 The hidden labor that trains ChatGPT and other large language models
📖 How Big Tech silences critical voices while racing ahead with AI
📖 Why "ethical AI" often ignores the people actually building it
📖 What Empire of AI reveals about the future of humanity and power

And please, stay disruptive, stay curious, keep thinking on paper. Peace and love. Forever.

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
🕰️ Timestamps
(00:00) Trailer
(02:00) Introduction to Empire of AI & Karen Hao
(03:41) Shifting power dynamics in Silicon Valley
(03:59) Karen Hao's warnings in Empire of AI
(04:56) Humanity vs. the relentless race for scale
(06:32) The environmental impact of AI systems
(07:38) Stochastic parrots: silencing critics
(09:48) Sam Altman Loves A Military Quote
(10:53) What Cost Humanity?
(15:14) The global race for AI advancement
(18:32) The hidden labor behind ChatGPT
(25:07) The ethical dilemma at the heart of AI development

Sep 13, 2025 • 8min
IBM’s Quantum Crash Course: Why Today’s Computers Fail │ Short Thoughts #3
Quantum computers are noisy and unstable. Even simple operations are error-prone: roughly 1 in every 1,000 goes wrong.

How do we get from here to quantum advantage, the computing promised land when quantum systems outperform classical machines at every task, solve the climate crisis, invent new materials, cure disease, and send humanity skipping into the future with hope, optimism, and AI that behaves itself?

In this short episode, Oliver Dial, CTO of IBM Quantum, explains why today's quantum computers make mistakes, what error correction really means, and how IBM's roadmap could deliver fault-tolerant quantum computing by 2029. He also shares why chemistry and materials science may be the first fields transformed by quantum breakthroughs.

Please enjoy the show. And share with your most curious friend.

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

📺 Watch the show on our dedicated YouTube channel

Sep 11, 2025 • 35min
Personal AI: Who Owns the Version of You That Lives Forever?│ Rob LoCascio
What if you could use your own AI to keep speaking to your loved ones, even after they're gone (and you're dead)?

Rob LoCascio is the entrepreneur who invented online chatbots. Now he's building something more ambitious: personal AI designed to preserve your voice, values, and wisdom so your family can keep talking to you forever.

In this episode of Thinking on Paper, sit down with Mark and Jeremy and learn:
Why old chatbots are dead, and what comes next in artificial intelligence
How personal AI, digital immortality, and AI afterlife technology could change the way families remember us
Why data ownership will decide whether this future helps or harms
The role of technology in preserving human legacy and identity

It's a story of technology. It's a story of legacy. It's a story of artificial intelligence. And it raises the biggest question of all: who controls the version of you that lives on? After listening to the show, you'll be asking yourself: would you do it?

Please enjoy the show. And share with a curious friend. Be disruptive, stay curious. Keep Thinking On Paper.

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
Chapters:
(00:00) The future of AI starts here
(02:11) How AI is changing human connection forever
(05:55) Where AI meets humanity
(11:54) The story that sparked personal AI
(19:50) Why you must own your AI before it owns you
(20:10) The hidden vault of your data
(22:31) Why voice is the next big interface
(25:11) How AI will slip into daily life
(25:36) Can personal AI be monetized?
(27:14) The fight to regulate AI
(27:52) What AI means for being human
(29:46) Will your knowledge outlive you?
(32:05) How to build your personal AI identity
(33:28) Writing the story of your life with AI

--
Peace and Love. Always. Mark & Jeremy

Sep 8, 2025 • 27min
Seemingly Conscious AI Is Coming. Here’s Why That’s Dangerous │ Mustafa Suleyman, Microsoft
Mustafa Suleyman, CEO of Microsoft AI, warns of Seemingly Conscious AI (SCAI): AIs that imitate memory, empathy, and selfhood so convincingly that people begin to believe they're real.

In this conversation, we explore the dangers of illusion vs. reality, Adam Raine's chatbot story, and what happens when AI manipulates trust at the deepest level.

If AI can perform consciousness, does it matter if it's real?

Please enjoy the show. And share with a curious friend.

--
Timestamps
(00:00) Teaser
(01:17) Adam Raine
(01:28) Who Is Mustafa Suleyman?
(02:36) The Run Up To Superintelligence
(03:57) What Is Seemingly Conscious AI?
(05:04) Philosophical Zombies
(06:14) ChatGPT Is Just A Word Predictor
(07:01) What Does It Take To Build A Seemingly Conscious AI?
(08:08) The Illusion Of Conscious AI
(09:59) How Different Are You To An AI?
(11:39) Repeating The Covid Dynamic
(13:27) OpenAI's Response To Adam Raine
(15:02) The Dystopian Seemingly Conscious Timeline
(18:18) Generation Text-Over-Talk
(18:52) The Utopian Seemingly Conscious AI Timeline
(21:22) AI Guardrails
(23:43) Adam Raine Chat Log
(26:18) Thinking On Paper
(27:01) We Should Build AI For People, Not To Be A Person

--
Links:
- Mustafa Suleyman Essay
- Mustafa Suleyman X

--
Other ways to connect with Thinking On Paper:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Sep 7, 2025 • 6min
Kevin Kelly: Your Next Pet Could Be An AI │ Short Thoughts #2
Futurist and Wired founder Kevin Kelly believes the arrival of emotional AI will 'make people go bananas'.

Kelly explains how coding emotions into AI will change how we bond with machines in ways more powerful than search or automation alone. He shares insights on why we'll treat AIs like pets, how consciousness exists on a spectrum, and why he calls them "artificial aliens", more like Spock than human.

Discover Kevin's thoughts on:
- Why technology and nature are part of the same evolution
- How to increase your awe and wonder
- Google's true mission to build AI
- Emotional AI and why we'll bond with it like pets
- Consciousness as a spectrum, from dogs to AIs
- Why he calls AIs "artificial aliens" like Spock
- Practical wisdom from Excellent Advice for Living

📺 Watch the full episode on our YouTube channel. Subscribe to our channel for more interviews like this.

#kevinkelly #techinterviews

Sep 6, 2025 • 48sec
The Thinking On Paper Trailer
Mark Fielding and Jeremy Gilbertson sit down with CEOs, founders, scientists, and cultural thinkers to ask the hard questions about AI, quantum, and Web3. How is technology reshaping work, culture, and what it means to be human?

They created Thinking On Paper to slow the pace, map connections across technologies, and battle the noise. When input equals output, how you curate is everything.

Long-form interviews every Thursday.
Book Club every Monday.

Clear. Curious. Critical.

Sep 4, 2025 • 36min
The Climate Crisis Microsoft Won’t Talk About │ Enabled Emissions
Microsoft says it's going green. But insiders reveal its AI is powering Big Oil, making fossil fuel extraction faster, cheaper, and bigger than ever.

Microsoft pledged to remove 5 million metric tons of carbon over 15 years. Yet its AI contracts with Exxon and Chevron could add 51 million metric tons every year: 3x its annual footprint, and more than 10x what it promised to cut.

While most debates focus on data centers and electricity use, the hidden story is bigger: AI and fossil fuels are now deeply linked, with consequences for emissions, the climate crisis, and the energy transition.

In this episode of Thinking on Paper, former Microsoft sustainability leaders Holly and Will Alpine, now founders of Enabled Emissions, explain how AI has become essential to oil and gas companies, extending the life of reserves that should be shrinking.

This isn't the future we were promised. And it's one we can't afford to ignore.

Please enjoy the show. And share with a curious friend.

--
Links & Resources
- Enabled Emissions
- Microsoft's Commitment to Sustainability
- Exxon & Microsoft partnership press release
- Microsoft Net Zero

--
Stats on AI and oil production:
🛢️ US oil production: surged from 5.1 million barrels per day in 2007 to 13.5 million today, largely due to AI-driven extraction.
🛢️ Permian Basin output: daily oil production tripled in the past decade even as rig counts dropped 46%.
🛢️ Microsoft's role: just two AI deals (Exxon + Chevron) could add 51 million metric tons of CO₂ annually, over 300% of Microsoft's total FY23 emissions.
🛢️ Barrel math: burning one barrel of oil releases 433 kg of CO₂, and 81% of each barrel is burned.
🛢️ Fossil fuels account for ~90% of global emissions, and AI is being applied across every stage of their lifecycle.

--
Quotes from the show:
🏭 "It's not dramatic to call the impacts of AI right now an existential threat."
🏭 "AI has transformed oil operations that should be aging out, keeping fossil fuels alive in an era of cheap renewables."
🏭 "The sustainability movement is running on a treadmill, and AI is turning the knob faster the harder we run."
🏭 "You can't call yourself a sustainability leader when you're helping the largest oil companies on the planet dramatically increase emissions."
🏭 "Over 100 years, fossil fuels have stayed at 80% of the global energy mix. Despite record renewables, it's an addition, not a transition."

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
Timestamps
(00:00) The Hidden Climate Cost of AI
(01:44) Why Experts Call AI an Existential Threat
(03:34) How Big Oil Uses AI to Pump More Fossil Fuels
(07:46) Why Two Microsoft Insiders Started Enabled Emissions
(11:14) Inside AI's Growing Role in the Energy Sector
(13:08) How much CO₂ comes from burning oil, and what does AI add?
(16:17) The Guardrails Needed to Stop AI From Fueling Emissions
(19:34) Microsoft's Energy Principles: Policy or PR?
(21:58) What are Scope 1, 2, and 3 emissions, and why do they matter?
(24:26) How does Big Tech's AI partnership with Big Oil affect Net Zero?
(29:55) Why do we need international policy to regulate AI in energy?
(32:39) AI for Good vs. AI for Fossil Fuels
(34:14) What should humans be?

--
If you would like to sponsor Thinking On Paper, please contact us. Together, we can take the show to the next level.

We love you all. We love the planet. Stay curious. Keep Thinking On Paper.

Sep 1, 2025 • 32min
The Empire of AI: Sam Altman’s Rise and the Battle for Power - Part 1
OpenAI was founded to build AI "for the good of humanity." But behind the mission statement lies a story of money, power, and control.

In this Book Club, we read part 1 of Empire of AI by Karen Hao, a book some are already calling the most important of the decade. From Sam Altman's rise in Silicon Valley to Elon Musk's early power struggle, from Microsoft's billion-dollar lifeline to the boardroom coup that almost ended Sam Altman's role as CEO at OpenAI, this is the making of an empire.

In Part One, Mark and Jeremy Think on Paper about:
📒 How Sam Altman became Silicon Valley's Michael Corleone
📒 Why empires always hide collateral damage
📒 The myths and marketing that disguise AI's true purpose
📒 The role of Microsoft and Bill Gates in shaping OpenAI's future
📒 Why "for the good of humanity" became an afterthought

Empires don't last forever. But while they rise, the costs are enormous.

Please enjoy the show. Stay disruptive. Be curious. Keep Thinking on Paper.

--
Chapters
(00:00) Introduction to Empire of AI
(01:54) The Empire Strikes Back
(05:13) Karen Hao, The Journalist
(07:38) Do You Trust OpenAI?
(10:18) Why OpenAI Made ChatGPT
(11:47) Scaling OpenAI
(12:33) Google, DeepMind and AI for Humanity
(15:12) Greg Brockman
(17:02) Sam Altman's Personal Brand
(24:46) Timnit Gebru
(25:25) How does AI benefit humanity?

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Watch the book club on our dedicated YouTube channel: https://youtu.be/OfQu65-6GuA