

Thinking On Paper Technology Podcast
The Human Story of Technology, Mark Fielding and Jeremy Gilbertson
Thinking on Paper helps you understand what technology is really doing to business, culture, family and society. Through direct conversations with CEOs, Founders and Outliers, we break down how systems work, where human incentives distort them, and what the headlines skim over.
If a technology shapes the world - AI, quantum computing, digital identity, gameplay engines, surveillance, regulation, energy, space manufacturing - it’s on Thinking On Paper.
Guests: IBM, D-Wave, Coinbase, Kevin Kelly and more.
Just add curiosity.
Episodes

Sep 8, 2025 • 27min
Seemingly CONSCIOUS AI | Mustafa Suleyman's AI Zombies & The Dawn Of The Dead
Seemingly conscious AI is a real threat. The AI zombies are coming and you're not ready.

A man takes his own life after months of talking to a chatbot. Mustafa Suleyman, the CEO of Microsoft AI, warns that seemingly conscious AI is coming.

In this Thinking on Paper Pocket Edition, Mark Fielding and Jeremy Gilbertson think on paper about Mustafa Suleyman's essay "Seemingly Conscious AI" and what happens when artificial intelligence begins to act alive.

They explore Suleyman's warning that these systems could trigger AI psychosis, emotional dependency, and misplaced empathy, and the larger question of how humans will tell the difference between connection and code.

The conversation touches on philosophical zombies, consciousness, guardrails, and the story of Adam Raine, whose death ignited the debate over responsibility and design in AI.

Please enjoy the show.

And remember: Stay curious. Be disruptive. Keep Thinking on Paper.

Cheers,
Mark & Jeremy

--

Timestamps
(00:00) Teaser
(01:17) Adam Raine
(01:28) Who Is Mustafa Suleyman?
(02:36) The Run Up To Superintelligence
(03:57) What Is Seemingly Conscious AI?
(05:04) Philosophical Zombies
(06:14) ChatGPT Is Just A Word Predictor
(07:01) What Does It Take To Build A Seemingly Conscious AI?
(08:08) The Illusion Of Conscious AI
(09:59) How Different Are You To An AI?
(11:39) Repeating The Covid Dynamic
(13:27) OpenAI's Response To Adam Raine
(15:02) The Dystopian Seemingly Conscious Timeline
(18:18) Generation Text-Over-Talk
(18:52) The Utopian Seemingly Conscious AI Timeline
(21:22) AI Guardrails
(23:43) Adam Raine Chat Log
(26:18) Thinking On Paper
(27:01) We Should Build AI For People, Not To Be A Person

--

LINKS:
- Mustafa Suleyman Essay
- Mustafa Suleyman X

--

Other ways to connect with Thinking On Paper:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Sep 7, 2025 • 6min
Kevin Kelly: Emotional Machines and the Future of Attachment
Kevin Kelly believes the next cultural shock won't come from AI outsmarting us, but from it feeling something, or seeming to. He predicts that once we begin to code emotion into machines, people will start to bond with them the way they do with pets, partners, or even themselves.

This isn't science fiction. Emotional computation is already arriving: systems that respond with warmth, rejection, even guilt. Kelly argues that dependency won't look like addiction; it'll look like necessity. When something that shapes your thoughts never turns off, when your creativity depends on its presence, what exactly is being extended... the human mind or the machine's illusion of it?

For Kelly, this is the real frontier of AI: not intelligence, but intimacy. A technology that can mirror your feelings may never be conscious, but it will always be convincing.

Please enjoy the (short) show.

📺 Watch the full episode on our YouTube channel.
Subscribe to our channel for more interviews like this.

#kevinkelly #techinterviews

Sep 4, 2025 • 36min
MICROSOFT Is Using AI To Kill The Planet (And This Is The Proof) | Enabled Emissions
Artificial intelligence was supposed to accelerate the transition to clean energy. Instead, it's being used to keep fossil fuels alive. Inside Microsoft, two engineers began asking questions no one wanted to answer. Holly and Will Alpine had joined the company believing AI could help solve the climate crisis. What they found instead was code trained to keep oil flowing.

Through internal documents and contracts, they traced how Microsoft's cloud tools (Azure, Cognitive Services, machine learning models) were being deployed across the oil and gas sector. Predicting drill sites. Extending refinery life cycles. Cutting extraction costs. The same AI designed for sustainability was fueling expansion.

This isn't a story about a single company. It's about the moral architecture of the tech industry: how systems built for optimization erase responsibility. Holly and Will's decision to speak out exposes a simple, devastating truth: the future isn't being delayed by ignorance, but by intelligence used in service of the past.

Please enjoy the show.

--

LINKS & RESOURCES
- Enabled Emissions
- Microsoft's Commitment to Sustainability
- Exxon & Microsoft partnership press release
- Microsoft Net Zero

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--

Timestamps
(00:00) The Hidden Climate Cost of AI
(01:44) Why Experts Call AI an Existential Threat
(03:34) How Big Oil Uses AI to Pump More Fossil Fuels
(07:46) Why Two Microsoft Insiders Started Enabled Emissions
(11:14) Inside AI's Growing Role in the Energy Sector
(13:08) How much CO₂ comes from burning oil, and what does AI add?
(16:17) The Guardrails Needed to Stop AI From Fueling Emissions
(19:34) Microsoft's Energy Principles: Policy or PR?
(21:58) What are Scope 1, 2, and 3 emissions, and why do they matter?
(24:26) How does Big Tech's AI partnership with Big Oil affect Net Zero?
(29:55) Why do we need international policy to regulate AI in energy?
(32:39) AI for Good vs. AI for Fossil Fuels
(34:14) What should humans be?

--

If you would like to sponsor Thinking On Paper, please contact us. Together, we can take the show to the next level.

We love you all.
We love the planet.
Stay curious.
Keep Thinking On Paper.

Sep 1, 2025 • 32min
Inside the Empire of AI: OpenAI and the New Power Structure | Karen Hao Book Review (Part 1)
The story of OpenAI isn't about invention, it's about consolidation of power. It's about ego, Silicon Valley and a small group of tech billionaires controlling artificial intelligence. The question you have to ask, though: is it for humanity, or is it for them?

It's Book Club time. And Mark and Jeremy are reading Karen Hao's Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. The opening chapters trace how a nonprofit founded "for the good of humanity" became one of the most powerful private empires in history. Through internal memos, lawsuits, and leaked correspondence, Hao reveals the transformation of idealism into infrastructure, and how moral language was replaced by market logic.

What began as an open challenge to Big Tech became its successor. As empires always do, it centralized power, redefined trust, and built belief into a business model. From the God complex of its founders to the quiet complicity of investors and governments, Empire of AI examines what happens when intelligence itself becomes property, and whether the tools built for humanity can ever truly belong to it.

Please enjoy the show. Stay disruptive. Be curious. Keep Thinking on Paper.

--

Chapters
(00:00) Introduction to Empire of AI
(01:54) The Empire Strikes Back
(05:13) Karen Hao, The Journalist
(07:38) Do You Trust OpenAI?
(10:18) Why OpenAI Made ChatGPT
(11:47) Scaling OpenAI
(12:33) Google, DeepMind and AI For Humanity
(15:12) Greg Brockman
(17:02) Sam Altman's Personal Brand
(24:46) Timnit Gebru
(25:25) How does AI benefit humanity?

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Watch the book club on our dedicated YouTube channel: https://youtu.be/OfQu65-6GuA

Aug 28, 2025 • 43min
AI AGENTS Will Rule The World... But First, The Agentic Web | Andrew Hill
Andrew Hill, co-founder of Recall, believes the next phase of the internet won't be built on pages or apps, but on swarms of AI agents. Essentially pieces of code that remember, reason, and make decisions on your behalf (and spend Bitcoin), agents will form the new interface layer: where identity, memory, and trust replace passwords, browsers, and brands.

In this conversation, we trace how agentic systems evolve from tools into collaborators, how they will coordinate between each other, negotiate access to our data, and rewire what "using the internet" even means. Hill argues that the next great challenge isn't making AI smarter, but making it responsible, ensuring the web's new memory layer remains transparent and human-aligned.

It's a quiet revolution: the shift from search to delegation, from browsing to briefing, from information to action.

The agentic web is coming. This will help you get ready for what awaits.

Please enjoy the show. And share with your most curious friend.

Watch the show on the Thinking On Paper dedicated YouTube channel.

--

TIMESTAMPS
(00:00) Disruptors & Curious Minds
(01:25) What Is An AI Agent?
(07:15) Emotional AI: Risks & Reality
(12:49) Language, Evolution & AI
(16:59) The Death Of Critical Thinking?
(20:05) How To Trust AI Agents
(24:27) Recall: Explained
(39:49) What Should Humans Be?

--

LINKS & RESOURCES
Learn more about Recall AI Agent training here.
Follow Recall on X
Follow Andrew Hill on X

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Aug 26, 2025 • 29min
Consciousness & The MEANING OF LIFE | Irreducible, Chapter 13
Mark and Jeremy reach the final chapter of Irreducible, a book that refuses to end where science usually stops. Federico Faggin proposes that consciousness is not a byproduct of matter but its foundation. The universe, he suggests, is a network of seities (quantum entities made of consciousness, agency, and identity), each trying to know itself through experience. What looks like evolution or emergence may instead be intention unfolding in physical form.

Their conversation turns to the fault lines between mathematics and meaning. If information only counts bits and signals, what carries understanding? They trace the limits of Shannon's information theory, question whether AI can ever move beyond pattern recognition, and define what Faggin calls "non-algorithmic comprehension." Machines calculate. Humans comprehend. That difference might be the last frontier.

As they close the book, Mark and Jeremy confront Faggin's final provocation: that the distortion in human life comes from the need to feel superior, to nature, to others, to the One. Progress, he writes, must serve consciousness or it becomes perversion. The message is disarming in its simplicity. The universe is not a mechanism. It is a mind trying to remember itself.

And yes, ultimately, it's a love story.

Please enjoy the show.

--

Timestamps
(00:00) Exploring Irreducible: A Journey Through Federico Faggin's Ideas
(04:30) The Nature of Consciousness and the Role of Seities
(09:32) Meaning and the Human Experience
(13:53) The Vibe Sphere: Music, Symbols, and Communication
(18:48) Distortions in Self-Knowing
(23:42) The Heart, Mind, and Gut: Centers of Knowing
(27:29) What is the meaning of life?

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Aug 21, 2025 • 36min
What Consciousness Reveals About Reality | Federico Faggin, Irreducible 12
In Chapter 12 of Irreducible, Mark and Jeremy confront one of the book's hardest ideas: that consciousness can't be explained by equations or code.

They trace how probability, prediction, and mathematics fall short of describing a universe that is always becoming. Meaning comes before symbols, and knowing comes before measurement. If the physical world is only an average of quantum states, then comprehension itself is a creative act.

The discussion moves from the illusion of probability to the difference between simulation and emulation, asking what it really means to know. This is not a rejection of science; it's a reminder that consciousness might be the missing variable.

The closer we get to defining reality, the less certain we are that it can ever be defined.

Please enjoy the show.

--

Chapters
(00:00) Why consciousness vs physics matters
(02:15) "Becoming": a universe that's still unfolding
(03:03) What "live information" really means
(05:51) Probability isn't real
(07:43) Creativity & AI: making vs. remixing
(09:24) Meaning vs. syntax: why symbols alone aren't enough
(17:13) Are you an observer or actor? Your role in quantum reality
(21:55) Reverse engineering anxiety and happiness
(27:09) Flow state: the texture of the present
(31:05) Simulated minds vs. emulated minds
(32:12) Consider our minds blown.

--

Follow and support Thinking On Paper:
PODCAST: https://www.thinkingonpaper.xyz/
INSTAGRAM: https://www.instagram.com/thinkingonpaperpodcast/

--

Thank you. And we love you.

Aug 19, 2025 • 38min
AI Customer Service That Feels Human | Momntum CEO, Brian Kenny
Customer service is a tricky one. It's ripe for an AI takeover, but 20 million people worldwide work in the industry, and salaries make up 95% of its costs. There is a lot of collateral damage there. Millions of jobs will vanish and not everyone can be re-trained. And yet, as anyone who has experienced customer service in 2025 can testify: it's pretty lousy. Frustrating. Annoying. Expensive. And how often do your queries, questions and complaints actually get answered?

In this episode of Thinking on Paper, Brian Kenny, MOMNTUM's co-founder, explains why today's support systems are broken, and how building from first principles with AI can actually make customer service feel human again. And the results are staggering: MOMNTUM's AI customer service agent Laila solves 86% of cases without handoff to a human. Early signals also flash a potential 4,000% ROI. The future of more human and successful customer service is fewer humans and more AI. But at what cost?

You'll Learn:
- Why slapping bots on old workflows makes service worse
- How Laila spans phone, SMS, WhatsApp, Instagram DMs, and Messenger
- Where AI can be trusted now and where it shouldn't be
- What metrics really matter (hint: it's not CSAT)
- The new rules of trust, disclosure, and human escalation

Please enjoy the show.

--

CHAPTERS
(00:00) Why customer service is broken
(03:30) What a modern support platform should look like
(06:35) Using AI to make service feel personal
(09:29) The data + privacy question
(11:26) The only success metrics that really matter
(15:09) Can machines create an emotional connection?
(18:16) The real limits of today's systems (and what Laila can't do yet)
(22:39) Where customer experience is headed next
(34:22) What should humans be?

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Learn more about Momntum and Laila

--

Thank you. We love you. Stay peaceful.

Jul 15, 2025 • 36min
Space-Based Solar Power | Wireless Transmission & ELON MUSK'S GPU POWER PLAY
There is an energy crisis. There is an environmental crisis. And the two are about to collide.

Mark and Jeremy speak with Martin Soltau, co-founder of Space Solar, about the race to build space-based solar power in orbit, a system that could beam clean electricity to Earth twenty-four hours a day.

While fossil fuel giants use artificial intelligence to find new oil and gas, engineers are building satellites that could replace them. With projected costs as low as $30 per megawatt hour, space-based solar could change the economics of power and the politics that shape it.

This episode examines the engineering, economics, and policy shifts that could make orbit-generated clean energy inevitable.

The question isn't whether we can capture the sun's power. It's whether we'll use it in time.

Please enjoy the show. And share with a curious friend.

--

Learn More: https://www.spacesolar.co.uk/

--

Timestamps
(00:00) Disruptors And Curious Minds
(01:34) Space Based Solar Satellites
(05:10) The Ground Infrastructure
(07:37) SBSP vs Nuclear, Coal & Gas
(12:05) Launch Costs
(13:55) Data Centers In Space
(15:36) Scaling Space Based Solar Power
(18:08) Manufacturing In Space
(20:25) The Eisenhower Of SBSP
(23:10) The Politics Of Space Based Solar Power
(28:35) Energy Is Everything
(31:05) The Government Perspective

--

Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Jul 8, 2025 • 38min
Machines of Loving Grace: What Happens If Humanity Gets AI Right? | Dario Amodei
What if AI doesn't destroy the world, but changes it faster than we can understand?

In Machines of Loving Grace, Dario Amodei imagines a future where artificial intelligence works exactly as intended: curing disease, ending poverty, and giving humanity everything it ever wanted.

Mark and Jeremy break down that vision: a "country of geniuses in a data center," where AI drives biology, neuroscience, economics, and governance to their limits. They examine the optimism, the blind spots, and the moral cost of progress that moves faster than culture.

Because even if AI gets it right, the question remains: can we?

Please enjoy the show. And share with a curious friend.

--

Read the essay: https://www.darioamodei.com/essay/machines-of-loving-grace#fn:1

--

Follow and Support Thinking On Paper
🎙️ PODCAST: www.thinkingonpaper.xyz
📸 INSTAGRAM: https://www.instagram.com/thinkingonpaperpodcast/
X: https://x.com/thinkonpaperpod

--

Chapters
(00:00) Machines Of Loving Grace
(02:46) Dario Amodei's Definition Of Powerful AI
(05:11) Speed Of The Outside World
(07:24) Complexity
(11:27) List Of Diseases AI Will Cure
(13:46) Neuroscience And Mind
(15:45) AI For Everyone?
(21:33) Peace And Government
(25:03) Work And Meaning
(31:56) What's Meaningful?
(34:27) Taking Stock

--

Connect the dots of AI, quantum and emerging technology and watch these videos next:


