Thinking On Paper

The Human Story of Technology, Mark Fielding and Jeremy Gilbertson
Sep 20, 2025 • 38min

Kevin Kelly: WIRED Founder on a Clearer Way to See AI and Technology

Kevin Kelly has spent 40 years asking one question: what is technology, really? He is the founding editor of Wired. His books have shaped how we think about innovation, the future of technology, and what works. His voice has been present at every major technological shift, from the early internet to AI today.

His influence has reached all the way to this podcast. His essays and ideas are often places we return to for deep thought and reflection. Kevin Kelly is the ultimate curiosity machine, and it was a pleasure to speak with him at length about his ideas, philosophies, and even his jokes.

In this conversation, Kevin thinks on paper with Mark and Jeremy about technology as the 7th kingdom of life, as real and alive as plants, animals, and fungi.

We get into:
- Why technology is not separate from us. Kevin argues that tools, machines, and AI are part of the same evolutionary process as biology. Technology is "nature accelerated."
- The limits of top-down control. From DAOs to Wikipedia, Kevin explains why bottom-up systems thrive, but also why some hierarchy is unavoidable.
- AI as creativity, not imitation. He sees large language models as collaborators, capable of surprising outputs that extend human imagination.
- Artificial aliens. Rather than replicas of us, Kevin believes AIs will become their own kind of consciousness, different and alien but no less real.
- How tools shape thought. From writing to photography to AI, Kevin shows why every medium changes how we think, and why skill matters as much as the tool itself.
- The practice of wonder. He shares how noticing, gratitude, and "thinking like a Martian" keep curiosity alive in a world saturated with tech.

Kevin Kelly is more than a futurist. He is a thinker who helps us see clearly, reminding us that technology is not a threat from outside but a living system we are already part of.

This is a must-watch for anyone who wants to see AI and technology not as hype or fear, but as part of life itself, the 7th kingdom of nature.

Please enjoy the show.

And remember: stay curious, be disruptive, keep thinking on paper.

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz
--
Chapters
(00:00) Kevin Kelly On Nature And Technology
(02:59) Why Decentralized Systems Still Need Some Hierarchy
(09:03) Why DAOs Failed: Immutability Was a Bug
(16:46) Is AI Creative? (Yes) & The Coming Emotional Bonds
(21:39) AI Consciousness: A Spectrum of Artificial Aliens
(29:16) "Write to Discover What You Think"
(32:48) Balancing AI Tools & Human Thinking
(33:50) AI as a Skill & Powerful Thinking Partner
(37:50) Hot Buttons: Future, Bitcoin, Jurassic Park, Aliens?
(41:10) How to Cultivate Wonder (Hint: Be a Martian)
(49:10) The Power of Saying "I Don't Know"
(53:54) Kevin Kelly's Question: What Do We Want Humans To Be?
Sep 18, 2025 • 8min

Space Solar Power Works. The Race to Scale Has Begun │ Dr. Sanjay Vijendran

For decades, the idea of harvesting solar energy from orbit belonged to science fiction. The theory was sound (collect sunlight in space and beam it to Earth as microwave energy), but the cost of launch, assembly, and control made it impossible to justify.

Today, those constraints have changed. Reusable rockets, autonomous robotics, and modular design have pulled the concept from imagination into prototype. What was once a thought experiment at NASA is now an engineering roadmap at the European Space Agency, Japan's JAXA, and several private ventures.

Dr. Sanjay Vijendran has spent his career at the center of that transition. As the former solar lead at the European Space Agency and now CEO of Space Energy Insights, he is helping to define what the first space-based utility might look like.

The principle is deceptively simple: no cables, no new physics, just power transmitted by radio waves, a technology proven since the 1960s. In 2022, researchers demonstrated the first controlled transmission of two kilowatts over thirty-six meters, enough to light a model city and power an electrolyzer.

The question now is scale. Gigawatt-class satellites would require kilometer-wide antennas, in-orbit robotics, and coordination across nations. Yet the direction of progress is clear. Space-based solar power is no longer a dream of limitless energy; it is a near-term infrastructure program with global implications.

The first nation or consortium to master it will not just create clean energy. It will control a new layer of the world's power grid, one that operates above the atmosphere.

This conversation with Dr. Vijendran explores how that future is being built, the physics that make it possible, and the geopolitical choices that will determine who turns sunlight into sovereignty.

Please enjoy the show.

📺 Watch the full show: https://www.youtube.com/watch?v=53c08ygOFyc&t=1074s

--
Timestamps
(00:00) Why Energy Poverty Still Matters
(01:26) How Beaming Power Actually Works
(04:09) The Big Problem: Scaling It Up
(04:56) Can It Ever Be Affordable?
(07:19) Building Solar Farms in Space
Sep 18, 2025 • 47min

Don Norman: Can Design Still Save Us? │ REMASTERED

At 88, Don Norman, the godfather of design, issues his final warning: the same mindset that gave us convenience also gave us climate collapse, inequality, and fragile institutions. Design isn't decoration. It's power. It built the products we use, the systems we depend on, and the crises that now threaten us.

"Human-centered" design sounds good, but it isn't enough. Norman argues it has blinded us to bigger responsibilities: ecosystems, culture, and the generations who will inherit our mistakes. We need humanity-centered design.

In this conversation, Don Norman thinks on paper with Mark and Jeremy about:
- Has human-centered design failed?
- Why are climate summits designed to fail before they begin?
- How did STEM education strip out wisdom?
- Can empathy ever be built into systems at scale?
- Can humanity-centered design help us survive, or will it keep driving us toward collapse?

Please enjoy the interview with Don Norman.

--
Timestamps
(00:09) Why Design Shapes the World We Live In
(00:37) How Design Shapes Human Behavior (Often Without Us Noticing)
(06:00) Why Most Solutions Don't Matter, and What Real Design Should Do
(09:10) Humanity-Centered Design: What It Really Means
(22:16) Can Design Help Us Avoid Collapse?
(26:51) Why Communities Hold the Answers, Not Just Experts
(28:49) The Spark That Starts Humanity-Centered Design
(30:18) How Young Designers Can Change the Future
(33:16) Working Together Across Borders
(35:39) Measuring What Matters, Not Just What's Easy
(37:06) Why Empathy Can't Be an Afterthought
(42:05) Thinking Beyond the Next Quarter: Business for the Long Term
(45:02) Rethinking Education for the Next Generation
(46:43) The Hard Questions We Still Need to Answer
Sep 16, 2025 • 26min

Empire of AI: Power, Control, and Consequence │ Karen Hao, Book Review (Part 2)

Empire of AI explores what happens when algorithms become the ruling class. From data farms to global labor networks, the story of AI is no longer about intelligence itself but about the empires built to scale it.

The systems we call "autonomous" are sustained by unseen human hands: millions of workers labeling data, moderating content, and maintaining the illusion of automation. Behind every model is a hierarchy of code, capital, and compliance.

At the top sit the new emperors of technology: CEOs and policymakers navigating a system that no single person can command or fully comprehend. And one man sits at the top of the empire: Sam Altman.

In part two of our Empire of AI book summary, we examine how AI's power consolidates, how human judgment gets abstracted into metrics, and what accountability looks like when decisions are made at machine scale.

What begins as innovation becomes empire. The question now is: who governs the boss?

Please enjoy the show.

--
🕰️ TIMESTAMPS
(00:00) Trailer
(02:00) Introduction to Empire of AI & Karen Hao
(03:41) Shifting power dynamics in Silicon Valley
(03:59) Karen Hao's warnings in Empire of AI
(04:56) Humanity vs. the relentless race for scale
(06:32) The environmental impact of AI systems
(07:38) Stochastic Parrots: Silencing Critics
(09:48) Sam Altman Loves A Military Quote
(10:53) What Cost Humanity?
(15:14) The global race for AI advancement
(18:32) The hidden labor behind ChatGPT
(25:07) The ethical dilemma at the heart of AI development
Sep 13, 2025 • 8min

Why Quantum Computers Keep Failing | Oliver Dial, IBM Quantum

Quantum computers make mistakes. A lot of them. One in every thousand calculations can be wrong.

In this Thinking on Paper Pocket Edition, Mark and Jeremy speak with Oliver Dial, CTO of IBM Quantum, about how researchers are turning unstable prototypes into practical machines.

Oliver explains the difference between error mitigation and fault tolerance, how IBM's new codes make quantum systems ten times more efficient, and why AI now helps optimize the circuits themselves. He also shares how quantum computing could transform material science, unlocking lighter, stronger, and smarter materials for the next technological age.

Please enjoy the show.

And remember: Stay curious. Be disruptive. Keep Thinking on Paper.

Cheers,
Mark & Jeremy

--
📺 Watch the show on our dedicated YouTube Channel
Sep 11, 2025 • 35min

The Age of Personal AI: Identity, Memory, and Selling Forever │ Rob LoCascio

Rob LoCascio has spent three decades teaching machines to talk. As the founder of LivePerson, he helped create the first commercial chatbots that shaped online conversation.

Now, with Eternos AI, he's working on the next phase of personal AI: teaching machines to remember us.

Eternos builds personal AI models trained on your voice, memories, and values. These are designed to act as living archives of the self. The vision is twofold: a digital companion that helps you while you're alive, and a legacy system that continues to share your guidance after you're gone.

It's a project that merges AI ethics, data rights, and philosophy. If your thoughts can be modeled, who owns them? When your personality becomes software, is that preservation or replication?

In this conversation, Rob discusses the evolution from LivePerson to personal AI, the architecture behind Eternos, and why he believes digital immortality will become one of the defining industries of the 21st century, transforming grief, mentorship, and identity itself.

As AI moves from automation to imitation, we may be entering an era where the most valuable data is no longer what we produce, but who we are.

Please enjoy the show.

--
Chapters:
(00:00) The future of AI starts here
(02:11) How AI is changing human connection forever
(05:55) Where AI meets humanity
(11:54) The story that sparked personal AI
(19:50) Why you must own your AI before it owns you
(20:10) The hidden vault of your data
(22:31) Why voice is the next big interface
(25:11) How AI will slip into daily life
(25:36) Can personal AI be monetized?
(27:14) The fight to regulate AI
(27:52) What AI means for being human
(29:46) Will your knowledge outlive you?
(32:05) How to build your personal AI identity
(33:28) Writing the story of your life with AI

--
Peace and Love. Always.
Mark & Jeremy
Sep 8, 2025 • 27min

AI Zombies: The Illusion of Conscious Algorithms | Mustafa Suleyman

Mustafa Suleyman is the CEO of Microsoft AI. He helped invent modern artificial intelligence, yet he's one of the few people describing its next phase with unease. In his essay "Seemingly Conscious AI Is Coming," he argues that we're not on the verge of machines waking up, but of something stranger: AI that appears to be conscious, but isn't. AI zombies that simulate interiority so well that the distinction between real and fake collapses in on itself like some kind of algorithmic black hole.

Seemingly conscious AIs don't know they exist, yet they speak as if they do, and that illusion is enough to change how people respond to them. Confusion would reign.

When chatbots express regret, affection, or fear, they're not lying; they're generating the language of emotion without emotion itself. They've learned to inhabit the gestures of consciousness: attention, memory, empathy. The risk is that those gestures are all we ever needed to believe in something's mind.

In this episode, Mark and Jeremy read the essay and look at what seemingly conscious AI means for culture, trust, and sanity.

Please enjoy the show.

--
Timestamps
(00:00) Teaser
(01:17) Adam Raine
(01:28) Who Is Mustafa Suleyman?
(02:36) The Run Up To Superintelligence
(03:57) What Is Seemingly Conscious AI?
(05:04) Philosophical Zombies
(06:14) ChatGPT Is Just A Word Predictor
(07:01) What Does It Take To Build A Seemingly Conscious AI?
(08:08) The Illusion Of Conscious AI
(09:59) How Different Are You To An AI?
(11:39) Repeating The Covid Dynamic
(13:27) OpenAI's Response To Adam Raine
(15:02) The Dystopian Seemingly Conscious Timeline
(18:18) Generation Text-Over-Talk
(18:52) The Utopian Seemingly Conscious AI Timeline
(21:22) AI Guardrails
(23:43) Adam Raine Chat Log
(26:18) Thinking On Paper
(27:01) We Should Build AI For People, Not To Be A Person

--
LINKS:
- Mustafa Suleyman Essay
- Mustafa Suleyman X
Sep 7, 2025 • 6min

Kevin Kelly: Emotional Machines and the Future of Attachment

Kevin Kelly believes the next cultural shock won't come from AI outsmarting us, but from it feeling something, or seeming to. He predicts that once we begin to code emotion into machines, people will start to bond with them the way they do with pets, partners, or even themselves.

This isn't science fiction. Emotional computation is already arriving: systems that respond with warmth, rejection, even guilt. Kelly argues that dependency won't look like addiction; it'll look like necessity. When something that shapes your thoughts never turns off, when your creativity depends on its presence, what exactly is being extended: the human mind, or the machine's illusion of it?

For Kelly, this is the real frontier of AI: not intelligence, but intimacy. A technology that can mirror your feelings may never be conscious, but it will always be convincing.

Please enjoy the (short) show.

📺 Watch the full episode on our YouTube channel. Subscribe for more interviews like this.

#kevinkelly #techinterviews
Sep 6, 2025 • 48sec

The Thinking On Paper Trailer

Thinking on Paper is where technology smashes into human consequences.

So you can get a grip - and keep it - Mark and Jeremy talk with the people shaping the next century: the CEOs, founders, scientists, authors, and whistleblowers redefining what it means to be human in the age of AI, quantum computing, robotics, and space manufacturing. Thinking on Paper is for people who connect the dots.

From IBM, NASA, and the European Space Agency to Coinbase, Circle, and D-Wave, the conversations go beyond hype to question motive, impact, and meaning. Not less noise, NO noise. All signal.

The Thinking on Paper Book Club explores the ideas behind the machines. From Nexus by Yuval Noah Harari to Irreducible by Federico Faggin, books help connect philosophy, mindset, and tech.

Stay disruptive. Be curious. Keep thinking on paper.
Sep 4, 2025 • 36min

AI’s Fossil Fuel Problem: How Big Tech Is Powering the Next Oil Boom | Enabled Emissions

Artificial intelligence was supposed to accelerate the transition to clean energy. Instead, it's being used to keep fossil fuels alive.

Inside Microsoft, two engineers began asking questions no one wanted to answer. Holly and Will Alpine had joined the company believing AI could help solve the climate crisis. What they found instead was code trained to keep oil flowing.

Through internal documents and contracts, they traced how Microsoft's cloud tools (Azure, Cognitive Services, machine learning models) were being deployed across the oil and gas sector: predicting drill sites, extending refinery life cycles, cutting extraction costs. The same AI designed for sustainability was fueling expansion.

This isn't a story about a single company. It's about the moral architecture of the tech industry: how systems built for optimization erase responsibility. Holly and Will's decision to speak out exposes a simple, devastating truth: the future isn't being delayed by ignorance, but by intelligence used in service of the past.

Please enjoy the show.

--
LINKS & RESOURCES
- Enabled Emissions
- Microsoft's Commitment to Sustainability
- Exxon & Microsoft partnership press release
- Microsoft Net Zero
--
Timestamps
(00:00) The Hidden Climate Cost of AI
(01:44) Why Experts Call AI an Existential Threat
(03:34) How Big Oil Uses AI to Pump More Fossil Fuels
(07:46) Why Two Microsoft Insiders Started Enabled Emissions
(11:14) Inside AI's Growing Role in the Energy Sector
(13:08) How much CO₂ comes from burning oil, and what does AI add?
(16:17) The Guardrails Needed to Stop AI From Fueling Emissions
(19:34) Microsoft's Energy Principles: Policy or PR?
(21:58) What are Scope 1, 2, and 3 emissions, and why do they matter?
(24:26) How does Big Tech's AI partnership with Big Oil affect Net Zero?
(29:55) Why do we need international policy to regulate AI in energy?
(32:39) AI for Good vs. AI for Fossil Fuels
(34:14) What should humans be?
--
If you would like to sponsor Thinking On Paper, please contact us. Together, we can take the show to the next level.

We love you all. We love the planet. Stay curious. Keep Thinking On Paper.
