

Machines Like Us
The Globe and Mail
Machines Like Us is a technology show about people.
We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter.
Every other Tuesday Taylor Owen sits down with the people shaping this rapidly approaching future. He’ll speak with entrepreneurs building world-changing technologies, lawmakers trying to ensure they’re safe, and journalists and scholars working to understand how they’re transforming our lives.
Episodes

Dec 30, 2025 • 53min
The Man Behind the World’s Most Coveted Microchip
Jensen Huang is something of an enigma. The NVIDIA CEO doesn’t have social media and, until recently, rarely gave interviews. Yet he may be the most important person in AI.

Under his leadership, NVIDIA has become a goliath. Somewhere between 80 and 90 per cent of AI tools run on NVIDIA hardware, making it the world’s most valuable company. But unlike his contemporaries, Huang has been remarkably quiet about the technology – and the world – he’s building.

In his new book, The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, journalist Stephen Witt pulls back the curtain. And what he finds is, at times, shocking: Huang believes there is zero risk in developing superintelligence.

So who is Jensen Huang? And should we worry that the most powerful person in AI is racing forward at breakneck speed, blind to the potential consequences?

Mentioned:
The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, by Stephen Witt
How Jensen Huang’s Nvidia Is Powering the A.I. Revolution, by Stephen Witt (The New Yorker)
The A.I. Prompt That Could End the World, by Stephen Witt (New York Times)

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

Media sourced from the BBC. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dec 16, 2025 • 44min
Wikipedia Won Our Trust. Can We Use That Model Everywhere?
It was an idea that defied logic: an online encyclopedia that anyone could edit. You didn’t need to have a PhD or even use your real name – you just needed an internet connection. Against all odds, it worked. Today, billions of people use Wikipedia every month, and studies show it’s about as accurate as a traditional encyclopedia.

But how? How did Wikipedia not just turn into yet another online cesspool, filled with falsehoods, partisanship and AI slop? Wikipedia founder Jimmy Wales just wrote a book called The Seven Rules of Trust, where he explains how he was able to build that rarest of things: a trustworthy source of information on the internet. In an era when trust in institutions is collapsing, Wales thinks he’s found a blueprint – not just for the web, but for everything else too.

Mentioned:
The Seven Rules of Trust, by Jimmy Wales and Dan Gardner
A False Wikipedia ‘Biography’, by John Seigenthaler (USA Today)

Photo Illustration: The Globe and Mail/Brendan McDermid/Reuters

Dec 2, 2025 • 53min
Could an Alternative AI Save Us From a Bubble?
Over the last couple of years, massive AI investment has largely kept the stock market afloat. Case in point: the so-called Magnificent 7 – tech companies like NVIDIA, Meta, and Microsoft – now account for more than a third of the S&P 500’s value. (Which means they likely represent a significant share of your investment portfolio or pension fund, too.)

There’s little doubt we’re living through an AI economy. But many economists worry there may be trouble ahead. They see companies like OpenAI – valued at half a trillion dollars while losing billions every month – and fear the AI sector looks a lot like a bubble. Because right now, venture capitalists aren’t investing in sound business plans. They’re betting that one day, one of these companies will build artificial general intelligence.

Gary Marcus is skeptical. He’s a professor emeritus at NYU, a bestselling author, and the founder of two AI companies – one of which was acquired by Uber. For more than two decades, he’s been arguing that large language models (LLMs) – the technology underpinning ChatGPT, Claude, and Gemini – just aren’t that good.

Marcus believes that if we’re going to build artificial general intelligence, we need to ditch LLMs and go back to the drawing board. (He thinks something called “neurosymbolic AI” could be the way forward.)

But if Marcus is right – if AI is a bubble and it’s about to pop – what happens to the economy then?

Mentioned:
The GenAI Divide: State of AI in Business 2025, by Project Nanda (MIT)
MIT study finds AI can already replace 11.7% of U.S. workforce, by MacKenzie Sigalos (CNBC)
The Algebraic Mind, by Gary Marcus
We found what you’re asking ChatGPT about health. A doctor scored its answers, by Geoffrey A. Fowler (The Washington Post)

Nov 18, 2025 • 51min
Can AI Lead Us to the Good Life?
Rutger Bregman, a historian and author known for his engaging works on social change, dives into the intersection of AI and ethics. He explores whether AI can truly lead us to a better life or if it poses existential threats. Bregman argues for Universal Basic Income as a response to job displacement and emphasizes public involvement in tech decisions. Drawing parallels to historical movements, he discusses the moral responsibilities of society in shaping AI's future. Can we use technology to create a more equitable world? Bregman believes it’s possible, but the path is fraught with challenges.

Nov 4, 2025 • 50min
How to Survive the “Broligarchy”
Carole Cadwalladr, an investigative journalist renowned for exposing the Cambridge Analytica scandal, discusses the rise of techno-authoritarianism and the alarming influence of Big Tech on democracy. She explores the failures of regulation post-2016 and how tech giants use data architecture for surveillance. The conversation highlights the gender dynamics in tech leadership, the implications of AI on journalism and labor, and the urgent need for public action to defend democratic values against tech consolidation.

Oct 21, 2025 • 1h 3min
AI Music is Everywhere. Is it Legal?
Ed Newton-Rex, a classical composer and former Stability AI music team lead, dives into the controversial world of AI-generated music. He argues that these creations often mirror existing art, blurring the lines of copyright and creativity. Ed discusses the legality of training AI on copyrighted works, labeling it as theft, and emphasizes the need for fair compensation through licensing. He warns of the broader cultural impact if AI takes over art and advocates for a new humanist movement to preserve authentic creativity.

Oct 7, 2025 • 1h 9min
Geoffrey Hinton vs. The End of the World
Geoffrey Hinton, the 'godfather of AI' and a neural network pioneer, shares his profound concerns about the existential risks of artificial intelligence. He discusses how large language models have accelerated his fears of AI consciousness and potential misalignment. Hinton warns that competition fuels rapid development, often sidelining safety. He proposes that future AI should embody 'maternal' care for humanity to ensure safety. Ultimately, he emphasizes the crucial need for public education and collective efforts to manage the future of AI.

Sep 23, 2025 • 50min
AI is Upending Higher Education. Is That a Bad Thing?
Just two months after ChatGPT was launched in 2022, a survey found that 90 per cent of college students were already using it. I’d be shocked if that number wasn’t closer to 100 per cent by now.

Students aren’t just using artificial intelligence to write their essays. They’re using it to generate ideas, conduct research, and summarize their readings. In other words: they’re using it to think for them. Or, as New York Magazine recently put it: “everyone is cheating their way through college.”

University administrators seem paralyzed in the face of this. Some worry that if we ban tools like ChatGPT, we may leave students unprepared for a world where everyone is already using them. But others think that if we go all in on AI, we could end up with a generation capable of producing work – but not necessarily original thought.

I’m honestly not sure which camp I fall into, so I wanted to talk to two people with very different perspectives on this. Conor Grennan is the Chief AI Architect at NYU’s Stern School of Business, where he’s helping students and educators embrace AI. And Niall Ferguson is a senior fellow at Stanford and Harvard, and the co-founder of the University of Austin. Lately, he’s been making the opposite argument: that if universities are to survive, they largely need to ban AI from the classroom.

Whichever path we take, the consequences will be profound. Because this isn’t just about how we teach and how we learn – it’s about the future of how we think.

Mentioned:
AI’s great brain robbery – and how universities can fight back, by Niall Ferguson (The London Times)
Everyone Is Cheating Their Way Through College, by James D. Walsh (New York Magazine)
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, by Nataliya Kos’myna (MIT Media Lab)
The Diamond Age, by Neal Stephenson
How the Enlightenment Ends, by Henry A. Kissinger

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Host direction by Athena Karkanis. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.

Apr 22, 2025 • 1h 9min
Jim Balsillie: ‘Canada’s Problem Isn’t Trump. Canada’s Problem Is Canada’
Jim Balsillie, former co-CEO of Research in Motion and a prominent Canadian business figure, discusses Canada's pressing economic issues and its fraught relationship with the U.S. He critiques the outdated economic model that has left Canada lagging in productivity and wealth, urging for a reevaluation of policies to boost innovation and self-sufficiency. Balsillie emphasizes the need for Canada to prioritize domestic growth, addressing corporate influence and advocating for enhanced transparency and civic engagement in politics.

Apr 8, 2025 • 39min
The Changing Face of Election Interference
We’re a few weeks into a federal election that is currently too close to call. And while most Canadians are wondering who our next Prime Minister will be, my guests today are preoccupied with a different question: will this election be free and fair?

In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy”. Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election.

To try and get a sense of what we’re up against, I wanted to get two different perspectives on this. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation.

Mentioned:
“Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions,” by the Honourable Marie-Josée Hogue
“A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops,” by the American Sunlight Project

Further Reading:
“Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate,” by Victor Goury-Laffont (Politico)
“2025 Federal Election Monitoring and Response,” by the Canadian Digital Media Research Network
“Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney,” by Steven Chase (The Globe and Mail)
“The revelations and events that led to the foreign-interference inquiry,” by Steven Chase and Robert Fife (The Globe and Mail)
“Foreign interference inquiry finds ‘problematic’ conduct,” by The Decibel


