
Machines Like Us
Machines Like Us is a technology show about people.
We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter.
Every other Tuesday Taylor Owen sits down with the people shaping this rapidly approaching future. He’ll speak with entrepreneurs building world-changing technologies, lawmakers trying to ensure they’re safe, and journalists and scholars working to understand how they’re transforming our lives.
Latest episodes

Apr 8, 2025 • 39min
The Changing Face of Election Interference
We’re a few weeks into a federal election that is currently too close to call. And while most Canadians are wondering who our next Prime Minister will be, my guests today are preoccupied with a different question: will this election be free and fair?

In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy.” Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election.

To get a sense of what we’re up against, I wanted two different perspectives. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation.

Mentioned:
“Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions,” by the Honourable Marie-Josée Hogue
“A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops,” by the American Sunlight Project

Further Reading:
“Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate,” by Victor Goury-Laffont (Politico)
“2025 Federal Election Monitoring and Response,” by the Canadian Digital Media Research Network
“Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney,” by Steven Chase (Globe & Mail)
“The revelations and events that led to the foreign-interference inquiry,” by Steven Chase and Robert Fife (Globe & Mail)
“Foreign interference inquiry finds ‘problematic’ conduct,” by The Decibel

Mar 25, 2025 • 37min
How Do You Report the News in a Post-Truth World?
If you’re having a conversation about the state of journalism, it’s bound to get a little depressing. Since 2008, more than 250 local news outlets have closed down in Canada. The U.S. has lost a third of the newspapers it had in 2005. But this is about more than a failing business model. Only 31 percent of Americans say they trust the media. In Canada, that number is a little bit better – but only a little.

The problem is not just that people are losing their faith in journalism. It’s that they’re starting to place their trust in other, often more dubious sources of information: TikTok influencers, Elon Musk’s X feed, and The Joe Rogan Experience.

The impact of this shift can be seen almost everywhere you look. 15 percent of Americans believe climate change is a hoax. 30 percent believe the 2020 election was stolen. 10 percent believe the earth is flat.

A lot of this can be blamed on social media, which crippled journalism’s business model and led to a flourishing of false information online. But not all of it. People like Jay Rosen have long argued that journalists themselves are at least partly responsible for the post-truth moment we now find ourselves in.

Rosen is a professor of journalism at NYU who’s been studying, critiquing, and really shaping the press for nearly 40 years. He joined me a couple of weeks ago at the Attention conference in Montreal to explain how we got to this place – and where we might go from here.

A note: we recorded this interview before the Canadian election was called, so we don’t touch on it here. But over the course of the next month, the integrity of our information ecosystem will face an inordinate amount of stress, and conversations like this one will be more important than ever.

Mentioned:
“Digital News Report Canada 2024 Data: An Overview,” by Colette Brin, Sébastien Charlton, Rémi Palisser and Florence Marquis
“America’s News Influencers,” by Galen Stocking, Luxuan Wang, Michael Lipka, Katerina Eva Matsa, Regina Widjaya, Emily Tomasik and Jacob Liedke

Further Reading:
“Challenges of Journalist Verification in the Digital Age on Society: A Thematic Review,” by Melinda Baharom, Akmar Hayati Ahmad Ghazali, Abdul Muati, Zamri Ahmad
“Making Newsworthy News: The Integral Role of Creativity and Verification in the Human Information Behavior that Drives News Story Creation,” by Marisela Gutierrez Lopez, Stephann Makri, Andrew MacFarlane, Colin Porlezza, Glenda Cooper and Sondess Missaoui
“The Trump Administration and the Media (2020),” by Leonard Downie Jr. for the Committee to Protect Journalists

Mar 11, 2025 • 40min
A Chinese Company Upended OpenAI. We May Be Looking at the Story All Wrong.
When the American company OpenAI released ChatGPT, it was the first time a lot of people had ever interacted with generative AI. ChatGPT has become so popular that, for many, it’s now synonymous with artificial intelligence.

But that may be changing. Earlier this year a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, their model – DeepSeek-R1 – is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn’t just a new company; it could be an entirely different approach to building artificial intelligence.

To try and understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter to ever write a profile of OpenAI and has covered AI for MIT Technology Review, The Atlantic and The Wall Street Journal. So she’s better positioned than almost anyone to make sense of this seemingly monumental shift in the landscape of artificial intelligence.

Mentioned:
“The messy, secretive reality behind OpenAI’s bid to save the world,” by Karen Hao

Further Reading:
“DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” by DeepSeek-AI and others
“A Comparison of DeepSeek and Other LLMs,” by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke and Gabriel Moryoussef
“Technical Report: Analyzing DeepSeek-R1’s Impact on AI Development,” by Azizi Othman

Feb 25, 2025 • 30min
Big Tech Hijacked Our Attention. Chris Hayes Wants To Win It Back.
Do I have your attention right now? I’m guessing probably not. Or, at least, not all of it. In all likelihood, you’re listening to this on your morning commute, or while you wash the dishes or check your e-mail.

We are living in a world of perpetual distraction. There are more things to read, watch and listen to than ever before – but our brains, it turns out, can only absorb so much. Politicians like Donald Trump have figured out how to exploit this dynamic. If you’re constantly saying outrageous things, it becomes almost impossible to focus on the things that really matter. Trump’s former strategist Steve Bannon called this strategy “flooding the zone.”

As the host of the MSNBC show All In, Chris Hayes has had a front-row seat to the war for our attention – and, now, he’s decided to sound the alarm with a new book called The Sirens’ Call: How Attention Became the World’s Most Endangered Resource.

Hayes joined me to explain how our attention became so scarce, and what happens to us when we lose the ability to focus on the things that matter most.

Mentioned:
“Twitter and Tear Gas: The Power and Fragility of Networked Protest,” by Zeynep Tufekci

Further Reading:
“Ethics of the Attention Economy: The Problem of Social Media Addiction,” by Vikram R. Bhargava and Manuel Velasquez
“The Attention Economy: Labour, Time and Power in Cognitive Capitalism,” by Claudio Celis Bueno
“The business of news in the attention economy: Audience labor and MediaNews Group’s efforts to capitalize on news consumption,” by Brice Nixon

Feb 11, 2025 • 36min
New Spyware Has Made Your Phone Less Secure Than You Might Think
It’s become pretty easy to spot phishing scams: UPS orders you never made, banking alerts from companies you don’t bank with, phone calls from unfamiliar area codes. But over the past decade, these scams – and the technology behind them – have become more sophisticated, invasive and sinister, largely due to the rise of something called ‘mercenary spyware.’

The most potent version of this tech is Pegasus, a surveillance tool developed by an Israeli company called NSO Group. Once Pegasus infects your phone, it can see your texts, track your movement, and download your passwords – all without you realizing you’ve been hacked.

We know a lot of this because of Ron Deibert. Twenty years ago, he founded Citizen Lab, a research group at the University of Toronto that has helped expose some of the most high-profile cases of cyber espionage around the world.

Ron has a new book out called Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy, and he sat down with me to explain how spyware works, and what it means for our privacy – and our democracy.

Note: We reached out to NSO Group about the claims made in this episode and they did not reply to our request for comment.

Mentioned:
“Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy,” by Ron Deibert
“Meta’s WhatsApp says spyware company Paragon targeted users in two dozen countries,” by Raphael Satter (Reuters)

Further Reading:
“The Autocrat in Your iPhone,” by Ron Deibert
“A Comprehensive Analysis of Pegasus Spyware and Its Implications for Digital Privacy and Security,” by Karwan Kareem
“Stopping the Press: New York Times Journalist Targeted by Saudi-linked Pegasus Spyware Operator,” by Bill Marczak, Siena Anstis, Masashi Crete-Nishihata, John Scott-Railton and Ron Deibert

Jan 28, 2025 • 50min
A Computer Scientist Answers Your Questions About AI
We’ve spent a lot of time on this show talking about AI: how it’s changing war, how your doctor might be using it, and whether or not chatbots are curing, or exacerbating, loneliness.

But what we haven’t done on this show is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did.

Where do our queries go once they’ve been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place?

To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.

Further Reading:
“Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building It Anyway,” Machines Like Us podcast
“ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths
“A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge
“Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell
“Anatomy of an AI System,” by Kate Crawford and Vladan Joler
“Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo

Jan 20, 2025 • 1min
Questions About AI? We Want to Hear Them
We spend a lot of time talking about AI on this show: how we should govern it, the ideologies of the people making it, and the ways it’s reshaping our lives.

But before we barrel into a year where I think AI will be everywhere, we thought this might be a good moment to step back and ask an important question: what exactly is AI?

On our next episode, we’ll be joined by Derek Ruths, a Professor of Computer Science at McGill University. And he’s given me permission to ask him anything and everything about AI.

If you have questions about AI, or how it’s impacting your life, we want to hear them. Send an email or a voice recording to: machineslikeus@paradigms.tech

Thanks – and we’ll see you next Tuesday!

Jan 14, 2025 • 49min
This Mother Says a Chatbot Led to Her Son’s Death
In February, 2024, Megan Garcia’s 14-year-old son Sewell took his own life.

As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son’s death.

They’ve also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.

I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell. And to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.

We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time. A spokesperson for Character.AI made the following statement:

“We do not comment on pending litigation.

Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.

The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.

Additional ways we have integrated safety across our platform include:

Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.

User Inputs: While much of our focus is on the model’s output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.

Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.

We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.

As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.

There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”

If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada’s national suicide prevention helpline.

Mentioned:
Megan Garcia v. Character Technologies, et al.
“Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration,” by Miles Kruppa and Lauren Thomas
“Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker
“Can AI Companions Cure Loneliness?,” Machines Like Us
“An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku

Further Reading:
“Can A.I. Be Blamed for a Teen’s Suicide?,” by Kevin Roose
“Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us

Dec 31, 2024 • 27min
Bonus ‘The Decibel’: How an algorithm missed a deadly listeria outbreak
In July, there was a recall on two brands of plant-based milks, Silk and Great Value, after a listeria outbreak that led to at least 20 illnesses and three deaths. Public health officials determined the same strain of listeria had been making people sick for almost a year. When Globe reporters began looking into what happened, they found a surprising fact: the facility that the bacteria was traced to had not been inspected for listeria in years.

The reporters learned that in 2019 the Canadian Food Inspection Agency introduced a new system that relies on an algorithm to prioritize sites for inspectors to visit. Investigative reporters Grant Robertson and Kathryn Blaze Baum talk about why this new system of tracking was created, and what went wrong.

Dec 17, 2024 • 36min
AI Has Mastered Chess, Poker and Go. So Why Do We Keep Playing?
The board game Go has more possible board configurations than there are atoms in the universe. Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community.

In 2016, researchers at Google’s DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol. After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing.

He wasn’t alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.

But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game “Hey Robot” is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He’s spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.

Mentioned:
“AlphaGo”
“The Beauty of Games,” by Frank Lantz
“Adversarial Policies Beat Superhuman Go AIs,” by Tony Wang et al.
“Theory of Games and Economic Behavior,” by John von Neumann and Oskar Morgenstern
“Heads-up limit hold’em poker is solved,” by Michael Bowling et al.

Further Reading:
“How to Play a Game,” by Frank Lantz
“The Afterlife of Go,” by Frank Lantz
“How A.I. Conquered Poker,” by Keith Romer
“In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” by Cade Metz
Hey Robot, by Frank Lantz
Universal Paperclips, by Frank Lantz