Machines Like Us

The Globe and Mail
Dec 2, 2025 • 53min

Could an Alternative AI Save Us From a Bubble?

Over the last couple of years, massive AI investment has largely kept the stock market afloat. Case in point: the so-called Magnificent 7 – tech companies like NVIDIA, Meta, and Microsoft – now account for more than a third of the S&P 500’s value. (Which means they likely represent a significant share of your investment portfolio or pension fund, too.)

There’s little doubt we’re living through an AI economy. But many economists worry there may be trouble ahead. They see companies like OpenAI – valued at half a trillion dollars while losing billions every month – and fear the AI sector looks a lot like a bubble. Because right now, venture capitalists aren’t investing in sound business plans. They’re betting that one day, one of these companies will build artificial general intelligence.

Gary Marcus is skeptical. He’s a professor emeritus at NYU, a bestselling author, and the founder of two AI companies – one of which was acquired by Uber. For more than two decades, he’s been arguing that large language models (LLMs) – the technology underpinning ChatGPT, Claude, and Gemini – just aren’t that good.

Marcus believes that if we’re going to build artificial general intelligence, we need to ditch LLMs and go back to the drawing board. (He thinks something called “neurosymbolic AI” could be the way forward.)

But if Marcus is right – if AI is a bubble and it’s about to pop – what happens to the economy then?

Mentioned:
"The GenAI Divide: State of AI in Business 2025," by Project NANDA (MIT)
"MIT study finds AI can already replace 11.7% of U.S. workforce," by MacKenzie Sigalos (CNBC)
"The Algebraic Mind," by Gary Marcus
"We found what you’re asking ChatGPT about health. A doctor scored its answers," by Geoffrey A. Fowler (The Washington Post)
Nov 18, 2025 • 51min

Can AI Lead Us to the Good Life?

Rutger Bregman, a historian and author known for his engaging works on social change, dives into the intersection of AI and ethics. He explores whether AI can truly lead us to a better life or if it poses existential threats. Bregman argues for Universal Basic Income as a response to job displacement and emphasizes public involvement in tech decisions. Drawing parallels to historical movements, he discusses the moral responsibilities of society in shaping AI's future. Can we use technology to create a more equitable world? Bregman believes it’s possible, but the path is fraught with challenges.
Nov 4, 2025 • 50min

How to Survive the “Broligarchy”

Carole Cadwalladr, an investigative journalist renowned for exposing the Cambridge Analytica scandal, discusses the rise of techno-authoritarianism and the alarming influence of Big Tech on democracy. She explores the failures of regulation post-2016 and how tech giants use data architecture for surveillance. The conversation highlights the gender dynamics in tech leadership, the implications of AI on journalism and labor, and the urgent need for public action to defend democratic values against tech consolidation.
Oct 21, 2025 • 1h 3min

AI Music is Everywhere. Is it Legal?

Ed Newton-Rex, a classical composer and former Stability AI music team lead, dives into the controversial world of AI-generated music. He argues that these creations often mirror existing art, blurring the lines of copyright and creativity. He discusses the legality of training AI on copyrighted works, labeling it as theft, and emphasizes the need for fair compensation through licensing. He warns of the broader cultural impact if AI takes over art and advocates for a new humanist movement to preserve authentic creativity.
Oct 7, 2025 • 1h 9min

Geoffrey Hinton vs. The End of the World

Geoffrey Hinton, the 'godfather of AI' and a neural network pioneer, shares his profound concerns about the existential risks of artificial intelligence. He discusses how large language models have accelerated his fears of AI consciousness and potential misalignment. Hinton warns that competition fuels rapid development, often sidelining safety. He proposes that future AI should embody 'maternal' care for humanity to ensure safety. Ultimately, he emphasizes the crucial need for public education and collective efforts to manage the future of AI.
Sep 23, 2025 • 50min

AI is Upending Higher Education. Is That a Bad Thing?

Just two months after ChatGPT was launched in 2022, a survey found that 90 per cent of college students were already using it. I’d be shocked if that number wasn’t closer to 100 per cent by now.

Students aren’t just using artificial intelligence to write their essays. They’re using it to generate ideas, conduct research, and summarize their readings. In other words: they’re using it to think for them. Or, as New York Magazine recently put it: “everyone is cheating their way through college.”

University administrators seem paralyzed in the face of this. Some worry that if we ban tools like ChatGPT, we may leave students unprepared for a world where everyone is already using them. But others think that if we go all in on AI, we could end up with a generation capable of producing work – but not necessarily original thought.

I’m honestly not sure which camp I fall into, so I wanted to talk to two people with very different perspectives on this.

Conor Grennan is the Chief AI Architect at NYU’s Stern School of Business, where he’s helping students and educators embrace AI. And Niall Ferguson is a senior fellow at Stanford and Harvard, and the co-founder of the University of Austin. Lately, he’s been making the opposite argument: that if universities are to survive, they largely need to ban AI from the classroom.

Whichever path we take, the consequences will be profound. Because this isn’t just about how we teach and how we learn – it’s about the future of how we think.

Mentioned:
"AI’s great brain robbery – and how universities can fight back," by Niall Ferguson (The London Times)
"Everyone Is Cheating Their Way Through College," by James D. Walsh (New York Magazine)
"Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," by Nataliya Kosmyna (MIT Media Lab)
"The Diamond Age," by Neal Stephenson
"How the Enlightenment Ends," by Henry A. Kissinger

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Host direction by Athena Karkanis. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at the Globe & Mail.

Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
Apr 22, 2025 • 1h 9min

Jim Balsillie: ‘Canada’s Problem Isn’t Trump. Canada’s Problem Is Canada’

Jim Balsillie, former co-CEO of Research in Motion and a prominent Canadian business figure, discusses Canada's pressing economic issues and its fraught relationship with the U.S. He critiques the outdated economic model that has left Canada lagging in productivity and wealth, urging for a reevaluation of policies to boost innovation and self-sufficiency. Balsillie emphasizes the need for Canada to prioritize domestic growth, addressing corporate influence and advocating for enhanced transparency and civic engagement in politics.
Apr 8, 2025 • 39min

The Changing Face of Election Interference

We’re a few weeks into a federal election that is currently too close to call. And while most Canadians are wondering who our next Prime Minister will be, my guests today are preoccupied with a different question: will this election be free and fair?

In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy”. Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election.

To try and get a sense of what we’re up against, I wanted to get two different perspectives on this. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation.

Mentioned:
"Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions," by the Honourable Marie-Josée Hogue
"A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops," by the American Sunlight Project

Further Reading:
"Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate," by Victor Goury-Laffont (Politico)
"2025 Federal Election Monitoring and Response," by the Canadian Digital Media Research Network
"Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney," by Steven Chase (Globe & Mail)
"The revelations and events that led to the foreign-interference inquiry," by Steven Chase and Robert Fife (Globe & Mail)
"Foreign interference inquiry finds ‘problematic’ conduct," by The Decibel
Mar 25, 2025 • 37min

How Do You Report the News in a Post-Truth World?

If you’re having a conversation about the state of journalism, it’s bound to get a little depressing. Since 2008, more than 250 local news outlets have closed down in Canada. The U.S. has lost a third of the newspapers it had in 2005. But this is about more than a failing business model. Only 31 per cent of Americans say they trust the media. In Canada, that number is a little better – but only a little.

The problem is not just that people are losing their faith in journalism. It’s that they’re starting to place their trust in other, often more dubious, sources of information: TikTok influencers, Elon Musk’s X feed, and The Joe Rogan Experience.

The impact of this shift can be seen almost everywhere you look. Fifteen per cent of Americans believe climate change is a hoax. Thirty per cent believe the 2020 election was stolen. Ten per cent believe the Earth is flat.

A lot of this can be blamed on social media, which crippled journalism’s business model and led to a flourishing of false information online. But not all of it. People like Jay Rosen have long argued that journalists themselves are at least partly responsible for the post-truth moment we now find ourselves in.

Rosen is a professor of journalism at NYU who’s been studying, critiquing, and really shaping, the press for nearly 40 years. He joined me a couple of weeks ago at the Attention conference in Montreal to explain how we got to this place – and where we might go from here.

A note: we recorded this interview before the Canadian election was called, so we don’t touch on it here. But over the course of the next month, the integrity of our information ecosystem will face an inordinate amount of stress, and conversations like this one will be more important than ever.

Mentioned:
"Digital News Report Canada 2024 Data: An Overview," by Colette Brin, Sébastien Charlton, Rémi Palisser, Florence Marquis
"America’s News Influencers," by Galen Stocking, Luxuan Wang, Michael Lipka, Katerina Eva Matsa, Regina Widjaya, Emily Tomasik and Jacob Liedke

Further Reading:
"Challenges of Journalist Verification in the Digital Age on Society: A Thematic Review," by Melinda Baharom, Akmar Hayati Ahmad Ghazali, Abdul Muati, Zamri Ahmad
"Making Newsworthy News: The Integral Role of Creativity and Verification in the Human Information Behavior that Drives News Story Creation," by Marisela Gutierrez Lopez, Stephann Makri, Andrew MacFarlane, Colin Porlezza, Glenda Cooper, Sondess Missaoui
"The Trump Administration and the Media (2020)," by Leonard Downie Jr. for the Committee to Protect Journalists
Mar 11, 2025 • 40min

A Chinese Company Upended OpenAI. We May Be Looking at the Story All Wrong.

When the American company OpenAI released ChatGPT, it was the first time that a lot of people had ever interacted with generative AI. ChatGPT has become so popular that, for many, it’s now synonymous with artificial intelligence.

But that may be changing. Earlier this year a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, their model – DeepSeek-R1 – is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn’t just a new company; it could be an entirely different approach to building artificial intelligence.

To try and understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter to ever write a profile on OpenAI and has covered AI for MIT Technology Review, The Atlantic and The Wall Street Journal. So she’s better positioned than almost anyone to try and make sense of this seemingly monumental shift in the landscape of artificial intelligence.

Mentioned:
"The messy, secretive reality behind OpenAI’s bid to save the world," by Karen Hao

Further Reading:
"DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning," by DeepSeek-AI and others
"A Comparison of DeepSeek and Other LLMs," by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke, Gabriel Moryoussef
"Technical Report: Analyzing DeepSeek-R1’s Impact on AI Development," by Azizi Othman
