Digital Disruption with Geoff Nielson

Info-Tech Research Group
Sep 8, 2025 • 1h 6min

The Lazy Generation? Is AI Killing Jobs or Critical Thinking?

Can automation and critical thinking coexist in the future of education and work?

Today on Digital Disruption, we’re joined by Bryan Walsh, Senior Editorial Director at Vox.

At Vox, Bryan leads the Future Perfect and climate teams and oversees the podcasts Unexplainable and The Gray Area. He also serves as editor of Vox’s Future Perfect section, which explores the policies, people, and ideas that could shape a better future for everyone. He is the author of End Times: A Brief Guide to the End of the World (2019), a book on existential risks including AI, pandemics, and nuclear war (though, as he notes, it’s not all that brief). Before joining Vox, Bryan spent 15 years at Time magazine as a foreign correspondent in Hong Kong and Tokyo, an environment writer, and international editor. He later served as Future Correspondent at Axios. When he’s not editing, Bryan writes Vox’s Good News newsletter and covers topics ranging from population trends and scientific progress to climate change, artificial intelligence, and, on occasion, children’s television.

Bryan sits down with Geoff to discuss how artificial intelligence is transforming the workplace and what it means for workers, students, and leaders. From the automation of entry-level jobs to the growing importance of human-centered skills, Bryan shares his perspective on the short- and long-term impact of AI on the economy and society. He explains why younger workers may be hit hardest, how education systems must adapt to preserve critical thinking, and why both companies and governments face tough choices in managing disruption. This conversation highlights why adaptability and critical thinking are becoming the most valuable skills, and what governments and organizations can do to reduce the social and economic strain of rapid automation.

In this video:
00:00 Intro
01:20 Early adoption of AI: Hype vs. reality
02:16 Automation pressures during economic downturns
03:08 The struggle for new grads entering the workforce
04:37 Is AI wiping out entry-level jobs?
05:40 Why younger workers may be hit hardest
06:28 No clear answers on AI disruption
08:19 The paradox of AI: productivity gains vs. job losses
14:30 Critical thinking, education, and the future of learning
18:00 How AI reshapes global power dynamics
31:57 The workplace of the future: skills that matter most
44:03 Regulation, politics, and the AI economy
48:19 AI, geopolitics, and risks of global instability
57:33 Who bears responsibility for minimizing disruption?
59:01 Rethinking identity beyond work
1:00:22 Journalism in the AI era: threat or amplifier?

Connect with Bryan:
Website: https://www.vox.com/authors/bryan-walsh
LinkedIn: https://www.linkedin.com/in/bryan-walsh-9881b0/
X: https://x.com/bryanrwalsh

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Sep 1, 2025 • 57min

From Dumb to Dangerous: The AI Bubble Is Worse Than Ever

Are we heading toward an AI-driven utopia, or just another tech bubble waiting to burst?

Today on Digital Disruption, we’re joined by Dr. Emily Bender and Dr. Alex Hanna.

Dr. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.

Dr. Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including in the Washington Post, Financial Times, The Atlantic, and Time.

Dr. Bender and Dr. Hanna sit down with Geoff to discuss the realities of generative AI, Big Tech power, and the hidden costs of today’s AI boom. Artificial intelligence is everywhere, but how much of the hype is real, and what’s being left out of the conversation? This discussion dives into the social and ethical impacts of AI systems and why popular AI narratives often miss the mark. Dr. Bender and Dr. Hanna share their thoughts on the biggest myths about generative AI, why we need to challenge them, and the importance of diversity, labor, and accountability in AI development. They answer questions such as where AI is really heading, how we can imagine better, more equitable futures, and what technologists should be focusing on today.
In this video:
0:00 Intro
1:45 Why language matters when we talk about “AI”
4:20 The problem with calling everything “intelligence”
7:15 How AI hype shapes public perception
10:05 Separating science from marketing spin
13:30 The myth of AGI: Why it’s a distraction
16:55 Who benefits from AI hype?
20:20 Real-world harms: Bias, surveillance & labor exploitation
24:10 How data is extracted & who pays the price
28:40 The invisible labor behind AI systems
32:15 Diversity, power, and accountability in AI
36:00 Why focusing on “doom scenarios” misses the point
39:30 AI in business and risks leaders should actually care about
43:05 What policymakers should prioritize now
47:20 The role of regulation in responsible AI
50:10 Building systems that serve people, not profit
53:15 Advice for CIOs and tech leaders
55:20 Gen AI in the workplace

Connect with Dr. Bender and Dr. Hanna:
Website: https://thecon.ai/authors/
Dr. Bender LinkedIn: https://www.linkedin.com/in/ebender/
Dr. Hanna LinkedIn: https://www.linkedin.com/in/alex-hanna-ph-d/

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Aug 25, 2025 • 58min

Siri Creator: How Apple & Google Got AI Wrong

What does the future of AI assistants look like, and what’s still missing?

Today on Digital Disruption, we’re joined by Adam Cheyer, co-founder of Siri.

Adam is an inventor, entrepreneur, engineering executive, and a pioneer in AI and human-computer interfaces. He co-founded or was a founding member of five successful startups: Siri (sold to Apple, where he led server-side engineering and AI for Siri), Change.org (the world’s largest petition platform), Viv Labs (acquired by Samsung, where he led product engineering and developer relations for Bixby), Sentient (massively distributed machine learning), and GamePlanner.AI (acquired by Airbnb, where he served as VP of AI Experience). Adam has authored more than 60 publications and 50 patents. He graduated with highest honors from Brandeis University and received the “Outstanding Masters Student” award from UCLA’s School of Engineering.

Adam sits down with Geoff to discuss the evolution of conversational AI, design principles for next-generation technology, and the future of human-machine interaction. They explore the future of AI, augmented reality, and collective intelligence. Adam shares insider stories about building Siri, working with Steve Jobs, and why today’s generative AI tools like ChatGPT are both amazing and frustrating. He also shares his predictions for the next big technological leap and how collective intelligence could transform how we solve humanity’s most difficult challenges.

In this video:
0:00 Intro
1:08 Why today’s AI both amazes and frustrates
3:50 The 3 big missing pieces in current AI systems
8:28 What Siri got right and what it missed
11:30 The “10+ Theory”: Paradigm shifts in computing
14:18 Augmented Reality as the next big breakthrough
19:43 Design lessons from building Siri
25:00 Iteration vs. first impressions: How to launch transformational products
30:20 Beginner, intermediate, and expert user experiences in AI
33:40 Will conversational AI become like “Her”?
35:45 AI maturity compared to the early internet
37:34 Magic, technology, and creating “wow” moments
43:55 What’s hype vs. what’s real in AI today
47:01 Where the next magic will happen: AR & collective intelligence
50:51 The role of DARPA, Stanford, and government funding in Siri’s success
54:49 Advice for leaders building the future of digital products
57:13 Balance the hype

Connect with Adam:
Website: http://adam.cheyer.com/site/home?page=about
LinkedIn: https://www.linkedin.com/in/adamcheyer/
Facebook: https://www.facebook.com/acheyer

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Check out other episodes of Digital Disruption: https://youtube.com/playlist?list=PLIImliNP0zfxRA1X67AhPDJmlYWcFfhDT&feature=shared
Aug 18, 2025 • 51min

Next-Gen Tech Expert: This is AI's ENDGAME

Are we ready for a future where human and machine intelligence are inseparable?

Today on Digital Disruption, we’re joined by best-selling author Scott Klososky, founding partner of the digital strategy firm Future Point of View (FPOV).

Scott’s career has been built at the intersection of technology and humanity; he is known for his visionary insights into how emerging technologies shape organizations and society. He has advised leaders across Fortune 500 companies, nonprofits, and professional associations, guiding them in integrating technology with strategic human effort. A sought-after speaker and the author of four books, including Did God Create the Internet?, Scott continues to help executives around the world prepare for the digital future.

Scott sits down with Geoff to discuss the cutting edge of human-technology integration and the emergence of the “organizational mind.” What happens when AI no longer merely supports organizations but becomes a synthetic layer of intelligence within them? Scott shares real-world examples of this transformation already taking place, reveals the ethical and existential risks AI poses, and offers practical advice for business and tech leaders navigating this new era. The conversation ranges from autonomous decision-making to AI regulation and digital governance, and Scott breaks down the real threats of digital reputational damage, AI misuse, and the growing surveillance culture we’re all a part of.
In this episode:
00:00 Intro
00:24 What is an “Organizational Mind”?
03:44 How fast is this becoming real?
05:00 Early insights from building an organizational mind
07:02 The human brain analogy: AI mirrors us
08:12 What does it mean for AI to “wake up”?
09:51 AI awakening without consciousness
11:03 Should we be worried about conscious AI?
11:59 Accidents, bad actors, and manipulation
15:42 Can we prevent these AI risks?
18:28 Regulatory control and the role of governments
20:03 Cat and mouse: Can AI hide from auditors?
23:02 The escalating complexity of AI threats
27:00 Will nations have organizational minds?
29:12 Autonomous collaboration between AI nations
35:36 Bringing AI tools together
36:31 Knowledge, agents, personas & oversight
40:11 Why early adopters will have the edge
41:00 Are we in another AI bubble?
45:01 Scott’s advice for business & tech leaders
47:12 Why use-cases alone aren’t enough

Connect with Scott:
LinkedIn: https://www.linkedin.com/in/scottklososky/
X: https://x.com/sklososky

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Aug 11, 2025 • 1h 13min

Roman Yampolskiy: How Superintelligent AI Could Destroy Us All

Is this a wake-up call for anyone who believes the dangers of AI are exaggerated?

Today on Digital Disruption, we’re joined by Roman Yampolskiy, a leading writer and thinker on AI safety and associate professor at the University of Louisville. He was recently featured on podcasts such as Joe Rogan’s PowerfulJRE.

Roman is a leading voice in the field of AI safety and security. He is the author of several influential books, including AI: Unexplainable, Unpredictable, Uncontrollable. His research focuses on the critical risks and challenges posed by advanced AI systems. A tenured professor in the Department of Computer Science and Engineering at the University of Louisville, he also serves as the founding director of its Cyber Security Lab.

Roman sits down with Geoff to discuss one of the most pressing issues of our time: the existential risks posed by AI and superintelligence. He shares his prediction that AI could lead to the extinction of humanity within the next century. They dive into the complexities of this issue, exploring the potential dangers that could arise from both AI’s malevolent use and its autonomous actions. Roman highlights the difference between AI as a tool and as a sentient agent, explaining how superintelligent AI could outsmart human efforts to control it, leading to catastrophic consequences. The conversation challenges the optimism of many in the tech world and advocates for a more cautious, thoughtful approach to AI development.

In this episode:
00:00 Intro
00:45 Dr. Yampolskiy’s prediction: AI extinction risk
02:15 Analyzing the odds of survival
04:00 Malevolent use of AI and superintelligence
06:00 Accidental vs. deliberate AI destruction
08:10 The dangers of uncontrolled AI
10:00 The role of optimism in AI development
12:00 The need for self-interest to slow down AI development
15:00 Narrow AI vs. superintelligence
18:30 Economic and job displacement due to AI
22:00 Global competition and AI arms race
25:00 AI’s role in war and suffering
30:00 Can we control AI through ethical governance?
35:00 The singularity and human extinction
40:00 Superintelligence: How close are we?
45:00 Consciousness in AI
50:00 The difficulty of programming suffering in AI
55:00 Dr. Yampolskiy’s approach to AI safety
58:00 Thoughts on AI risk

Connect with Roman:
Website: https://www.romanyampolskiy.com/
LinkedIn: https://www.linkedin.com/in/romanyam/
X: https://x.com/romanyam

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Aug 4, 2025 • 35min

Ex-OpenAI Lead Zack Kass Reveals the Societal Impact of AI

Join Zack Kass, AI futurist and former Head of Go-To-Market at OpenAI, as he navigates the complex societal landscape influenced by AI. He discusses the philosophical implications of AI, from its role in global conflicts to its potential to empower bad actors. The conversation touches on the need for ethical frameworks in AI, the importance of community and nature in a tech-driven world, and how emerging generations like Gen Z and Gen Alpha can innovate for positive change. It's a thought-provoking journey into the future of humanity and technology.
Jul 28, 2025 • 1h 12min

Pulitzer-Winning Journalist: This is Why Big Tech is Betting $300 Billion on AI

In this engaging discussion, Pulitzer Prize-winning journalist Gary Rivlin shares insights about Big Tech's unchecked power. He highlights the evolving role of AI as a political force and delves into the consequences of surveillance, misinformation, and election interference. Rivlin addresses venture capital's impact on the tech landscape and the urgent need for transparent regulations to ensure ethical responsibility. His analysis offers a compelling look at how AI could either democratize opportunities or exacerbate inequality, especially within the realms of journalism and healthcare.
Jul 21, 2025 • 1h 4min

Ex-CIA Cyber Chief: Here's What Keeps Me Up at Night

In a world of rising cyber threats, what keeps the CIA’s former head of cybersecurity up at night?

Today on Digital Disruption, we’re joined by Andy Boyd, former head of the CIA’s Center for Cyber Intelligence.

Andy was a Senior Intelligence Service officer in the Central Intelligence Agency’s Directorate of Operations (DO). His most recent assignment was Director of the CIA’s Center for Cyber Intelligence (CCI), which is responsible for intelligence collection, analysis, and operations focused on foreign cyber threats to US interests. Andy has experience leading worldwide intelligence operations and has in-depth knowledge of geopolitics, cyber operations, security practices, and risk mitigation.

Andy sits down with Geoff to discuss the future of cybersecurity in a rapidly evolving digital world. With decades of experience in cyber intelligence, Andy explains how global threats are evolving, from traditional espionage to AI-driven cyberattacks and disinformation. He dives into how intelligence agencies like the CIA assess and respond to state-sponsored cyber threats from China and Russia, and why the private sector is now a primary target. Andy breaks down how emerging technologies like generative AI are changing both offensive and defensive cyber strategies, and what this means for governments, businesses, and people. He also shares how one of the world’s leading professional services firms is navigating this new landscape, using culture, data, and innovation to stay ahead of cyber risks.
In this episode:
00:00 Intro
02:45 What the CIA’s Cyber Intelligence Center actually does
05:30 Leading transformation across a global enterprise
07:20 Evolution of cyber threats from nation-states
08:15 Building trust and transparency with business stakeholders
11:10 The critical role of data in decision-making
13:00 How the CIA detects and responds to cyber attacks
17:05 Creating a culture of innovation and adaptability
17:45 The private sector as a frontline target
20:40 How Aon is approaching talent and upskilling
23:10 Offensive cyber operations: how far should the U.S. go?
27:30 Key leadership lessons and advice for future CIOs
29:50 China’s cyber capabilities vs. Russia’s tactics
35:25 The role of intelligence in election security
40:50 Why disinformation is more dangerous than hacking
45:30 How AI is transforming cyber espionage
50:10 What keeps Andy Boyd up at night
54:40 The importance of public awareness and resilience

Connect with Andy:
LinkedIn: https://www.linkedin.com/in/andrew-g-boyd-12194673/

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Jul 14, 2025 • 1h 2min

Unlocking the Brain: Tan Le on Neurotech, AI & Human Potential

What if you could control technology using only your thoughts?

Today on Digital Disruption, we’re joined by an expert in the space of brain-computer interfaces (BCIs), Tan Le.

Tan is the founder and CEO of EMOTIV, a Silicon Valley-based company pioneering EEG-based BCI technology. Her work centers on non-invasive “brainwear” that enables direct interaction between the human brain and computers. Tan is an advocate for democratizing neurotechnology to empower individuals, researchers, and organizations to drive innovation. In February 2020, she published her first book, The NeuroGeneration: The New Era of Brain Enhancement Revolutionizing the Way We Think, Work and Heal.

Tan sits down with Geoff to talk about how her company is making it possible to connect your brain directly to digital systems: no hype, just science. From decoding mental commands to enhancing human cognition, they dive into the ethical challenges of reading brain data, what it really means to give technology access to your mind, and why non-invasive headsets are reshaping human-computer interaction.

In this episode:
00:00 Intro
03:00 Tan Le’s background
06:00 What is a Brain-Computer Interface (BCI)?
09:00 The current state of BCI in 2025
12:00 Non-invasive vs. implantable tech
15:00 How BCIs read brain signals
18:00 Real-world applications: Healthcare and beyond
21:00 Consumer use cases and accessibility
24:00 The role of AI in brain signal interpretation
27:00 Ethics of brain data and consent
30:00 Mental wellness and performance insights
33:00 Government and regulatory perspectives
36:00 EMOTIV’s vision and tech stack
39:00 Human enhancement and neuroplasticity
42:00 Risks and misconceptions around BCI
45:00 Collaborations and research partnerships
48:00 Global adoption trends
51:00 Tan Le’s advice to future innovators
54:00 Predictions for the next 10 years

Connect with Tan:
Website: https://www.emotiv.com/
LinkedIn: https://www.linkedin.com/in/tanle/
X: https://x.com/TanTTLe

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Jul 7, 2025 • 45min

Taking Back Your Data: Why the Next Web MUST Protect Digital Freedom

What if your data worked for you, and not for the platforms controlling it?

Today on Digital Disruption, we’re joined by John Bruce, CEO and co-founder of Inrupt.

With a background as both a founder and an executive at global tech firms, John is uniquely qualified to help engineer the next phase of the web alongside his co-founder, Sir Tim Berners-Lee. He brings to bear decades of successful business leadership and experience creating new markets around innovative software. Before partnering with Tim, he was co-founder and CEO of Resilient, now an IBM company, which developed a new approach to cybersecurity. Through Resilient and four other successful startups, John has experienced first-hand the strategic challenges that the current structure of the web creates for users, developers, and organizations around the world.

John sits down with Geoff to talk about a future where individuals, not platforms, own their data. John shares how AI, consent-driven data sharing, and a decentralized digital wallet called Charlie could fundamentally reshape how we interact with technology, institutions, and each other. He explains why we must reclaim personal data from tech giants, what “agentic wallets” are, and how they work.

In this video:
0:00 Intro
1:25 Rebuilding the web
3:30 From Tim Berners-Lee to today
5:10 Data ownership vs. data surveillance
7:00 Moving from platforms to people
9:15 What is an agentic AI wallet?
11:00 Why consent must be baked into AI and data flows
13:45 Use cases in healthcare, government & enterprise
16:10 “Decentralized” doesn’t mean disorganized
18:30 What leaders get wrong about data control
20:45 Enterprise integration
23:00 The ROI of giving users control of their own data
25:30 Why this moment feels like the early days of the web
27:00 What’s next for Inrupt, Solid, and the internet itself
29:00 How we rebuild digital trust
31:00 Inrupt’s vision beyond 2030
34:00 Partnering with institutions to scale Solid
37:00 Global digital identity and governance challenges
40:00 Building public trust in data ecosystems
43:00 A non-linear view of it all

Connect with John:
Website: https://www.inrupt.com/about
LinkedIn: https://www.linkedin.com/in/johnwbruce/

Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
