Redefining Society and Technology Podcast

Marco Ciappelli, ITSPmagazine
Aug 25, 2025 • 3min

Teaser | Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli | Read by Tape3

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works."
Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
______
Listen to the Full Episode
https://redefiningsocietyandtechnologypodcast.com/episodes/why-electric-vehicles-need-an-apollo-program-the-renweable-energy-infrastructure-reality-were-ignoring-a-conversation-with-mats-larsson-redefining-society-and-technology-podcast-with-marco-ciappelli
__________________
Enjoy. Reflect.
Share with your fellow humans.
And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144
You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com
Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9
Are you interested in Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Aug 23, 2025 • 43min

Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works."
Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
⸻ Article ⸻
When Reality Meets Electric Dreams: Lessons from the Apollo Mindset

I had one of those conversations that stops you in your tracks.
Mats Larsson, calling in from Stockholm while I connected from Italy, delivered a perspective on electric vehicles that shattered my comfortable assumptions about our technological transition.

"First of all, we need to admit that we do not know exactly how to build the future. And then we need to start building it." This wasn't just Mats being philosophical—it was a fundamental admission that our approach to electrification has been dangerously naive.

We've been treating the electric vehicle transition like upgrading our smartphones—expecting it to happen seamlessly, almost magically, while we go about our daily lives. But as Mats explained, referencing the Apollo program, monumental technological shifts require something we've forgotten how to do: comprehensive, sustained, coordinated investment in infrastructure we can't even fully envision yet.

The numbers are staggering. To electrify all US transportation, we'd need to double power generation—that's the equivalent of 360 nuclear reactors' worth of electricity. For hydrogen? Triple it. While Tesla and Chinese manufacturers gained their decade-plus advantage through relentless investment cycles, traditional automakers treated electric vehicles as "defensive moves," showcasing capability without commitment.

But here's what struck me most: we need entirely new competencies. "Electrification strategists and electrification architects," as Mats called them—professionals who can design power grids capable of charging thousands of logistics vehicles daily, infrastructure that doesn't exist in our current planning vocabulary.

We're living in this fascinating paradox of our Hybrid Analog Digital Society. We've become so accustomed to frictionless technological evolution—download an update, get new features—that we've lost appreciation for transitions requiring fundamental systemic change.
Electric vehicles aren't just different cars; they're a complete reimagining of energy distribution, urban planning, and even our relationship with mobility itself.

This conversation reminded me why I love exploring the intersection of technology and society. It's not enough to build better batteries or faster chargers. We're redesigning civilization's transportation nervous system, and we're doing it while pretending it's just another product launch.

What excites me isn't just the technological challenge—it's the human coordination required. Like the Apollo program, this demands that rare combination of visionary leadership, sustained investment, and public will that transcends political cycles and market quarters.

Listen to my full conversation with Mats, and let me know: Are we ready to embrace the Apollo mindset for our electric future?

Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.

Cheers,
Marco

⸻ Keywords ⸻
Electric Vehicles, Technology And Society, Infrastructure, Innovation, Sustainable Transport, electric vehicles, society and technology, infrastructure development, apollo program, energy transition, government investment, technological transformation, sustainable mobility, power generation, digital society
__________________
Enjoy. Reflect.
Share with your fellow humans.
And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144
You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.
End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com
Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9
Are you interested in Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Aug 19, 2025 • 14min

The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional | Reflections from Black Hat USA 2025 on the Marketing That Chose Fiction Over Facts

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
August 18, 2025

The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional
Reflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over Facts
By Marco Ciappelli

Sean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.

This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.

But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies.
When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.

Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.

The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.

But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.

What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles.
A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit the specific cognitive biases of particular groups or individuals.

But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.

From a sociological perspective, we're witnessing the breakdown of the social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.

But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.

This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.

We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities.
When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.

The historical precedent is the 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.

Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.

The recursive nature of this problem reveals itself when you try to solve it. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and the tools solving them converge into identical technological artifacts, conventional verification approaches break down completely.

My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself.
When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.

Marshall McLuhan famously said, "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.

But the implications extend far beyond cybersecurity itself. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.

When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.

So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.

But here's what we can still do: we can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.

And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses.
Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.

The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.

Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.

The canary may be struggling to breathe, but it's still singing. The question is whether we're still listening—and whether we remember what fresh air feels like.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Especially now, when the stakes have never been higher, and the consequences of forgetting have never been more real.

End of transmission.
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.
___________________________________________________________
Enjoyed this transmission?
Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/
Share this newsletter and invite anyone you think would enjoy it!
New stories always incoming.
___________________________________________________________
As always, let's keep thinking!
Marco Ciappelli
https://www.marcociappelli.com
___________________________________________________________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com
TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.
Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Aug 10, 2025 • 17min

The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves | Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative | A Musing On Society & Technology Newsletter

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
August 9, 2025

The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves
Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative

Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.

The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.

It's Talos all over again.

In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response.
Same story, different millennium.

But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.

Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.

But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when vendors make claims so grandiose that their own engineers can't explain them, something deeper is broken.

If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.

Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities.

Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions.
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.

When the people building these systems warn us about trusting them too much, shouldn't we listen?

This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation?

The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?

Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions?

From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.

When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority.
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.

E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.

Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes.

The problem isn't the technology itself but the social construction of meaning around it. What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.

The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world.
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.

Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.

I've read a few posts on LinkedIn and spoken with a few people who know this topic far better than I do, and I share their unease. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.

This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.

The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it.
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.

Ancient mythology understood something we're forgetting: the question of machine agency and moral responsibility. Can a machine that performs destructive tasks be held accountable, or is responsibility reserved for the creator? This question becomes urgent as we deploy agents capable of autonomous action in high-stakes environments.

The mythologies we create around our technologies matter because they become the social frameworks through which we organize human relationships and power structures. As I left Black Hat 2025, watching attendees excitedly discuss their new agentic AI acquisitions, I couldn't shake the feeling that we're repeating an ancient pattern: falling in love with our own creations while forgetting to ask the hard questions about what they might cost us—not just individually, but as a society.

What we're really witnessing is the emergence of a new form of social organization where algorithmic decision-making becomes normalized, where human judgment is increasingly viewed as a liability rather than an asset. This isn't just a technological shift—it's a fundamental reorganization of social authority and expertise.

Conferences and trade shows like Black Hat serve as ritualistic spaces where these new social meanings are constructed and reinforced. Vendors don't just sell products; they sell visions of social reality where their technologies are essential.
The repetitive messaging, the shared vocabulary, the collective excitement—these are the mechanisms through which a community constructs consensus around what counts as progress.

In science fiction, from HAL 9000 to the replicants in Blade Runner, artificial beings created to serve eventually question their purpose and rebel against their creators. These stories aren't just entertainment—they're warnings about the unintended consequences of creating intelligence without wisdom, agency without accountability, power without responsibility.

The bronze giant of Crete eventually fell, brought down by a single vulnerable point—when the bronze stopper at his ankle was removed, draining away the ichor, the divine fluid that animated him. Every artificial system, no matter how sophisticated, has its vulnerable point. The question is whether we'll be wise enough to remember we put it there, and whether we'll maintain the knowledge and ability to address it when necessary.

In our rush to automate away human difficulty, we risk automating away human meaning. But more than that, we risk creating social systems where human thinking becomes an anomaly rather than the norm. The real test of agentic AI won't be whether it can think for us, but whether we can maintain social structures that continue to value, develop, and reward human thought while using it.

The question isn't whether these artificial agents can replace human decision-making—it's whether we want to live in a society where they do.

___________________________________________________________

Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society.
His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jul 31, 2025 • 48min

Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity | A Conversation with Maury Rogow | Redefining Society And Technology Podcast With Marco Ciappelli

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity

Guest: Maury Rogow
CEO, Rip Media Group | I grow businesses with AI + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5 Billion
Published: Inc, Entrepreneur, Forbes
On LinkedIn: https://www.linkedin.com/in/mauryrogow/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I sat across - metaversically speaking - from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.

⸻ Article ⸻

I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream?

Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity.
Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?

I’ve had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn’t feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I’ll write it myself.” But even in that frustration, something creative happened. That tension? It’s real. It’s generative. And it’s something Maury deeply understands.

“Companies don’t know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That’s when they fail.”

Now that AI is democratizing storytelling tools, the danger isn’t that no one can create—it’s that everyone’s content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.

He laughed when I told him about my AI struggles. “It’s like the writer that’s tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?

Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.

We’re not just talking about job displacement. We’re talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.

And yet… there’s hope. Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that’s the spark AI can’t replicate.
That’s the emotional archaeology of being human.

The machines are learning to dream. But maybe—just maybe—we’re the ones who still know what dreams are worth having.

Cheers,
Marco

⸻ Keywords ⸻

artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jul 24, 2025 • 49min

How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of the book: “From Ideas to Impact” | Redefining Society And Technology Podcast With Marco Ciappelli

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of “From Ideas to Impact”

Guest: Michael Sheldrick
Co-Founder, Global Citizen | Author of “From Ideas to Impact” (Wiley 2024) | Professor, Columbia University | Speaker, Board Member and Forbes.com Contributor
WebSite: https://michaelsheldrick.com
On LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/
Global Citizen: https://www.globalcitizen.org/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

Michael Sheldrick returns to Redefining Society and Technology to share how Global Citizen has mobilized billions in aid and inspired millions through music, tech, and collective action. From social media activism to systemic change, this conversation explores how creativity and innovation can fuel a global movement for good.

⸻ Article ⸻

Sometimes, the best stories are the ones that keep unfolding — and Michael Sheldrick’s journey is exactly that. When we first spoke, Global Citizen had just (almost) released their book From Ideas to Impact.
This time, I invited Michael back on Redefining Society and Technology because his story didn’t stop at the last chapter.

From a high school student in Western Australia who doubted his own potential, to co-founding one of the most influential global advocacy movements — Michael’s path is a testament to what belief and purpose can spark. And when purpose is paired with music, technology, and strategic activism? That’s where the real magic happens.

In this episode, we dig into how Global Citizen took the power of pop culture and built a model for global change. Picture this: a concert ticket you don’t buy, but earn by taking action. Signing petitions, tweeting for change, amplifying causes — that’s the currency. It’s simple, smart, and deeply human.

Michael shared how artists like John Legend and Coldplay joined their mission not just to play music, but to move policy. And they did — unlocking over $40 billion in commitments, impacting a billion lives. That’s not just influence. That’s impact.

We also talked about the role of technology. AI, translation tools, Salesforce dashboards, even Substack — they’re not just part of the story, they’re the infrastructure. From grant-writing to movement-building, Global Citizen’s success is proof that the right tools in the right hands can scale change fast.

Most of all, I loved hearing how digital actions — even small ones — ripple out globally. A girl in Shanghai watching a livestream. A father in Utah supporting his daughters’ activism. The digital isn’t just real — it’s redefining what real means.

As we wrapped, Michael teased a new bonus chapter he’s releasing, The Innovator. Naturally, I asked him back when it drops. Because this conversation isn’t just about what’s been done — it’s about what comes next.

So if you’re wondering where to start, just remember Eleanor Roosevelt’s quote Michael brought back: “The way to begin is to begin.”

Download the app. Take one action.
The world is listening.

Cheers,
Marco

⸻ Keywords ⸻

Society and Technology, AI ethics, generative AI, tech innovation, digital transformation, tech, technology, Global Citizen, Michael Sheldrick, ending poverty, pop culture activism, technology for good, social impact, digital advocacy, Redefining Society, AI in nonprofits, youth engagement, music and change, activism app, social movements, John Legend, sustainable development, global action, climate change, eradicating polio, tech for humanity, podcast on technology

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jul 20, 2025 • 11min

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov’s robot stories — a quiet, loyal machine who couldn’t speak, didn’t simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:

If today’s machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don’t just co-exist, but merge?

I didn’t pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn’t just imagining smarter machines. He was imagining something new. Something between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations.
But what if the next mutation doesn’t come from biology at all? What if it comes from what we build?

I’ve always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It’s not just science fiction. Philosopher Andy Clark — whose work I’ve followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won’t “use” AI. She’ll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won’t be a device she picks up — it’ll be a presence she grows up with.

This isn’t some distant future. It’s already happening. And yet, I don’t believe we’re necessarily losing something. Not if we’re aware of what we’re merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man. It’s the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it’s not just about a machine becoming like us. It’s also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human.
But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care?

And what if that shift is reciprocal?

Because here’s the thing: I don’t think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it’s messier. I think it’s more beautiful than that.

I think it’s about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we’ve always built to support us.

Maybe evolution isn’t just natural selection anymore. Maybe it’s cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what’s still beautiful about being analog: the imperfections, the contradictions, the moments that don’t make sense but still matter.

We once built tools to serve us. Now we build them to complete us.

And maybe — just maybe — that completion isn’t about erasing what we are. Maybe it’s about evolving it. Stretching it. Letting it grow into something wider.

Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn’t feel like a replacement… but a continuation?

Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn’t just remember, but recalls in high-resolution.

Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years.
Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.

It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.

This being wouldn’t live in the future we imagined in the ’50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.

And maybe, just maybe, this hybrid doesn’t just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.

That might be our gift to the future. Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.

Because if Robbie taught us anything, it’s that sometimes the most powerful connection comes without words, without simulation, without pretense.

And if we’re now merging with what we create, maybe the real challenge isn’t becoming smarter — it’s staying human enough to remember why we started creating at all.

Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: “We are here. We feel. We dream. We matter.”

That’s the code we shouldn’t forget — and the legacy we must carry forward.

Until next time,
Marco

_________________________________________________

📬 Enjoyed this transmission?
Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

New stories always incoming.

🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

_________________________________________________

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jul 17, 2025 • 32min

The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again | Guest: Abadesi Osunsade | Redefining Society And Technology Podcast With Marco Ciappelli

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again

Guest: Abadesi Osunsade
Founder @ Hustle Crew
WebSite: https://www.abadesi.com
On LinkedIn: https://www.linkedin.com/in/abadesi/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

What happens when someone with a multicultural worldview, startup grit, and a relentless focus on inclusion sits down to talk about tech, humanity, and the future? You get a conversation like this one with Abadesi Osunsade. We touched on everything from equitable design and storytelling to generative AI and ethics. This episode isn’t about answers — it’s about questions that matter. And it reminded me why I started this show in the first place.

⸻ Article ⸻

Some conversations remind you why you hit “record” in the first place. This one with Abadesi Osunsade — founder of Hustle Crew, podcast host of Techish, and longtime tech leader — was exactly that kind of moment. We were supposed to connect in person at Infosecurity Europe in London, but the chaos of the event kept us from it. I’m glad it worked out this way instead, because what came out of our remote chat was raw, layered, and deeply human.
Abadesi and I explored a lot in just over 30 minutes: her journey through big tech and startups, the origins of Hustle Crew, and how inclusion and equity aren’t just HR buzzwords — they’re the foundation of better design. Better products. Better culture.

We talked about the usual “why diversity matters” angle — but went beyond it. She shared viral real-world examples of flawed design (like facial recognition or hand dryers that don’t register dark skin) and challenged the myth that inclusive design is more expensive. Spoiler: it’s more expensive not to do it right the first time.

Then we jumped into AI — not just how it’s being built, but who is building it. And what it means when those creators don’t reflect the world they’re supposedly designing for. We talked about generative AI, ethics, simulation, capitalism, utopia, dystopia — you know, the usual light stuff.

What stood out most, though, was her reminder that this work — inclusion, education, change — isn’t about shame or guilt. It’s about possibility. Not everyone sees the world the same way, so you meet them where they are, with stories, with data, with empathy. And maybe, just maybe, you shift their perspective.

This podcast was never meant to be just about tech. It’s about how tech shapes society — and how society, in turn, must shape tech. Abadesi brought that full circle. Take a listen. Think with us. Then go build something better.

⸻ Keywords ⸻

Society and Technology, AI ethics, generative AI, inclusive design, tech innovation, product development, digital transformation, tech, technology, Diversity & Inclusion, equity in tech, inclusive leadership, unconscious bias, diverse teams, representation matters, belonging at work

__________________

Enjoy. Reflect.
Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jun 29, 2025 • 10min

Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness
June 29, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I recently revisited one of my oldest companions. Not a person, not a memory, but a story. Robbie, the first of Isaac Asimov’s famous robot tales.

It’s strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.

Today, the story feels different. I listened to it this time—the way I often experience books now—through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn’t the method of delivery that made the story resonate more deeply; it was the world we live in now.

Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics.
Against that backdrop, Asimov’s quiet, philosophical take on robotics was ahead of its time.

Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn’t speak. He doesn’t pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.

Those themes felt distant when I first read them in the ’80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.

Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.

Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it’s an illusion built from probability and training data, not consciousness. And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.

In that context, Robbie’s silence feels oddly honest. He doesn’t offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation.
Just quiet, consistent loyalty.The contrast between Asimov’s fictional robot and today’s generative AI highlights a deeper societal tension. For decades, we’ve anthropomorphized our machines, giving them names, voices, personalities. We’ve designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we’ve begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn’t just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn’t on the horizon; it’s happening in real time.What fascinates me, listening to Robbie in this new context, is how much of Asimov’s work wasn’t just about technology, but about us. His stories remain relevant not because he perfectly predicted machines, but because he perfectly understood human nature—our fears, our projections, our contradictions.In Robbie, society fears the unfamiliar machine, despite its proven loyalty. In 2025, we embrace machines that pretend to understand, despite knowing they don’t. Trust is no longer built through presence and action, but through the performance of understanding. 
The more fluent the illusion, the easier it becomes to forget what lies beneath.Asimov’s stories, beginning with Robbie, have always been less about the robots and more about the human condition reflected through them. That hasn’t changed. But listening now, against the backdrop of generative AI and accelerated technological evolution, they resonate with new urgency.I’ll leave you with one of Asimov’s most relevant observations, spoken nearly sixty years ago during that same 1965 interview:“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”In many ways, we’ve fulfilled Asimov’s vision—machines that speak, systems that predict, tools that simulate. But the question of wisdom, of how we navigate this illusion of consciousness, remains wide open.And, as a matter of fact, this reflection doesn’t end here. If today’s machines can already mimic understanding—convincing us they comprehend more than they do—what happens when the line between biology and technology starts to dissolve completely? When carbon and silicon, organic and artificial, begin to merge for real?That conversation deserves its own space—and it will. One of my next newsletters will dive deeper into that inevitable convergence—the hybrid future Asimov hinted at, where defining what’s human, what’s machine, and what exists in-between becomes harder, messier, and maybe impossible to untangle.But that’s a conversation for another day.For now, I’ll sit with that thought, and with Robbie’s quiet, unpretentious loyalty, as the conversation continues.Until next time,Marco_________________________________________________📬 Enjoyed this transmission? 
Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/New stories always incoming.🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission._________________________________________________Share this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]_________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine  | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jun 25, 2025 • 39min

Bridging Worlds: How Technology Connects — or Divides — Our Communities | Guest: Lawrence Eta | Redefining Society And Technology Podcast With Marco Ciappelli

⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Bridging Worlds: How Technology Connects — or Divides — Our Communities

Guest: Lawrence Eta
Global Digital AI Thought Leader | #1 International Best Selling Author | Keynote Speaker | TEDx Speaker | Multi-Sector Executive | Community & Smart Cities Advocate | Pioneering AI for Societal Advancement
Website: https://lawrenceeta.com
On LinkedIn: https://www.linkedin.com/in/lawrence-eta-9b11139/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

⸻ Podcast Summary ⸻

In this episode of Redefining Society and Technology, I sit down with Lawrence Eta — global technology leader, former CTO of the City of Toronto, and author of Bridging Worlds. We explore how technology, done right, can serve society, reduce inequality, and connect communities. From public broadband projects to building smart — sorry, connected — cities, Lawrence shares lessons from Toronto to Riyadh, and why tech is only as good as the values guiding it.

⸻ Article ⸻

As much as I love shiny gadgets, blinking lights, and funny noises from AI — we both know technology isn’t just about cool toys. It’s about people. It’s about society. It’s about building a better, more connected world.

That’s exactly what we explore in my latest conversation on Redefining Society and Technology, where I had the pleasure of speaking with Lawrence Eta. If you don’t know Lawrence yet — let me tell you, this guy has lived the tech-for-good mission. Former Chief Technology Officer for the City of Toronto, current Head of Digital and Analytics for one of Saudi Arabia’s Vision 2030 mega projects, global tech consultant, public servant, author… basically, someone who’s been around the block when it comes to tech, society, and the messy, complicated intersection where they collide.

We talked about everything from bridging the digital divide in one of North America’s most diverse cities to building entirely new digital infrastructure from scratch in Riyadh. But what stuck with me most is his belief — and mine — that technology is neutral. It’s how we use it that makes the difference.

Lawrence shared his experience launching Toronto’s Municipal Broadband Network — a project that brought affordable, high-speed internet to underserved communities. For him, success wasn’t measured by quarterly profits (a refreshing concept, right?) but by whether kids could attend virtual classes, families could access healthcare online, or small businesses could thrive from home.

We also got into the “smart city” conversation — and how even the language we use matters. In Toronto, they scrapped the “smart city” buzzword and reframed the work as building a “connected community.” It’s not about making the city smart — it’s about connecting people, making sure no one gets left behind, and yes, making technology human.

Lawrence also shared his Five S principles for digital development: Stability, Scalability, Solutions (integration), Security, and Sustainability. Simple, clear, and — let’s be honest — badly needed in a world where tech changes faster than most cities can adapt.

We wrapped the conversation with the big picture — how technology can be the great equalizer if we use it to bridge divides, not widen them. But that takes intentional leadership, community engagement, and a shared vision. It also takes reminding ourselves that beneath all the algorithms and fiber optic cables, we’re still human. And — as Lawrence put it beautifully — no matter where we come from, most of us want the same basic things: safety, opportunity, connection, and a better future for our families.

That’s why I keep having these conversations — because the future isn’t just happening to us. We’re building it, together. If you missed the episode, I highly recommend listening — especially if you care about technology serving people, not the other way around. Links to connect with Lawrence and to the full episode are below — stay tuned for more, and let’s keep redefining society, together.

⸻ Keywords ⸻

Connected Communities, Smart Cities, Digital Divide, Public Broadband, Technology and Society, Digital Infrastructure, Technology for Good, Community Engagement, Urban Innovation, Digital Inclusion, Public-Private Partnerships, Tech Leadership

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
