Conversations on Strategy Podcast

U.S. Army War College Public Affairs
Jan 23, 2024

Conversations on Strategy Podcast – Ep 31 – COL Richard D. Butler, Josh Arostegui, and Dr. Luke P. Bellocchi – On “The Strategic Importance of Taiwan to the United States and Its Allies”

Taiwan has become increasingly important to the United States and its allies as the Russia-Ukraine War has united democracies against authoritarian expansionism and has developed an international democracy-versus-authoritarianism dynamic in global affairs. Part one of this article clearly outlines the geopolitical, economic, and soft-power reasons why Taiwan is strategically important. Part two reviews the development of US and allied policy statements on Taiwan and provides policymakers and military strategists with incremental but realistic recommendations for understanding the current dynamic of the region and fashioning responses to deter further authoritarian aggression.
E-mail usarmy.carlisle.awc.mbx.parameters@army.mil to give feedback on this podcast or the genesis article.
Keywords: Taiwan, China, Russia, Ukraine, National Security Strategy, Biden
Read the transcript: https://media.defense.gov/2024/Jan/23/2003379988/-1/-1/0/20240122COS-PODCAST-TRANSCRIPT-BELLOCCHI_BUTLER_AROSTGUI.PDF
Jan 16, 2024

Conversations on Strategy Podcast – Ep 30 – Dr. Jared M. McKinney and Dr. Peter Harris – Deterrence Gap: Avoiding War in the Taiwan Strait

The likelihood China will attack Taiwan in the next decade is high and will remain so unless Taipei and Washington take urgent steps to restore deterrence across the Taiwan Strait. This podcast introduces the concept of interlocking deterrents, explains why deterrents lose their potency with the passage of time, and provides concrete recommendations for how Taiwan, the United States, and other regional powers can develop multiple, interlocking deterrents that will ensure Taiwanese security in the short and longer terms. By joining deterrence theory with an empirical analysis of Taiwanese, Chinese, and US policies, the podcast provides US military and policy practitioners with new insights into ways to deter the People’s Republic of China from invading Taiwan without relying exclusively on the threat of great-power war.
E-mail usarmy.carlisle.awc.mbx.parameters@army.mil to give feedback on this podcast or the genesis article.
Keywords: Taiwan, China, deterrence, cross-strait relations, Indo-Pacific, East Asia, US foreign policy, international security
Download the transcript: https://media.defense.gov/2024/Jan/16/2003376954/-1/-1/0/COS-PODCAST-TRANSCRIPT-MCKINNEY_HARRIS.PDF
Dec 21, 2023

Conversations on Strategy Podcast – Ep 29 – Conrad C. Crane and Brian McAllister Linn – On Today's Recruiting Crisis

Dr. Conrad C. Crane and Dr. Brian McAllister Linn address the Army’s recruiting crisis—especially for combat arms. Talent management was identified as an issue for the Army in a 1907 General Staff report and continues to be a challenge. The results of the President’s Commission on an All-Volunteer Force in 1970 may have complicated matters further.
Read Dr. Crane’s article: https://www.realcleardefense.com/articles/2023/01/28/does_the_all-volunteer_force_have_an_expiration_date_878344.html
Read Dr. Linn’s article: https://press.armywarcollege.edu/parameters/vol53/iss3/3/
E-mail usarmy.carlisle.awc.mbx.parameters@army.mil to give feedback on this podcast or the genesis article.
Keywords: US Army history, personnel policy, talent management, Army People Strategy, all-volunteer force
Dec 5, 2023

Conversations on Strategy Podcast – Ep 28 – Mitchell G. Klingenberg – Americans and the Dragon: Lessons in Coalition Warfighting from the Boxer Uprising

Drawing from archival materials at the US Army Heritage and Education Center and the United States Military Academy at West Point, numerous published primary sources, and a range of secondary sources, this monograph offers an overview of the China Relief Expedition from June 1900 to the moment of liberation in August. Its considerations range from the geopolitical to the strategic and down to the tactical levels of war. US forces partnered with the combined naval and land forces of multiple nations, constituting the first contingency, expeditionary, and multinational coalition in American military history. In the face of numerous obstacles posed by enemy forces, the environment, and frictions within the informal coalition itself, American forces succeeded in liberating their besieged legation. While the character of war has evolved since 1900, students of war should see past the disparities that appear to separate the China Relief Expedition from the historical present.
Read the monograph: https://press.armywarcollege.edu/monographs/961/
E-mail usarmy.carlisle.awc.mbx.parameters@army.mil to give feedback on the monograph or the podcast.
Keywords: Boxer Uprising, China Relief Expedition, Taku Forts, Empress Dowager Cixi, Qing dynasty
Nov 22, 2023

Conversations on Strategy Podcast – Ep 27 – COL Eric Hartunian – On the Annual Estimate of the Strategic Security Environment

The Annual Estimate of the Strategic Security Environment serves as a guide for academics and practitioners in the defense community on the current challenges and opportunities in the strategic environment. This year’s publication outlines key strategic issues across four broad themes: Regional Challenges and Opportunities, Domestic Challenges, Institutional Challenges, and Domains Impacting US Strategic Advantage. These themes represent a wide range of topics affecting national security and provide a global assessment of the strategic environment to help focus the defense community’s research and publication. Strategic competition with the People’s Republic of China and the implications of Russia’s invasion of Ukraine remain dominant challenges to US national security interests across the globe. However, the evolving security environment also presents new and unconventional threats, such as cyberattacks, terrorism, transnational crime, and the implications of rapid technological advancements in fields such as artificial intelligence. At the same time, the United States faces domestic and institutional challenges in the form of recruiting and retention shortfalls in the all-volunteer force, the prospect of contested logistics in large-scale combat operations, and the health of the US Defense Industrial Base. Furthermore, rapidly evolving security landscapes in the Arctic region and the space domain pose unique challenges to the Army’s strategic advantage.
Read the 2023 Annual Estimate of the Strategic Security Environment: https://press.armywarcollege.edu/monographs/962/
Keywords: Asia, Indo-Pacific, Europe, Middle East, North Africa
Full transcript: https://media.defense.gov/2023/Nov/22/2003346391/-1/-1/0/COS-27-TRANSCRIPT-HARTUNIAN.PDF
Nov 20, 2023

Conversations on Strategy Podcast – Ep 26 – Christopher J. Bolan, Jerad I. Harper, and Joel R. Hillison – Revisiting “Diverging Interests: US Strategy in the Middle East”

The October 2023 attacks on Israel by Hamas are only the latest in a series of global crises with implications for the regional order in the Middle East. These changes and the diverging interests of actors in the region have implications for US strategy and provide an opportunity to rethink key US relationships there.
Read the original article here: https://press.armywarcollege.edu/parameters/vol50/iss4/10/
Download the full episode transcript here: https://media.defense.gov/2023/Nov/21/2003345028/-1/-1/0/COS-26-TRANSCRIPT-BOLAN-HARPER-HILLISON.PDF
Keywords: Israel, Hamas, Middle East, Iran, Turkey
Sep 28, 2023

Conversations on Strategy Podcast – Ep 25 – Dr. Allison Abbe and Dr. Claire Yorke – On Strategic Empathy

This podcast explores the benefits of strategic empathy and its value as a leadership tool.
Read the issue here: https://press.armywarcollege.edu/parameters/vol53/iss2/9/
Download the transcript: https://media.defense.gov/2023/Oct/10/2003316776/-1/-1/0/COS-PODCAST-TRANSCRIPT-ABBE-YORKE-FINAL.PDF
Keywords: strategic empathy, perspective taking, H. R. McMaster, Ralph K. White, Zachary Shore

CONVERSATIONS ON STRATEGY PODCAST – EPISODE TRANSCRIPT
Dr. Allison Abbe and Dr. Claire Yorke – On Strategic Empathy

Stephanie Crider (Host)
You're listening to Conversations on Strategy (http://ssi.armywarcollege.edu/cos). The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.
Today I'm talking with Dr. Allison Abbe and Dr. Claire Yorke. Abbe is a professor of organizational studies at the US Army War College and author of “Understanding the Adversary: Strategic Empathy and Perspective Taking in National Security” (https://press.armywarcollege.edu/parameters/vol53/iss2/9/), which was published in the Summer 2023 issue of Parameters. Yorke is an author, academic, researcher, and advisor. Her expertise is in the role of empathy and emotions in international affairs, politics, leadership, and society.
Welcome to Conversations on Strategy.
What is strategic empathy, and what is it not?

Claire Yorke
So, strategic empathy emphasizes the importance of understanding the other side within strategic decision making, and this might be an adversary. And often a lot of the scholarship focuses on adversaries, but it can also be allies. It can be societies. It's a way of having a deeper awareness of how different people view the world and how that will have a bearing on the calculations and decisions that you make, on the implications of strategy when it touches ground, when it reaches the contact point with the natural situation in a context. And it encourages us to think more about the context that different people come from, their experiences, their histories, their socioeconomic backgrounds, their perspectives of the world, their cultural context, and also the meanings that they give to different ideas, to different values, to different elements of a situation.

Allison Abbe
As Claire describes, it's taking the perspective of another party, looking at the situation in their shoes, as they would say, “walking in their shoes.” But it is important to distinguish this from just care and compassion. I think sometimes when people hear the term empathy, they automatically think that it just means compassion for someone else and sympathy for another person. And it's much more than that. It's much more of those cognitive elements of taking their perspective and understanding their context, as Claire said.

Yorke
It's that idea of having a broader range of emotions and being aware of them. It's not a weakness to have strategic empathy. It's an essential element of dealing with other human beings.

Abbe
And I think that's really one of the keys is that it's purposeful. It's not just thinking about someone else's point of view for its own sake.
It's really using that perspective and using their lenses to understand and better make decisions and to better include them in your calculations and your decision making.HostWhat can strategic empathy add to our strategic thinking?YorkeIt can add a number of different elements. Firstly, I think it's a critical asset in creating more strategic humility and understanding that there is not just one world view that dominates and that is integrally, right, intrinsically right. Cultivating that around ourself, there are multiple different competing realities right now as people see them. How do we build that into our approach and our sense of self and our identity? And so, it compels self-reflection in how we make decisions, in how we think about the world and understand, as well, how we are experienced by others. How do our words, how do our actions, how our behaviors have implications and often unintended implications? Maybe our actions are intended to be good, but they're operating against a background where they won't be interpreted in that way. And so, it gives that checking point. It gives a sense of reflection and humility and greater consideration to how we interact.AbbeAnd I think that's exactly right. You know, the military perspective, where I'm coming from and teaching strategic leaders as officers that will be working for combat and commands and in other areas, it's critical to military planning that they consider the impact of their actions and how that will be perceived. Those actions are not going taken... as they plan, sometimes there are unexpected and unintended consequences that the other party, whether that's the local population or the adversary, will not necessarily receive those actions the way they might be intended, and there is huge room for miscalculation when you're talking about understanding the adversary’s perspective. You can go awry in many ways when you're trying to understand the adversary perspective, if you're not taking those different lenses into account.YorkeIt also can contribute an awareness of the need to ask more people to think about who's missing from the table, who's missing from our consideration. Who have we not understood? Who have we not engaged with? And so, especially from that military perspective, how do militaries build in greater engagement with different communities in a respectful, engaged way that doesn't undermine them, that is engaging with them properly in a very considered way to make sure that you have taken in various sources of information, various perspectives, into account? And that should give you a more holistic, a richer picture of the environment you're operating, of the ways in which power may look very different on the ground to how we conceptualize it in theory or in abstract or from remote capitals, and so, fostering greater nuance and complexity within the process.HostWhat limitations does strategic empathy have?AbbeOne potential limitation is the way that empathy in general has often been discussed. It's in terms of taking on the point of view of someone else. And there is some risk in that if you don't then shift back to your own perspective, your own party’s perspective. That's sometimes talked about as going native. 
I think that's sort of an extreme example of what we're talking about here with the risk of empathy, but the important thing, and what I've talked about in the paper, is that it's perspective taking rather than the broader empathy concept where you're taking on the perspective of another party but then moving back to your own. And so being able to shift among those perspectives is really critical, and I think empathy can be misapplied when it's just a matter of taking on the perspective of the other party and adopting that as your own, then you're not making those distinctions and being able to shift in and out of the lenses that will be important to decision making.YorkeOne of the limitations can be that, especially when you're dealing with a military environment where decisions are often having to be made very quickly in very intense situations, you cannot be constantly processing various different perspectives. Someone at some point has to make a call, and it may be that they get that call wrong. But that is why this emphasis on strategic empathy becomes so important because, in theory, you should have all your information—as much information as possible at the outset—when you're designing the approach, when you're thinking about the strategic calculations involved, so that then when you get to the very intense critical moments, you feel equipped and able to then process what you have to do with as much information and insight as possible.AbbeEmpathy definitely is demanding on your cognitive resources, and you can't always apply it when you're in a time limited or very stressful environment. When those conditions are in place, you really fall back to defaulting to your own perspective. And so there may not be time, unless you've taken those perspectives into account in an earlier planning phase, so that when conditions change, you're ready to apply those perspectives. So, it can be limiting in that you don't always have time to engage in that cognitively demanding process when you need to.YorkeAnd this is a great point, as well, because it emphasizes that one of the limitations of empathy is exactly that it can cause burn out. Having more conversations among practitioners, among policymakers, among military officials, and people involved in the military means that you develop a greater emotional literacy around what it means, what it costs, what it requires. So, then you are more aware of when you reach that overload, when maybe the people who are serving can't keep on trying to understand other perspectives or what are the signs when maybe you've reached that burnout point. And so, being aware of that, that that can be too much. And creating greater literacy is one of the key elements we have to be working on and increasing within strategic thinking.HostWhat is the state of scholarship on strategic empathy now?AbbeI think that scholarship on empathy has really waxed and waned and gone through cycles. Ralph White wrote about realistic empathy decades ago, now. This is essentially a very similar concept of strategic empathy, but then it lost traction in the 90s and disappeared for a while. In the US militar, there was some discussion about empathy and perspective taking early in the Iraq and Afghanistan engagements, but there was a loss of interest, I think, as we've moved to large-scale combat operations and focusing on that. So, I think that it’s revived again, potentially, at least, in the history community from H. R. 
McMaster and Zachary Shore’s work, but we'll see.It seems like it gets attention for some period and then drops off again and so has not been making huge advances. Although, if you look within specific disciplines, like in psychology, I think there has been some incremental progress in understanding at least how to measure empathy and perspective taking and when in the life-cycle development you start to see those skills emerge. So, there has been progress there.YorkeI share your view. It definitely goes through cycles. And Ralph K. White was one of the earliest people to be talking about this in this space, in the context of conflicts such as Vietnam, and also Iraq. I find it really interesting to see how it's being talked about, especially in the context of things like strategic communications. How is empathy apart of engaging with diverse actors and different audiences within a very complex, constantly moving communications environment?So, we do see some there, as Allison said. In psychology, it's really gaining traction, but often people don't like to use the word empathy because of the connotations that it has as being maybe something a little bit soft, of being all about feelings and compassion, and being a sign of weakness to even countenance another perspective and another point of view. And that's actually exactly what you've got to do, whether you're in the military, whether you're designing foreign policy or domestic policy, and it can be quite an academic intellectual exercise. You are not maybe caring for all the people you're trying to understand in the same way. Some people may be actively hostile to you and you to them, but that is a process we have to get better at talking about. And I think that's in the scholarship I'm seeing and how people use different words, which have a very similar connotation. Often, we can find it hiding, but not using the same, maybe, reference points and definitions.AbbeThat's a question I have for you, Claire. Do you think it's important that we use the term empathy? Because I did find in some of my own experiences, at the 2008 time frame, that the term perspective taking or frame switching tended to be more well received than talking about getting soldiers and officers to learn empathy so that they could be more cross culturally competent. What do you think?YorkeI am of two minds, partly because what we need is more that people practice and are conscious of the importance of this thing that we're calling empathy. Whether you call it perspective taking or understanding an adversary, it's critical that that is something we start to be better at and do more of. I personally have a preference for using the term empathy precisely because I want to try and challenge this dichotomy we have between emotions and reason and the idea that, somehow, reason is the right way to do things, and emotions are irrational and should be dismissed from our calculation.There's so much fascinating research in neuroscience and psychology and other disciplines that show us that reason and motion are intricately interlinked, and they're entwined in how we make judgments. We can't make judgments without understanding how different people feel about situations, how emotions move people, how emotions give meaning to what we value—to what we are willing to risk—by using empathy.For me, it's a way of saying let's get better about talking about feelings, not as something soft and irrational but as sources of judgment and insight. 
And that, then, can help us have a far broader picture from which to make decision making that is reasoned but informed by judicious understanding of emotion, emotion intelligence, effectively.HostWhat is still missing when it comes to strategic empathy?AbbeOne thing that's missing, as we referenced in talking about the scholarship on strategic empathy, is a consistent focus on including empathy in professional development and understanding it as another skill set where we talk about critical thinking, systems thinking—empathy should be part of that. And I think we have some attention to it now, again, but (we should) be concerned of it dropping off again and disappearing from conversations in professional military education in particular. I think our understanding of how it improves decision making and planning could be better defined. I don't think there's been as much progress there in really showing what difference it makes in planning and decision making to include empathy and understanding other perspectives as compared with when decision makers omit those perspectives.YorkeAs a non-American, I find a lot of the scholarship is quite American focused, as a lot of work by people like Zachary Shore and H. R. McMaster has already been mentioned. And I think there's a huge range of work to be explored that says, what does strategic empathy mean from very different perspectives? Do Europeans do it in the same way? Do the Chinese do strategic empathy in the same way? What about in South Africa or Australia or Nigeria? How do these different countries maybe conceptualize empathy differently? What does that give us as an insight into strategic thinking, into how different people approach adversaries and allies alike, especially when we look at the threat and risk landscape right now and we're dealing with a range of different challenges in the future. Not only conventional military conflict, but also technology. There's going to be challenges with resilience and climate change, among other things.How can we extend our strategic empathy to audiences and people who maybe have not traditionally been included? So, when we talk about climate change, how do we use strategic empathy as a beneficial way to do more effective diplomacy that brings in Pacific Island nations, small island states? How do we use it as a way to have a greater, more constructive dialogue that takes accounts of different people's needs and interests and values and priorities? And I think that's something that we really need to do.AbbeThe recent literature on strategic empathy has really focused on understanding the adversary and, to Claire's point, there needs to be more focus on understanding a broader range of perspectives and actors and, in particular, for the US, and our allies and partners. It's understanding the perspectives within the Allied nations and partner nations to better improve interoperability. We talk about technical, procedural, and human interoperability, and I think empathy can really add to understanding human interoperability at different levels; cultural, interpersonal empathy could be really strong component of that.HostAllison. Claire. Thanks for making time to speak with me today. I really enjoyed it.AbbeThank you, Stephanie. Thank you, Claire.YorkeThank you. It's a real pleasure to join you both.HostIf you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.Additional Resources:NSS Week noontime lecture: Dr. 
Allison Abbe discusses Strategic Empathy (https://www.youtube.com/watch?v=8HFCYDTO4F4). Articles by Claire Yorke on empathy and strategy: “Is Empathy a Strategic Imperative? A Review Essay” (https://www.tandfonline.com/doi/full/10.1080/01402390.2022.2152800) and “The Significance and Limitations of Empathy in Strategic Communications” (https://stratcomcoe.org/publications/the-significance-and-limitations-of-empathy-in-strategic-communications/191/).
Sep 28, 2023

Conversations on Strategy Podcast – Ep 24 – Jonathan Klug and Mick Ryan – On White Sun War: The Campaign for Taiwan

In this podcast, US Army Col. Jon Klug and retired Australian Major General Mick Ryan discuss Ryan’s most recent book, White Sun War: The Campaign for Taiwan, and its potential implications for future warfare. In the summer of 1986, Tom Clancy’s novel Red Storm Rising debuted at number one on the New York Times bestseller list as it brought to life a nonnuclear World War III. Similarly, Ryan’s new novel White Sun War offers a realistic and gripping “historical” account of a war for Taiwan set in 2028. Where Clancy had the Warsaw Pact and NATO, Ryan pits communist China against a coalition of Taiwan, the United States, Australia, Japan, and others. A longtime strategic commentator with 35 years of real-world experience, Ryan grounds his vision of a near-future war firmly in reality, and he deftly uses fiction to explore the potential challenges of warfare and leadership in 2028.
The book, White Sun War: The Campaign for Taiwan, is available here: https://www.casematepublishers.com/9781636242507/white-sun-war/
Read the transcript: https://media.defense.gov/2023/Oct/18/2003322446/-1/-1/0/CoS-Transcript-Ep24-Klug-Ryan-White-Sun-War-Campaign-for-Taiwan.PDF
Keywords: China, Taiwan, NATO, Australia, Japan, Warsaw Pact
Aug 15, 2023

Conversations on Strategy Podcast – Ep 23 – Anthony Pfaff and Adam Henschke – The Ethics of Trusting AI

Based on the monograph Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge and the Parameters article “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” this episode focuses on the ethics of trusting AI. Who is responsible when something goes wrong? When is it okay for AI to make command decisions? How can humans and machines work together to form more effective teams? These questions and more are explored in this podcast.
Read the articles:
Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge: https://press.armywarcollege.edu/monographs/959/
Parameters article “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming”: https://press.armywarcollege.edu/parameters/vol53/iss1/14
Download the full episode transcript here: https://media.defense.gov/2023/Nov/15/2003341255/-1/-1/0/COS-23-PODCAST-TRANSCRIPT-PFAFF_HENSCHKE_TRUSTING-AI.PDF
Keywords: artificial intelligence (AI), manned-unmanned teaming, ethical AI, civil-military relations, autonomous weapons systems
Jun 28, 2023

Conversations on Strategy Podcast – Ep 22 – Paul Scharre and Robert J. Sparrow – AI: Centaurs Versus Minotaurs—Who Is in Charge?

Who is in charge when it comes to AI? People or machines? In this episode, Paul Scharre, author of the books Army of None: Autonomous Weapons and the Future of War and the award-winning Four Battlegrounds: Power in the Age of Artificial Intelligence, and Robert Sparrow, coauthor with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming” that was featured in the Spring 2023 issue of Parameters, discuss AI and its future military implications.Read the article: https://press.armywarcollege.edu/parameters/vol53/iss1/14/Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relationsEpisode transcript: AI: Centaurs Versus Minotaurs: Who Is in Charge?Stephanie Crider (Host)The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.You’re listening to Conversations on Strategy.I’m talking with Paul Scharre and Professor Rob Sparrow today. Scharre is the author of Army of None: Autonomous Weapons in the Future of War, and Four Battlegrounds: Power in the Age of Artificial Intelligence. He’s the vice president and director of studies at the Center for a New American Security.Sparrow is co-author with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” which was featured in the Spring 2023 issue of Parameters. Sparrow is a professor in the philosophy program at Monash University, Australia, where he works on ethical issues raised by new technologies.Welcome to Conversations on Strategy. Thanks for being here today.Paul ScharreAbsolutely. Thank you.HostPaul, you talk about centaur warfighting in your work. Rob and Adam re-envisioned that model in their article. What exactly is centaur warfighting?ScharreWell, thanks for asking, and I’m very excited to join this conversation with you and with Rob on this topic. The idea really is that as we see increased capabilities in artificial intelligence and autonomous systems that rather than thinking about machines operating on their own that we should be thinking about humans and machines as part of a joint cognitive system working together. And the metaphor here is the idea of a centaur, the mythical creature of a 1/2 human 1/2 horse, with the human on top—the head and the torso of a human and then the body of a horse. You know, there’s, like, a helpful metaphor to think about combining humans and machines working to solve problems using the best of both human and machine intelligence. That’s the goal.HostRob, you see AI being used differently. What’s your perspective on this topic?Robert SparrowSo, I think it’s absolutely right to be talking about human-machine or manned-unmanned teaming. I do think that we will see teams of artificial intelligence as robots and human beings working and fighting together in the future. I’m less confident that the human being will always be in charge. And I think the image of the ccentaur is kind of reassuring to people working in the military because it says, “Look, you’ll get to do the things that you love and think are most important. 
You’ll get to be in charge, and you’ll get the robots to do the grunt work.” And, actually, when we look at how human beings and machines collaborate in civilian life, we actually often find it’s the other way around.(It) turns out that machines are quite good at planning and calculating and cognitive skills. They’re very weak at interactions with the physical world. Nowadays, if you, say, ask ChatGPT to write you a set of orders to deploy troops it can probably do a passable job at that just by cannibalizing existing texts online. But if you want a machine to go over there and empty that wastepaper basket, the robot simply can’t do it. So, I think the future of manned-unmanned teaming might actually be computers, with AI systems issuing orders. Or maybe advice that has the moral force of orders are two teams of human beings.Adam and I have proffered the image of the Minotaur, which was the mythical creature with the head of a bull and the body of a man as an alternative to the centaur, when we’re thinking about the future of manned-unmanned teaming.HostPaul, do you care to respond?ScharreI think it’s a great paper and I would encourage people to check it out, “Minotaurs, Not Centaurs.” And it’s a really compelling image. Maybe the humans aren’t on top. Maybe the humans are on the bottom, and we have this other creature that’s making the decisions, and we’re just the body taking the actions. (It’s) kind of creepy, the idea of maybe we’re headed towards this role of minotaurs instead, and we’re just doing the bidding of the machines.You know, a few years ago, I think a lot of people envisioned the type of tasks that AI would be offloading, would be low-skill tasks, particularly for physical labor. So, a lot of the concern was like autonomousness was gonna put truck drivers out of work. It turns out, maneuvering things in the physical world is really hard for machines. And, in fact, we’ve seen with progress in large language models in just the last few years, ChatGPT or the newest version (GPT-4), that they’re quite good at lower-level skills of cognitive labor so that they can do a lot of the tasks that maybe an intern might do in a white-collar job environment, and they’re passable. And as he’s pointing out, ask a robot to throw out a trash basket for you or to make a pot of coffee . . . it’s not any good at doing that. But if you said, “Hey, write a short essay about human-machine teaming in the military environment,” it’s not that bad. And that’s pretty wild.And I think sometimes these models have been criticized . . . people say, “Well, they’re just sort of like shuffling words around.” It’s not. It’s doing more than that. Some of the outputs are just garbage, but (with) some of them, it’s clear that the model does understand, to some extent. It’s always dicey using anthropomorphic terms, but (it can) understand the prompts that you’re giving it, what you’re asking it to do, and can generate output that’s useful. And sometimes it’s vague, but so are people sometimes. And I think that this vision of hey, are we headed towards this world of a minotaur kind of teaming environment is a good concern to raise because presumably that’ not what we want.So then how do we ensure that humans are in charge of the kinds of decisions that we want humans to be responsible for? How do we be intentional about using AI and autonomy, particularly in the military environment?SparrowI would resist the implication that it’s only really ChatGPT that we should be looking at. 
I mean, in some ways it’s the history of chess or gaming where we should be looking to the fact that machines outperform all, or at least most, human beings. And the question is if you could develop a warfighting machine for command functions then that wouldn’t necessarily have to be able to write nice sentences. The question is when it comes to some of the functions of battlefield command, whether or not machines can outperform human beings in that role. There’s kind of some applications like threat assessment in aerial warfare, for instance, where the tempo of battle is sufficiently high and there’s lots of things whizzing around in the sky, and we’re already at a point where human beings are relying on machines to at least prioritize tasks for them. And I think, increasingly, it will be a brave human being that overrides the machine and says, “The machine has got this wrong.”We don’t need to be looking at explicit hierarchies or acknowledged hierarchies either. We need to look at how these systems operate in practice. And because of what’s called automation bias, which is the tendency of human beings to defer to machines once their performance reaches a certain point, yeah, I think we’re looking at a future where machines may be effectively carrying out key cognitive tasks. I’m inclined to agree with Paul that there are some things that it is hard to imagine machines doing well.I’m a little bit less confident in my ability to imagine what machines can do well in the future. If you’d asked me two years ago, five years ago, “Will AIs be able to write good philosophy essays?” I would have said, “That’s 30 years off.”Now I can type all my essay questions into ChatGPT and this thing performs better than many of my students. You know, I’m a little bit less confident that we know what the future looks like here, but I take it that the fundamental technology of these generative AI and adversarial neural networks is actually going to be pretty effective when it comes to at least wargaming. And, actually, the issue for command in the future is how well can we feed machines the data that they need to train themselves up in simulation and apply it to the real world?I worry about how we’ll know these things are reliable enough to move forward, but there’s some pretty powerful dynamics in this area where people may effectively be forced to adopt AI command in response to either what the enemy is doing or what they think the enemy is doing. So, not just the latest technology, there’s a whole set of technologies here, and a whole set of dynamics that I think should undercut our confidence that human beings will always be in charge.HostCan you envision a scenario in which centaur and minotaur warfighting might both have a role, or even work in tandem?SparrowI don’t think it’s all going to be centaurs, but I don’t think it will all be minotaurs. And in some ways, this is a matter of the scale of analysis. If you think about something like Uber, you know, people have this vision of the future of robot taxis. I would get into the robot taxi. And as the human being, I would be in charge of what the machine does. In fact, what we have now is human beings being told by an algorithm where to drive.Even if I were getting into a robot taxi and telling it where to go, for the moment, there’d be a human being in charge of the robot taxi company. And I think at some level, human beings will remain in charge of war as much as human beings are ever in charge of world historical events. 
But I think for lots of people who are fighting in the future, it will feel as though they’re being ordered around by machines.People will be receiving feeds of various sorts. It will be a very alienating experience, and I think in some contexts they genuinely will be effectively being ordered around by an AI. Interesting things to think about here is how even an autonomous weapons system, which is something that Paul and I have both been concerned about, actually relies on a whole lot of human beings. And so at one level, you hope that a human being is setting the parameters of operations of the autonomous weapons system, but at another level, everyone is just following this thing around and serving its needs. You know, it returns to base and human beings, refuel and maintain it and rearm it.Everyone has to respond to what it does in combat. Even with something like a purportedly autonomous weapons system, zoom out a bit, and what you see as a human is a machine making a core set of warfighting decisions and a whole lot of human beings scurrying around serving the machine. Zoom out more, and you hope that there’s a human being in charge. Now, it depends a little bit on how good real-world wargaming by machines gets, and that’s not something I have a vast amount of access to, how effective AI is in war gaming. Paul may well know more about that. But at that level, if you really had a general officer that was a machine, or even staff taking advice from wargamers from war games then I think most of the military would end up being a minotaur rather than a centaur.ScharreIt’s not just ChatGPT and GPT-4, not just large language models. We have seen, as you pointed out, really amazing progress because a whole set of games—chess, poker, computer games like StarCraft 2 and Dota 2. At human level there is sometimes superhuman performance at these games. What they’re really doing is functions that militaries might think of as situational awareness and command and control.Oftentimes when we think about the use of AI or autonomy in a military context, people tend to think about robotics, which has value because you can take a person out of a platform and then maybe make the platform more maneuverable or faster or more stealthy or smaller or more attritable or something else. In these games, the AI agents have access to the same units as the humans do. The AI playing chess has access to the same chess pieces as the humans do. What’s different is the information processing and decision making. So it’s the command and control that’s different.And it’s not just that these AI systems are better. They actually play differently than humans in a whole variety of ways. And so it points to some of these advantages in a work time context. Obviously, real world is a lot more complicated than a chess or Go board game, and there’s just a lot more possibilities and a lot more clever, nefarious things that an adversary can do in the real world. I think we’re going to continue to see progress. I totally agree with Rob that we really couldn’t say where this is going.I mean, I’ve been working on these issues for a long time. I continue to be surprised. I have been particularly surprised in the last year, 18 – 24 months, with some of the progress. GPT-4 has human-level performance on a whole range of cognitive tasks—the SAT, the GRE, the bar exam. 
It doesn’t do everything that humans can do, but it’s pretty impressive.You know, I think it’s hard to say where things are going going forward, but I do think a core question that we’re going to grapple with in society, in the military and in other contexts, is what tasks should be done by a human and which ones by a machine? And in some cases, the answer to that will be based simply on which one performs better, and there’s some things where you really just care about accuracy and reliability. And if the machine does a better job, if it’s a safer driver, then we could save lives and maybe we should hand over those tasks to machines once machines get there. But there’s lots of other things, particularly, in the military context that touch on more fundamental ethical issues, and Rob touches on many of these in the paper, where we also want to ask the question, are there certain tasks that only humans should do, not because the machines cannot do them but because they should not do them for some reason?Are there some things that require uniquely human judgment? And why is that? And I think that these are going to be difficult things to grapple with going forward. These metaphors can be helpful. Thinking about is it a centaur? Is the human really up top making decisions? Is it more like a minotaur? This algorithm is making decisions and humans are running around and doing stuff . . . we don’t even know why? Gary Kasparov talked about in a recent wonderful book on chess called Game Changer about AlphaZero, the AI chess playing agent. He talks about how, after he lost to IBM’s deep blue in the 90s, Kasparov created this field of human-machine teaming in chess of free-play chess, or what sometimes been called centaur chess, where this idea of centaur warfighting really comes from. And there was a period of time where the best chess players were human-machine teams.And it was better than having humans playing alone or even chess engines playing by themselves. That is no longer the case. The AI systems are now so good at chess that the human does not add any value in chess. The human just gets in the way. And so, Kasparov describes in this book chess shifting to what he calls a shepherd model, where the human is no longer pairing with the chess agent, but the human is choosing the right tool for the job and shepherding these different AI systems and saying, “Oh, we’re playing chess. I’m going to use this chess engine,” or “I’m going to write poetry. I’m going to use this AI model to do that.” And it’s a different kind of model, but I think it’s helpful to think about these different paradigms and then what are the ones that we want to use? You know, we do have choices about how we use the technology.How should that drive our decision making in terms of how we want to employ this technology for various ends?HostWhat trends do you see in the coming years, and how concerned or confident should we be?SparrowI think we should be very concerned about maintaining human control over these new technologies, not necessarily the kind of super-intelligent AIs going to eat us all questions that some of my colleagues are concerned about, but, in practice, how much are we exercising what we think of as our core human capacities in our daily roles both in civilian life but also in military life? And how much are we just becoming servants of machines? How can we try to shape the powerful dynamics driving in that direction? And that’s the sort of game-theoretic nature of conflict. 
Or the fact that, at some level, you really want to win a battle or a war makes it especially hard to carve out space for the kind of moral concerns that both Paul and I think should be central to this debate. Because if your strategic adversary just says, “Look, we’re all in for AI command,” and it turns out that that is actually very effective on the battlefield then it’s gonna be hard to say, “Hang on a moment, that’s really dehumanizing, we don’t like just following the orders of machines.” It’s really important to be having this conversation. It needs to happen at a global level—at multiple levels.One thing that hasn’t come up in our conversation is how I think the performance of machines will actually differ in different domains—the performance of robots, in particular. So, something like war in outer space, it’s all going to be robots. Even undersea warfare, that strikes me, at least the command functions are likely to be all onboard computer systems, or again, or undersea. It’s not just about platforms on the sea. But the things that are lurking in the water are probably going to be controlled by computers. What would it be like to be the mechanic on a undersea platform?You know, there’s someone whose job it is to grease the engines and reload the torpedoes, but, actually, all the combat decisions on the submarine are being made by an onboard computer. That would be a really miserable role to be the one or two people in this tin can under the ocean where the onboard computer is choosing what to engage and when. Aerial combat, again, I think probably manned fighters have a limited future. My guess is that the sort of manned aircraft . . . there are probably not too many more generations left of those. But infantry combat . . . I find that really hard to imagine being handed over to robots for a long time because of how difficult the physical environment is.That’s just to say, this story looks a bit different depending upon where you’re thinking about combat taking place. I do think the metaphors matter. I mean, if you’re going to sell AI to highly trained professionals, what you don’t do is say, “Look, here’s a machine that is better than you at your job. It’s going to do all things you love and put you out of work.” No one turns up and says that. Everybody turns up to the conference and says, “Look, I’ve got this great machine, and it’s going to do all the routine work. And you can concentrate on things that you love.” That’s a sales pitch. And I don’t think that we should be taken in by that. You want people to start talking about AI, take it seriously. And if you go to them saying, “Look, this thing’s just going to wipe out your profession,” That’s a pretty short conversation.But if you take seriously the idea that human beings are always going to be in charge, that also forecloses certain conversations that we need to be having. And the other thing here is how these systems reconfigure social and political relations by stealth. I’m sure there are people in the military now who are using ChatGPT or GPT-4 for routine correspondence, which includes things that’s actually quite important. So, even if the bureaucracy said, “Look, no AI.” If people start to rely on it in their daily practice, it’ll seep into the bureaucracy.I mean, in some ways, these systems, they’re technocratic, through and through. And so, they appeal to a certain sort of bureaucracy. 
And a certain sort of society loves the idea that all we need is good engineers and then all hard choices will be made by machines, and we can absolve ourselves of responsibility. There’s multiple cultural and political dynamics here that we should be paying attention to. And some of them, I suspect, likely to fly beneath the radar, which is why I hope this conversation and others like it will draw people’s attention to this challenge.ScharreOne of the really interesting questions in my mind, and I’d be interested in your thoughts on this, Rob, is how do we balance this tension between efficacy of decision making and where do we want humans to sit in terms of the proper rule? And I think it’s particularly acute in a military context. When I hear the term “minotaur warfighting,” I think, like, oh, that does not sound like a good thing. You talk in your paper about some of the ethical implications, and I come away a little bit like, OK, so is this something that we should be pursuing because we think it’s going to be more effective, or we should be running away from and this is like a warning. Like, hey, if we’re not careful, we’re all gonna turn into these minotaurs and be running around listening to these AI systems. We’re gonna lose control over the things that we should be in charge of. But, of course, there’s this tension of if you’re not effective on the battlefield, you could lose everything.In the wartime context, it’s even more compelling than some business—some business doesn’t use the technology in the right way or it’s not effective or it doesn’t improve the processes, OK. They go out of business. If a country does not invest in their national defense, they could cease to exist as a nation. And so how do we balance some of these needs? Are there some things that we should be keeping in mind as the technology is progressing and we’re sort of looking at these choices of do we use the system in this way or that way to kind of help guide these decisions?Sparrow10 years ago, everyone was going home on autonomy. It was all going to be autonomous. And I started asking people, “Would you be willing to build your next set of submarines with no space for human beings on board? Let’s go for an unmanned submersible fleet.” And a whole lot of people who, on paper, were talking about AI’s output . . . autonomous weapon systems outperforming human beings would really balk at that point.How confident would you have to be to say, “We are going to put all our eggs in the unmanned basket for something like the next generation Strike Fighter or submarines.”? And it turns out I couldn’t get many takers for that, which was really interesting. I mean, I was talking to a community of people who, again, all said, “Look, AI is going to outperform human beings.” I said “OK, so let’s just build these systems. There’s no space for a human being on board.” People started to get really cagey.And de-skilling’s a real issue here because if we start to rely on these things then human beings quickly lose the skills. So you might say, “Let’s move forward with minotaur warfighting. But let’s keep, you know, in the back of our minds that we might have to switch back to the human generals if our adversary’s machines are beating our machines.” Well, I’m not sure human generals will actually maintain the skill set if they don’t get to fight real wars. 
At another level, I think there’s some questions here about the relationship between what we’re fighting for and how we’re fighting.So, say we end up with minotaur warfighting and we get more and more command decisions, as it were, made by machines. What happens if that starts to move back into our government processes? It could either be explicit—hand over the Supreme Court to the robots. Or it could be, in practice, now everything you see in the media is the result of some algorithm. At one level, I do think we need to take seriously these sorts of concerns about what human beings are doing and what decisions human beings are making because the point of victory will be for human beings to lead their lives. Now, all of that said, any given battle, it’s gonna be hard to avoid the thought that the machines are going to be better than us. And so we should hand over to them in order to win that battle.ScharreYeah, I think this question of adoption is such a really interesting one because, like, we’ve been talking about human agency in these tasks. You know, flying a plane or being an infantry or, you know, a general making decisions. But there also is human agency as this question of do you use a technology in this way? And we could see it in lots of examples of AI technology, today—facial recognition for example. There are many different paradigms for how we’re seeing facial recognition used. For example, it’s used very differently in China today than in the United States. Different regulatory environment. Different societal adoption. That’s a choice that society or the government, whoever the powers that be, have.There’s a question of performance, and that’s always, I think, a challenge that militaries have with any new technology is when is it good enough that you go all in on the adoption, right? When are there airplanes, good enough that you then reorient your naval forces around carrier aviation? And that’s a difficult call to make. And if you go too early, you can make mistakes. If you go too late, you can make mistakes. And I think that’s one challenge.It’ll be interesting, I think, to see how militaries approach these things. My observation has been so far, (that) militaries have moved really slowly. Certainly much, much slower that what we’ve seen out in the civilian sector, where if you look at the rhetoric coming out of the Defense Department, they talk about AI a lot. And if you look at actually doing, it’s not very much. It’s pretty thin, in fact. Former Secretary of Defense Mark Esper, when he was the secretary, he had testified and said that AI was his number one priority. But it’s not. When you look at what the Defense Department is spending money on, it’s not even close. It’s about 1 percent of the DoD budget. So, it’s a pretty tiny fraction. And it’s not even in the top 10 for priorities.So, that, I think, is interesting because it drives choices and, historically, you can see that, particularly with things that are relevant to identity, that becomes a big factor in how militaries adopt a technology, whether it’s cavalry officers looking at the tank or when the Navy was transitioning from sail to steam. That was pushed back because sailors climbed the mast and worked the rigging. They weren’t down in the engine room, turning wrenches. That wasn’t what sailors did. And one of the interesting things to me is how these identities, in some cases, can be so powerful to a military service that they even outlast that task itself. We still call the people on ships sailors. 
They’re not actually climbing the mast or working the riggings; they’re not actually sailors, but we call them that.And so how militaries adopt these technologies, I think, is very much an open question with a lot of significance both from the military effectiveness standpoint and from an ethical standpoint. One of the things that’s super interesting to me that we are talking about some of these games like AI performance in chess and Go and computer games. And what’s interesting is that I think some of the attributes that are valued in games might be different than what the military values.So, when gaming environments, like in computer games like StarCraft and Dota 2, one of the things computers are very, very good at is operating with greater speed and precision than humans. So they’re very good at what’s termed the microplay—basically, the tactics of maneuvering these little artificial units around on this simulated battlefield. They’re effectively invincible in small unit tactics. So, if you let the AI systems play unconstrained, the AI units can dodge enemy fire. They are basically invincible. You have to dumb the AI systems down, then, to play against humans because when these companies, like Open AI or DeepMind, are training these agents, they’re not training them to do that. That’s actually easy. They’re trying to train them to do the longer term planning that humans are doing and processing information and making higher-level strategic decisions.And so they dumb down the speed at which the AI systems are operating. And you do get some really interesting higher-level strategic decision making from these AI systems. So, for example, in chess and Go, the AI systems have come up with new opening moves, in some cases that humans don’t really fully understand, like, why this is a good tactic? Sometimes they’ll be able to make moves that humans don’t fully understand why they’re valuable until further into the game and they could see, oh, that move had a really important change in the position on the board that turned out to be really valuable. And so, you can imagine militaries viewing these advantages quite differently. That something that was fast, that’s the kind of thing that militaries could see value in. OK, it’s got quick reaction times. Something that has higher precision they could see value in.Something where it’s gonna do something spooky and weird, and I don’t really understand why it’s doing it, but in the long run it’ll be valuable, I could see militaries not be excited about at all . . . and really hesitant. These are really interesting questions that militaries are going to have to grapple with and that have all of these important strategic and ethical implications going forward.HostDo you have any final thoughts you’d like to share before we go?SparrowI kind of think that people will be really quick to adopt technologies that save their lives, for instance. Situational awareness/threat assessment. I think that is going to be adopted quite quickly. Targeting systems, I think will be adopted. We can take out an enemy weapon or platform more quickly because we’ve handed over targeting to an AI—I think that stuff will be adopted quite quickly. I think it’s gonna depend where in the institution one is. I’m a big fan of looking at people’s incentive structures. 
You know, take seriously what people say, but you should always keep in the back of your mind: what would someone like you say? This is a very hard space to be confident in, but I just encourage people not just to talk to people like them but to take seriously what people lower down the hierarchy think and how they're experiencing things. That question Paul raised, about whether you go early in the hope of getting a decisive advantage or go late because you want to be conservative, those are sensible thoughts. As Paul said, it's still quite early days for military AI. People should be, as they are, paying close attention to what's happening in Ukraine at the moment, where, as I understand it, there is some targeting now being done by algorithms, and keep talking about it.

Host
Paul, last word to you, sir.

Scharre
Thank you, Stephanie and Rob, for a great conversation, and, Rob, for a really interesting, thoughtful, and provocative paper. I think the issues we're talking about are going to be difficult ones for the defense community to struggle with going forward, in terms of which tasks should be done by humans versus machines. I do think there are a lot of really challenging ethical issues.

Oftentimes, ethical issues end up getting short shrift because it's like, well, who cares if we're going to be minotaurs as long as it works? It's worth pointing out that some of these issues get to the core of professional ethics. The context of war is a particular one, and we have rules for conduct in war (the law of war) that write down what we think appropriate behavior is. But there are also interesting questions about military professional ethics. Decisions about the use of force, for example, are the essence of the military profession. What are those things that we want military professionals to be in charge of . . . that we want them to be responsible for? Some of the most conservative people I've ever spoken to on these issues of autonomy are the military professionals themselves, who don't want to give up the tasks that they're doing. Sometimes I think that's for reasons that are good and make sense, and sometimes for reasons that are a little bit stubborn and pigheaded.

Sparrow
Paul and Stephanie, I know you said last word to Paul, so I wanted to interrupt now rather than at the end. I think it's worth asking why someone would join the military in the future. Part of the problem here is a recruitment problem. If you say, "You're going to be fodder for the machines," why would people line up for that? That question about military culture is absolutely spot on, but it matters to the effectiveness of the force as well, because you can't get people to take on the role. The other thing is the decision to start a war, or even to start a conflict, for instance. That's something we shouldn't hand over to the machines, but the same logic that is driving toward battlefield command is driving toward making decisions about first strikes. One thing we should resist is striking because some AI system says now is the time to strike. For me, that's a hard line: you don't start a war on the basis of the choice of a machine. So those are just some examples, I think, to illustrate the points that Paul was making.

Sorry, Paul.

Scharre
Not at all. All good points.
I think these are going to be the challenging questions going forward, and there are going to be difficult issues ahead to grapple with as we think about how to employ these technologies in a way that is effective and that keeps humans in charge of, and responsible for, these kinds of decisions in war.

Host
Thank you both so much.

Sparrow
Thanks, Stephanie. And thank you, Paul.

Scharre
Thank you both. Really enjoyed the discussion.

Host
Listeners, you can find the genesis article at press.armywarcollege.edu/parameters; look for volume 53, issue 1. If you enjoyed this episode of Decisive Point and would like to hear more, you can find us on any major podcast platform.

About the authors

Paul Scharre is the executive vice president and director of studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. His first book, Army of None: Autonomous Weapons and the Future of War, won the 2019 Colby Award, was named one of Bill Gates' top five books of 2018, and was named by The Economist as one of the top five books for understanding modern warfare. Scharre previously worked in the Office of the Secretary of Defense (OSD), where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department's policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and directed-energy technologies. Scharre was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, the 2010 Quadrennial Defense Review, and secretary-level planning guidance.

Robert J. Sparrow is a professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-making and Society (CE200100005) at Monash University, Australia, where he works on ethical issues raised by new technologies. He has served as a cochair of the Institute of Electrical and Electronics Engineers Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
