Conversations on Strategy Podcast

Latest episodes

Nov 20, 2023

Conversations on Strategy Podcast – Ep 26 – Christopher J. Bolan, Jerad I. Harper, and Joel R. Hillison – Revisiting “Diverging Interests: US Strategy in the Middle East”

The October 2023 attacks on Israel by Hamas are only the latest in a series of global crises with implications for the regional order in the Middle East. These changes and the diverging interests of actors in the region have implications for US strategy and provide an opportunity to rethink key US relationships there.

Read the original article here: https://press.armywarcollege.edu/parameters/vol50/iss4/10/

Download the full episode transcript here: https://media.defense.gov/2023/Nov/21/2003345028/-1/-1/0/COS-26-TRANSCRIPT-BOLAN-HARPER-HILLISON.PDF

Keywords: Israel, Hamas, Middle East, Iran, Turkey
Sep 28, 2023

Conversations on Strategy Podcast – Ep 24 – Jonathan Klug and Mick Ryan – On White Sun War: The Campaign for Taiwan

In this podcast, US Army Col. Jon Klug and retired Australian Major General Mick Ryan discuss Ryan's most recent book, White Sun War: The Campaign for Taiwan, and its potential implications for future warfare. In the summer of 1986, Tom Clancy's novel Red Storm Rising debuted at number one on the New York Times bestseller list as it brought World War III to life, albeit a nonnuclear version. Similarly, Ryan's new novel White Sun War offers a realistic and gripping "historical" account of a war for Taiwan set in 2028. Where Clancy had the Warsaw Pact and NATO, Ryan pits communist China against a coalition of Taiwan, the United States, Australia, Japan, and others. A longtime strategic commentator with 35 years of real-world experience, Ryan grounds his vision of a near-future war firmly in reality. He deftly uses fiction to explore the potential challenges of warfare and leadership in 2028.

The book, White Sun War: The Campaign for Taiwan, is available here: https://www.casematepublishers.com/9781636242507/white-sun-war/

Read the transcript: https://media.defense.gov/2023/Oct/18/2003322446/-1/-1/0/CoS-Transcript-Ep24-Klug-Ryan-White-Sun-War-Campaign-for-Taiwan.PDF

Keywords: China, Taiwan, NATO, Australia, Japan, Warsaw Pact
Sep 28, 2023

Conversations on Strategy Podcast – Ep 25 – Dr. Allison Abbe and Dr. Claire Yorke – On Strategic Empathy

This podcast explores the benefits of strategic empathy and its value as a leadership tool.

Read the issue here: https://press.armywarcollege.edu/parameters/vol53/iss2/9/

Download the transcript: https://media.defense.gov/2023/Oct/10/2003316776/-1/-1/0/COS-PODCAST-TRANSCRIPT-ABBE-YORKE-FINAL.PDF

Keywords: strategic empathy, perspective taking, H. R. McMaster, Ralph K. White, Zachary Shore

CONVERSATIONS ON STRATEGY PODCAST – EPISODE TRANSCRIPT

Dr. Allison Abbe and Dr. Claire Yorke – On Strategic Empathy

Stephanie Crider (Host)
You're listening to Conversations on Strategy (http://ssi.armywarcollege.edu/cos). The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.

Today I'm talking with Dr. Allison Abbe and Dr. Claire Yorke. Abbe is a professor of organizational studies at the US Army War College and author of "Understanding the Adversary: Strategic Empathy and Perspective Taking in National Security" (https://press.armywarcollege.edu/parameters/vol53/iss2/9/), which was published in the Summer 2023 issue of Parameters. Yorke is an author, academic, researcher, and advisor. Her expertise is in the role of empathy and emotions in international affairs, politics, leadership, and society.

Welcome to Conversations on Strategy. What is strategic empathy, and what is it not?

Claire Yorke
So, strategic empathy emphasizes the importance of understanding the other side within strategic decision making, and this might be an adversary. Often a lot of the scholarship focuses on adversaries, but it can also be allies. It can be societies. It's a way of having a deeper awareness of how different people view the world and how that will have a bearing on the calculations and decisions that you make and on the implications of strategy when it touches ground, when it reaches the contact point with the natural situation in a context. And it encourages us to think more about the context that different people come from: their experiences, their histories, their socioeconomic backgrounds, their perspectives of the world, their cultural context, and also the meanings that they give to different ideas, to different values, to different elements of a situation.

Allison Abbe
As Claire describes, it's taking the perspective of another party, looking at the situation in their shoes, as they would say, "walking in their shoes." But it is important to distinguish this from just care and compassion. I think sometimes when people hear the term empathy, they automatically think that it just means compassion for someone else and sympathy for another person. And it's much more than that. It's much more of those cognitive elements of taking their perspective and understanding their context, as Claire said.

Yorke
It's that idea of having a broader range of emotions and being aware of them. It's not a weakness to have strategic empathy. It's an essential element of dealing with other human beings.

Abbe
And I think that's really one of the keys: it's purposeful. It's not just thinking about someone else's point of view for its own sake.
It's really using that perspective and using their lenses to understand and better make decisions and to better include them in your calculations and your decision making.

Host
What can strategic empathy add to our strategic thinking?

Yorke
It can add a number of different elements. Firstly, I think it's a critical asset in creating more strategic humility and understanding that there is not just one world view that dominates and that is intrinsically right. Cultivating that around ourselves, there are multiple different competing realities right now as people see them. How do we build that into our approach and our sense of self and our identity? And so, it compels self-reflection in how we make decisions, in how we think about the world, and in understanding, as well, how we are experienced by others. How do our words, how do our actions, how do our behaviors have implications, and often unintended implications? Maybe our actions are intended to be good, but they're operating against a background where they won't be interpreted in that way. And so, it gives that checking point. It gives a sense of reflection and humility and greater consideration to how we interact.

Abbe
And I think that's exactly right. You know, from the military perspective, where I'm coming from, teaching strategic leaders as officers that will be working for combatant commands and in other areas, it's critical to military planning that they consider the impact of their actions and how that will be perceived. Those actions are not going to be taken . . . as they plan, sometimes there are unexpected and unintended consequences. The other party, whether that's the local population or the adversary, will not necessarily receive those actions the way they might be intended, and there is huge room for miscalculation when you're talking about understanding the adversary's perspective. You can go awry in many ways when you're trying to understand the adversary perspective if you're not taking those different lenses into account.

Yorke
It also can contribute an awareness of the need to ask more people, to think about who's missing from the table, who's missing from our consideration. Who have we not understood? Who have we not engaged with? And so, especially from that military perspective, how do militaries build in greater engagement with different communities in a respectful, engaged way that doesn't undermine them, that is engaging with them properly in a very considered way, to make sure that you have taken various sources of information, various perspectives, into account? And that should give you a more holistic, a richer picture of the environment you're operating in, of the ways in which power may look very different on the ground from how we conceptualize it in theory or in the abstract or from remote capitals, and so, fostering greater nuance and complexity within the process.

Host
What limitations does strategic empathy have?

Abbe
One potential limitation is the way that empathy in general has often been discussed. It's in terms of taking on the point of view of someone else. And there is some risk in that if you don't then shift back to your own perspective, your own party's perspective. That's sometimes talked about as going native.
I think that's sort of an extreme example of what we're talking about here with the risk of empathy. But the important thing, and what I've talked about in the paper, is that it's perspective taking rather than the broader empathy concept: you're taking on the perspective of another party but then moving back to your own. And so being able to shift among those perspectives is really critical. I think empathy can be misapplied when it's just a matter of taking on the perspective of the other party and adopting it as your own; then you're not making those distinctions and being able to shift in and out of the lenses that will be important to decision making.

Yorke
One of the limitations can be that, especially when you're dealing with a military environment where decisions often have to be made very quickly in very intense situations, you cannot be constantly processing various different perspectives. Someone at some point has to make a call, and it may be that they get that call wrong. But that is why this emphasis on strategic empathy becomes so important because, in theory, you should have all your information, as much information as possible, at the outset, when you're designing the approach, when you're thinking about the strategic calculations involved, so that when you get to the very intense critical moments, you feel equipped and able to process what you have to do with as much information and insight as possible.

Abbe
Empathy definitely is demanding on your cognitive resources, and you can't always apply it when you're in a time-limited or very stressful environment. When those conditions are in place, you really fall back to defaulting to your own perspective. And so there may not be time, unless you've taken those perspectives into account in an earlier planning phase, so that when conditions change, you're ready to apply those perspectives. So, it can be limiting in that you don't always have time to engage in that cognitively demanding process when you need to.

Yorke
And this is a great point, as well, because it emphasizes that one of the limitations of empathy is exactly that it can cause burnout. Having more conversations among practitioners, among policymakers, among military officials, and people involved in the military means that you develop a greater emotional literacy around what it means, what it costs, what it requires. Then you are more aware of when you reach that overload, when maybe the people who are serving can't keep on trying to understand other perspectives, or what the signs are when maybe you've reached that burnout point. And so, being aware that that can be too much, and creating greater literacy, is one of the key elements we have to be working on and increasing within strategic thinking.

Host
What is the state of scholarship on strategic empathy now?

Abbe
I think that scholarship on empathy has really waxed and waned and gone through cycles. Ralph White wrote about realistic empathy decades ago now. This is essentially a very similar concept to strategic empathy, but then it lost traction in the 90s and disappeared for a while. In the US military, there was some discussion about empathy and perspective taking early in the Iraq and Afghanistan engagements, but there was a loss of interest, I think, as we've moved to large-scale combat operations and focused on that. So, I think that it's revived again, potentially, at least, in the history community from H. R.
McMaster and Zachary Shore's work, but we'll see. It seems like it gets attention for some period and then drops off again and so has not been making huge advances. Although, if you look within specific disciplines, like in psychology, I think there has been some incremental progress in understanding at least how to measure empathy and perspective taking and when in life-cycle development you start to see those skills emerge. So, there has been progress there.

Yorke
I share your view. It definitely goes through cycles. And Ralph K. White was one of the earliest people to be talking about this in this space, in the context of conflicts such as Vietnam, and also Iraq. I find it really interesting to see how it's being talked about, especially in the context of things like strategic communications. How is empathy a part of engaging with diverse actors and different audiences within a very complex, constantly moving communications environment?

So, we do see some there, as Allison said. In psychology, it's really gaining traction, but often people don't like to use the word empathy because of the connotations it has as being maybe something a little bit soft, of being all about feelings and compassion, and of it being a sign of weakness to even countenance another perspective and another point of view. And that's actually exactly what you've got to do, whether you're in the military or whether you're designing foreign policy or domestic policy, and it can be quite an academic, intellectual exercise. You are not, maybe, caring for all the people you're trying to understand in the same way. Some people may be actively hostile to you and you to them, but that is a process we have to get better at talking about. And I think in the scholarship I'm seeing, people use different words, which have a very similar connotation. Often, we can find it hiding, but not using the same, maybe, reference points and definitions.

Abbe
That's a question I have for you, Claire. Do you think it's important that we use the term empathy? Because I did find in some of my own experiences, in the 2008 time frame, that the term perspective taking or frame switching tended to be better received than talking about getting soldiers and officers to learn empathy so that they could be more cross-culturally competent. What do you think?

Yorke
I am of two minds, partly because what we need is for more people to practice and be conscious of the importance of this thing that we're calling empathy. Whether you call it perspective taking or understanding an adversary, it's critical that that is something we start to be better at and do more of. I personally have a preference for using the term empathy precisely because I want to try and challenge this dichotomy we have between emotions and reason and the idea that, somehow, reason is the right way to do things, and emotions are irrational and should be dismissed from our calculation.

There's so much fascinating research in neuroscience and psychology and other disciplines that shows us that reason and emotion are intricately interlinked, and they're entwined in how we make judgments. We can't make judgments without understanding how different people feel about situations, how emotions move people, how emotions give meaning to what we value, to what we are willing to risk, by using empathy.

For me, it's a way of saying let's get better about talking about feelings, not as something soft and irrational but as sources of judgment and insight.
And that, then, can help us have a far broader picture from which to make decisions that are reasoned but informed by a judicious understanding of emotion: emotional intelligence, effectively.

Host
What is still missing when it comes to strategic empathy?

Abbe
One thing that's missing, as we referenced in talking about the scholarship on strategic empathy, is a consistent focus on including empathy in professional development and understanding it as another skill set. Where we talk about critical thinking and systems thinking, empathy should be part of that. And I think we have some attention to it now, again, but (we should) be concerned about it dropping off again and disappearing from conversations in professional military education in particular. I think our understanding of how it improves decision making and planning could be better defined. I don't think there's been as much progress there in really showing what difference it makes in planning and decision making to include empathy and understanding of other perspectives as compared with when decision makers omit those perspectives.

Yorke
As a non-American, I find a lot of the scholarship is quite American focused; a lot of work by people like Zachary Shore and H. R. McMaster has already been mentioned. And I think there's a huge range of work to be explored that asks, what does strategic empathy mean from very different perspectives? Do Europeans do it in the same way? Do the Chinese do strategic empathy in the same way? What about in South Africa or Australia or Nigeria? How do these different countries maybe conceptualize empathy differently? What does that give us as an insight into strategic thinking, into how different people approach adversaries and allies alike, especially when we look at the threat and risk landscape right now and the range of different challenges we're dealing with in the future? Not only conventional military conflict but also technology. There are going to be challenges with resilience and climate change, among other things.

How can we extend our strategic empathy to audiences and people who maybe have not traditionally been included? So, when we talk about climate change, how do we use strategic empathy as a beneficial way to do more effective diplomacy that brings in Pacific Island nations and small island states? How do we use it as a way to have a greater, more constructive dialogue that takes account of different people's needs and interests and values and priorities? And I think that's something that we really need to do.

Abbe
The recent literature on strategic empathy has really focused on understanding the adversary and, to Claire's point, there needs to be more focus on understanding a broader range of perspectives and actors and, in particular for the US, our allies and partners. It's understanding the perspectives within allied nations and partner nations to better improve interoperability. We talk about technical, procedural, and human interoperability, and I think empathy can really add to understanding human interoperability at different levels; cultural and interpersonal empathy could be a really strong component of that.

Host
Allison. Claire. Thanks for making time to speak with me today. I really enjoyed it.

Abbe
Thank you, Stephanie. Thank you, Claire.

Yorke
Thank you. It's a real pleasure to join you both.

Host
If you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.

Additional Resources:

NSS Week noontime lecture: Dr.
Allison Abbe discusses Strategic Empathy (https://www.youtube.com/watch?v=8HFCYDTO4F4)

Articles by Claire Yorke on empathy and strategy: Is Empathy a Strategic Imperative? A Review Essay (https://www.tandfonline.com/doi/full/10.1080/01402390.2022.2152800) and The Significance and Limitations of Empathy in Strategic Communications (https://stratcomcoe.org/publications/the-significance-and-limitations-of-empathy-in-strategic-communications/191/)
Aug 15, 2023

Conversations on Strategy Podcast – Ep 23 – Anthony Pfaff and Adam Henschke – The Ethics of Trusting AI

Based on the monograph Trusting AI: Integrating Artificial Intelligence into the Army's Professional Expert Knowledge and the Parameters article "Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming," this episode focuses on the ethics of trusting AI. Who is responsible when something goes wrong? When is it okay for AI to make command decisions? How can humans and machines work together to form more effective teams? These questions and more are explored in this podcast.

Read the articles:

Trusting AI: Integrating Artificial Intelligence into the Army's Professional Expert Knowledge: https://press.armywarcollege.edu/monographs/959/

"Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming" (Parameters): https://press.armywarcollege.edu/parameters/vol53/iss1/14

Download the full episode transcript here: https://media.defense.gov/2023/Nov/15/2003341255/-1/-1/0/COS-23-PODCAST-TRANSCRIPT-PFAFF_HENSCHKE_TRUSTING-AI.PDF

Keywords: artificial intelligence (AI), manned-unmanned teaming, ethical AI, civil-military relations, autonomous weapons systems
Jun 28, 2023

Conversations on Strategy Podcast – Ep 22 – Paul Scharre and Robert J. Sparrow – AI: Centaurs Versus Minotaurs—Who Is in Charge?

Who is in charge when it comes to AI? People or machines? In this episode, Paul Scharre, author of the books Army of None: Autonomous Weapons and the Future of War and the award-winning Four Battlegrounds: Power in the Age of Artificial Intelligence, and Robert Sparrow, coauthor with Adam Henschke of "Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming," featured in the Spring 2023 issue of Parameters, discuss AI and its future military implications.

Read the article: https://press.armywarcollege.edu/parameters/vol53/iss1/14/

Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relations

Episode transcript: AI: Centaurs Versus Minotaurs—Who Is in Charge?

Stephanie Crider (Host)
The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.

You're listening to Conversations on Strategy. I'm talking with Paul Scharre and Professor Rob Sparrow today. Scharre is the author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence. He's the vice president and director of studies at the Center for a New American Security.

Sparrow is coauthor with Adam Henschke of "Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming," which was featured in the Spring 2023 issue of Parameters. Sparrow is a professor in the philosophy program at Monash University, Australia, where he works on ethical issues raised by new technologies.

Welcome to Conversations on Strategy. Thanks for being here today.

Paul Scharre
Absolutely. Thank you.

Host
Paul, you talk about centaur warfighting in your work. Rob and Adam re-envisioned that model in their article. What exactly is centaur warfighting?

Scharre
Well, thanks for asking, and I'm very excited to join this conversation with you and with Rob on this topic. The idea really is that as we see increased capabilities in artificial intelligence and autonomous systems, rather than thinking about machines operating on their own, we should be thinking about humans and machines as part of a joint cognitive system working together. And the metaphor here is the idea of a centaur, the mythical creature that is half human and half horse, with the human on top—the head and the torso of a human and then the body of a horse. It's a helpful metaphor for thinking about combining humans and machines working to solve problems using the best of both human and machine intelligence. That's the goal.

Host
Rob, you see AI being used differently. What's your perspective on this topic?

Robert Sparrow
So, I think it's absolutely right to be talking about human-machine or manned-unmanned teaming. I do think that we will see teams of artificial intelligence as robots and human beings working and fighting together in the future. I'm less confident that the human being will always be in charge. And I think the image of the centaur is kind of reassuring to people working in the military because it says, "Look, you'll get to do the things that you love and think are most important.
You'll get to be in charge, and you'll get the robots to do the grunt work." And, actually, when we look at how human beings and machines collaborate in civilian life, we often find it's the other way around.

(It) turns out that machines are quite good at planning and calculating and cognitive skills. They're very weak at interactions with the physical world. Nowadays, if you, say, ask ChatGPT to write you a set of orders to deploy troops, it can probably do a passable job at that just by cannibalizing existing texts online. But if you want a machine to go over there and empty that wastepaper basket, the robot simply can't do it. So, I think the future of manned-unmanned teaming might actually be computers, with AI systems issuing orders, or maybe advice that has the moral force of orders, to teams of human beings.

Adam and I have proffered the image of the Minotaur, which was the mythical creature with the head of a bull and the body of a man, as an alternative to the centaur when we're thinking about the future of manned-unmanned teaming.

Host
Paul, do you care to respond?

Scharre
I think it's a great paper, and I would encourage people to check it out, "Minotaurs, Not Centaurs." And it's a really compelling image. Maybe the humans aren't on top. Maybe the humans are on the bottom, and we have this other creature that's making the decisions, and we're just the body taking the actions. (It's) kind of creepy, the idea that maybe we're headed towards this role of minotaurs instead, and we're just doing the bidding of the machines.

You know, a few years ago, I think a lot of people envisioned that the types of tasks AI would be offloading would be low-skill tasks, particularly physical labor. So, a lot of the concern was that autonomy was gonna put truck drivers out of work. It turns out maneuvering things in the physical world is really hard for machines. And, in fact, we've seen with progress in large language models in just the last few years, ChatGPT or the newest version (GPT-4), that they're quite good at lower-level skills of cognitive labor, so they can do a lot of the tasks that maybe an intern might do in a white-collar job environment, and they're passable. And as he's pointing out, ask a robot to throw out a trash basket for you or to make a pot of coffee . . . it's not any good at doing that. But if you said, "Hey, write a short essay about human-machine teaming in the military environment," it's not that bad. And that's pretty wild.

And I think sometimes these models have been criticized . . . people say, "Well, they're just sort of shuffling words around." It's not. It's doing more than that. Some of the outputs are just garbage, but (with) some of them, it's clear that the model does understand, to some extent. It's always dicey using anthropomorphic terms, but (it can) understand the prompts that you're giving it, what you're asking it to do, and can generate output that's useful. And sometimes it's vague, but so are people sometimes. And I think this vision of, hey, are we headed towards this world of a minotaur kind of teaming environment, is a good concern to raise because presumably that's not what we want.

So then how do we ensure that humans are in charge of the kinds of decisions that we want humans to be responsible for? How do we be intentional about using AI and autonomy, particularly in the military environment?

Sparrow
I would resist the implication that it's only really ChatGPT that we should be looking at.
I mean, in some ways it's the history of chess or gaming where we should be looking, to the fact that machines outperform all, or at least most, human beings. And the question is, if you could develop a warfighting machine for command functions, it wouldn't necessarily have to be able to write nice sentences. The question is, when it comes to some of the functions of battlefield command, whether or not machines can outperform human beings in that role. There are some applications, like threat assessment in aerial warfare, for instance, where the tempo of battle is sufficiently high and there's lots of things whizzing around in the sky, and we're already at a point where human beings are relying on machines to at least prioritize tasks for them. And I think, increasingly, it will be a brave human being that overrides the machine and says, "The machine has got this wrong."

We don't need to be looking at explicit hierarchies or acknowledged hierarchies either. We need to look at how these systems operate in practice. And because of what's called automation bias, which is the tendency of human beings to defer to machines once their performance reaches a certain point, yeah, I think we're looking at a future where machines may be effectively carrying out key cognitive tasks. I'm inclined to agree with Paul that there are some things that it is hard to imagine machines doing well.

I'm a little bit less confident in my ability to imagine what machines can do well in the future. If you'd asked me two years ago, five years ago, "Will AIs be able to write good philosophy essays?" I would have said, "That's 30 years off." Now I can type all my essay questions into ChatGPT, and this thing performs better than many of my students. You know, I'm a little bit less confident that we know what the future looks like here, but I take it that the fundamental technology of these generative AIs and adversarial neural networks is actually going to be pretty effective when it comes to at least wargaming. And, actually, the issue for command in the future is, how well can we feed machines the data that they need to train themselves up in simulation and apply it to the real world?

I worry about how we'll know these things are reliable enough to move forward, but there are some pretty powerful dynamics in this area where people may effectively be forced to adopt AI command in response to either what the enemy is doing or what they think the enemy is doing. So, it's not just the latest technology; there's a whole set of technologies here, and a whole set of dynamics, that I think should undercut our confidence that human beings will always be in charge.

Host
Can you envision a scenario in which centaur and minotaur warfighting might both have a role, or even work in tandem?

Sparrow
I don't think it's all going to be centaurs, but I don't think it will all be minotaurs. And in some ways, this is a matter of the scale of analysis. If you think about something like Uber, you know, people have this vision of the future of robot taxis: I would get into the robot taxi, and as the human being, I would be in charge of what the machine does. In fact, what we have now is human beings being told by an algorithm where to drive.

Even if I were getting into a robot taxi and telling it where to go, for the moment there'd be a human being in charge of the robot taxi company. And I think at some level, human beings will remain in charge of war, as much as human beings are ever in charge of world historical events.
But I think for lots of people who are fighting in the future, it will feel as though they're being ordered around by machines. People will be receiving feeds of various sorts. It will be a very alienating experience, and I think in some contexts they genuinely will be, effectively, ordered around by an AI. An interesting thing to think about here is how even an autonomous weapons system, which is something that Paul and I have both been concerned about, actually relies on a whole lot of human beings. And so at one level, you hope that a human being is setting the parameters of operations of the autonomous weapons system, but at another level, everyone is just following this thing around and serving its needs. You know, it returns to base, and human beings refuel and maintain it and rearm it.

Everyone has to respond to what it does in combat. Even with something like a purportedly autonomous weapons system, zoom out a bit, and what you see is a machine making a core set of warfighting decisions and a whole lot of human beings scurrying around serving the machine. Zoom out more, and you hope that there's a human being in charge. Now, it depends a little bit on how good real-world wargaming by machines gets, and that's not something I have a vast amount of access to, how effective AI is in wargaming. Paul may well know more about that. But at that level, if you really had a general officer that was a machine, or even staff taking advice from wargamers from war games, then I think most of the military would end up being a minotaur rather than a centaur.

Scharre
It's not just ChatGPT and GPT-4, not just large language models. We have seen, as you pointed out, really amazing progress across a whole set of games—chess, poker, computer games like StarCraft 2 and Dota 2—with human-level and sometimes superhuman performance at these games. What they're really doing is functions that militaries might think of as situational awareness and command and control.

Oftentimes, when we think about the use of AI or autonomy in a military context, people tend to think about robotics, which has value because you can take a person out of a platform and then maybe make the platform more maneuverable or faster or more stealthy or smaller or more attritable or something else. In these games, the AI agents have access to the same units as the humans do. The AI playing chess has access to the same chess pieces as the humans do. What's different is the information processing and decision making. So it's the command and control that's different.

And it's not just that these AI systems are better. They actually play differently than humans in a whole variety of ways. And so it points to some of these advantages in a wartime context. Obviously, the real world is a lot more complicated than a chess or Go board game, and there's just a lot more possibilities and a lot more clever, nefarious things that an adversary can do in the real world. I think we're going to continue to see progress. I totally agree with Rob that we really couldn't say where this is going.

I mean, I've been working on these issues for a long time. I continue to be surprised. I have been particularly surprised in the last year, 18 to 24 months, with some of the progress. GPT-4 has human-level performance on a whole range of cognitive tasks—the SAT, the GRE, the bar exam.
It doesn't do everything that humans can do, but it's pretty impressive. You know, I think it's hard to say where things are going, going forward, but I do think a core question that we're going to grapple with in society, in the military, and in other contexts is, what tasks should be done by a human and which ones by a machine? And in some cases, the answer to that will be based simply on which one performs better, and there are some things where you really just care about accuracy and reliability. And if the machine does a better job, if it's a safer driver, then we could save lives, and maybe we should hand over those tasks to machines once machines get there. But there's lots of other things, particularly in the military context, that touch on more fundamental ethical issues, and Rob touches on many of these in the paper, where we also want to ask the question, are there certain tasks that only humans should do, not because the machines cannot do them but because they should not do them for some reason?

Are there some things that require uniquely human judgment? And why is that? And I think that these are going to be difficult things to grapple with going forward. These metaphors can be helpful. Thinking about: is it a centaur? Is the human really up top making decisions? Is it more like a minotaur? This algorithm is making decisions, and humans are running around and doing stuff . . . we don't even know why? Garry Kasparov talked about AlphaZero, the AI chess-playing agent, in a recent wonderful book on chess called Game Changer. He talks about how, after he lost to IBM's Deep Blue in the 90s, Kasparov created this field of human-machine teaming in chess, free-play chess, or what's sometimes been called centaur chess, where this idea of centaur warfighting really comes from. And there was a period of time where the best chess players were human-machine teams.

And it was better than having humans playing alone or even chess engines playing by themselves. That is no longer the case. The AI systems are now so good at chess that the human does not add any value in chess. The human just gets in the way. And so, Kasparov describes in this book chess shifting to what he calls a shepherd model, where the human is no longer pairing with the chess agent, but the human is choosing the right tool for the job and shepherding these different AI systems and saying, "Oh, we're playing chess. I'm going to use this chess engine," or "I'm going to write poetry. I'm going to use this AI model to do that." And it's a different kind of model, but I think it's helpful to think about these different paradigms and then what are the ones that we want to use. You know, we do have choices about how we use the technology. How should that drive our decision making in terms of how we want to employ this technology for various ends?

Host
What trends do you see in the coming years, and how concerned or confident should we be?

Sparrow
I think we should be very concerned about maintaining human control over these new technologies, not necessarily the kind of "super-intelligent AI is going to eat us all" questions that some of my colleagues are concerned about, but, in practice, how much are we exercising what we think of as our core human capacities in our daily roles, both in civilian life but also in military life? And how much are we just becoming servants of machines? How can we try to shape the powerful dynamics driving in that direction? And that's the sort of game-theoretic nature of conflict,
or the fact that, at some level, you really want to win a battle or a war. That makes it especially hard to carve out space for the kind of moral concerns that both Paul and I think should be central to this debate. Because if your strategic adversary just says, "Look, we're all in for AI command," and it turns out that that is actually very effective on the battlefield, then it's gonna be hard to say, "Hang on a moment, that's really dehumanizing; we don't like just following the orders of machines." It's really important to be having this conversation. It needs to happen at a global level—at multiple levels.

One thing that hasn't come up in our conversation is how I think the performance of machines will actually differ in different domains—the performance of robots, in particular. So, something like war in outer space: it's all going to be robots. Even undersea warfare, that strikes me, at least the command functions, are likely to be all onboard computer systems. It's not just about platforms on the sea; the things that are lurking in the water are probably going to be controlled by computers. What would it be like to be the mechanic on an undersea platform? You know, there's someone whose job it is to grease the engines and reload the torpedoes, but, actually, all the combat decisions on the submarine are being made by an onboard computer. That would be a really miserable role, to be one of the one or two people in this tin can under the ocean where the onboard computer is choosing what to engage and when. Aerial combat, again: I think manned fighters probably have a limited future. My guess is that the sort of manned aircraft . . . there are probably not too many more generations left of those. But infantry combat . . . I find that really hard to imagine being handed over to robots for a long time because of how difficult the physical environment is.

That's just to say, this story looks a bit different depending upon where you're thinking about combat taking place. I do think the metaphors matter. I mean, if you're going to sell AI to highly trained professionals, what you don't do is say, "Look, here's a machine that is better than you at your job. It's going to do all the things you love and put you out of work." No one turns up and says that. Everybody turns up to the conference and says, "Look, I've got this great machine, and it's going to do all the routine work. And you can concentrate on the things that you love." That's a sales pitch. And I don't think that we should be taken in by that. You want people to talk about AI and take it seriously, and if you go to them saying, "Look, this thing's just going to wipe out your profession," that's a pretty short conversation.

But if you take seriously the idea that human beings are always going to be in charge, that also forecloses certain conversations that we need to be having. And the other thing here is how these systems reconfigure social and political relations by stealth. I'm sure there are people in the military now who are using ChatGPT or GPT-4 for routine correspondence, which includes things that are actually quite important. So, even if the bureaucracy said, "Look, no AI," if people start to rely on it in their daily practice, it'll seep into the bureaucracy.

I mean, in some ways, these systems are technocratic through and through. And so, they appeal to a certain sort of bureaucracy.
And a certain sort of society loves the idea that all we need is good engineers, and then all the hard choices will be made by machines, and we can absolve ourselves of responsibility. There are multiple cultural and political dynamics here that we should be paying attention to. And some of them, I suspect, are likely to fly beneath the radar, which is why I hope this conversation and others like it will draw people's attention to this challenge.

Scharre
One of the really interesting questions in my mind, and I'd be interested in your thoughts on this, Rob, is how do we balance this tension between efficacy of decision making and where we want humans to sit in terms of the proper role? And I think it's particularly acute in a military context. When I hear the term "minotaur warfighting," I think, like, oh, that does not sound like a good thing. You talk in your paper about some of the ethical implications, and I come away a little bit like, OK, so is this something that we should be pursuing because we think it's going to be more effective, or something we should be running away from? Is this like a warning: hey, if we're not careful, we're all gonna turn into these minotaurs and be running around listening to these AI systems, and we're gonna lose control over the things that we should be in charge of? But, of course, there's this tension: if you're not effective on the battlefield, you could lose everything.

In the wartime context, it's even more compelling than for some business. Some business doesn't use the technology in the right way, or it's not effective, or it doesn't improve the processes? OK, they go out of business. If a country does not invest in its national defense, it could cease to exist as a nation. And so how do we balance some of these needs? Are there some things that we should be keeping in mind as the technology is progressing and we're looking at these choices of whether to use the system in this way or that way, to help guide these decisions?

Sparrow
Ten years ago, everyone was gung-ho on autonomy. It was all going to be autonomous. And I started asking people, "Would you be willing to build your next set of submarines with no space for human beings on board? Let's go for an unmanned submersible fleet." And a whole lot of people who, on paper, were talking about AI's output . . . autonomous weapons systems outperforming human beings . . . would really balk at that point.

How confident would you have to be to say, "We are going to put all our eggs in the unmanned basket for something like the next-generation strike fighter or submarines"? And it turns out I couldn't get many takers for that, which was really interesting. I mean, I was talking to a community of people who, again, all said, "Look, AI is going to outperform human beings." I said, "OK, so let's just build these systems. There's no space for a human being on board." People started to get really cagey.

And de-skilling's a real issue here because if we start to rely on these things, then human beings quickly lose the skills. So you might say, "Let's move forward with minotaur warfighting. But let's keep, you know, in the back of our minds that we might have to switch back to the human generals if our adversary's machines are beating our machines." Well, I'm not sure human generals will actually maintain the skill set if they don't get to fight real wars.
At another level, I think there are some questions here about the relationship between what we're fighting for and how we're fighting.

So, say we end up with minotaur warfighting, and we get more and more command decisions, as it were, made by machines. What happens if that starts to move back into our government processes? It could either be explicit—hand over the Supreme Court to the robots. Or it could be, in practice, that now everything you see in the media is the result of some algorithm. At one level, I do think we need to take seriously these sorts of concerns about what human beings are doing and what decisions human beings are making because the point of victory will be for human beings to lead their lives. Now, all of that said, in any given battle it's gonna be hard to avoid the thought that the machines are going to be better than us, and so we should hand over to them in order to win that battle.

Scharre
Yeah, I think this question of adoption is such a really interesting one because we've been talking about human agency in these tasks. You know, flying a plane, or being in the infantry, or, you know, a general making decisions. But there also is human agency in this question of, do you use a technology in this way? And we can see it in lots of examples of AI technology, today—facial recognition, for example. There are many different paradigms for how we're seeing facial recognition used. For example, it's used very differently in China today than in the United States. Different regulatory environment. Different societal adoption. That's a choice that society or the government, whoever the powers that be, has.

There's a question of performance, and that's always, I think, a challenge that militaries have with any new technology: when is it good enough that you go all in on the adoption? When are airplanes good enough that you then reorient your naval forces around carrier aviation? And that's a difficult call to make. If you go too early, you can make mistakes. If you go too late, you can make mistakes. And I think that's one challenge.

It'll be interesting, I think, to see how militaries approach these things. My observation has been, so far, that militaries have moved really slowly. Certainly much, much slower than what we've seen out in the civilian sector. If you look at the rhetoric coming out of the Defense Department, they talk about AI a lot. And if you look at actual doing, it's not very much. It's pretty thin, in fact. Former Secretary of Defense Mark Esper, when he was the secretary, testified and said that AI was his number one priority. But it's not. When you look at what the Defense Department is spending money on, it's not even close. It's about 1 percent of the DoD budget. So, it's a pretty tiny fraction. And it's not even in the top 10 for priorities.

So, that, I think, is interesting because it drives choices. And, historically, you can see that, particularly with things that are relevant to identity, that becomes a big factor in how militaries adopt a technology, whether it's cavalry officers looking at the tank or the Navy transitioning from sail to steam. That got pushback because sailors climbed the mast and worked the rigging. They weren't down in the engine room, turning wrenches. That wasn't what sailors did. And one of the interesting things to me is how these identities, in some cases, can be so powerful to a military service that they even outlast the task itself. We still call the people on ships sailors.
They're not actually climbing the masts or working the rigging; they're not actually sailors, but we call them that. And so how militaries adopt these technologies, I think, is very much an open question, with a lot of significance both from the military-effectiveness standpoint and from an ethical standpoint. One of the things that's super interesting to me is that we were talking about some of these games, like AI performance in chess and Go and computer games. And what's interesting is that I think some of the attributes that are valued in games might be different from what the military values.

So, in gaming environments, like the computer games StarCraft and Dota 2, one of the things computers are very, very good at is operating with greater speed and precision than humans. So they're very good at what's termed the microplay—basically, the tactics of maneuvering these little artificial units around on this simulated battlefield. They're effectively invincible in small-unit tactics. So, if you let the AI systems play unconstrained, the AI units can dodge enemy fire. They are basically invincible. You have to dumb the AI systems down, then, to play against humans because when these companies, like OpenAI or DeepMind, are training these agents, they're not training them to do that. That's actually easy. They're trying to train them to do the longer-term planning that humans are doing, processing information and making higher-level strategic decisions.

And so they dumb down the speed at which the AI systems are operating. And you do get some really interesting higher-level strategic decision making from these AI systems. So, for example, in chess and Go, the AI systems have come up with new opening moves, in some cases ones that humans don't really fully understand. Like, why is this a good tactic? Sometimes they'll make moves whose value humans don't fully understand until further into the game, when they can see, oh, that move made a really important change in the position on the board that turned out to be really valuable. And so, you can imagine militaries viewing these advantages quite differently. Something that was fast, that's the kind of thing that militaries could see value in. OK, it's got quick reaction times. Something that has higher precision, they could see value in.

Something where it's gonna do something spooky and weird, and I don't really understand why it's doing it, but in the long run it'll be valuable? I could see militaries not being excited about that at all . . . and really hesitant. These are really interesting questions that militaries are going to have to grapple with and that have all of these important strategic and ethical implications going forward.

Host
Do you have any final thoughts you'd like to share before we go?

Sparrow
I kind of think that people will be really quick to adopt technologies that save their lives, for instance. Situational awareness/threat assessment: I think that is going to be adopted quite quickly. Targeting systems, I think, will be adopted. If we can take out an enemy weapon or platform more quickly because we've handed over targeting to an AI, I think that stuff will be adopted quite quickly. It's gonna depend where in the institution one is. I'm a big fan of looking at people's incentive structures.
You know, take seriously what people say, but you should always keep in the back of your mind, what would someone like you say? This is a very hard space to be confident in, but I'd just encourage people not to talk only to people like them but to take seriously what people lower down the hierarchy think and how they're experiencing things. That question Paul raised, about whether you go early in the hope of getting a decisive advantage or go late because you want to be conservative: those are sensible thoughts. As Paul said, it's still quite early days for military AI. People should be, as they are, paying close attention to what's happening in Ukraine at the moment, where, as I understand it, there is some targeting now being done by algorithms, and keep talking about it.

Host
Paul, last word to you, sir.

Scharre
Thank you, Stephanie and Rob, for a great conversation, and, Rob, for just a really interesting and thoughtful paper . . . and really provocative. I think the issues that we're talking about are just really going to be difficult ones for the defense community to struggle with going forward in terms of what are the tasks that should be done by humans versus machines. I do think there's a lot of really challenging ethical issues.

Oftentimes, ethical issues end up getting kind of short shrift because it's like, well, who cares if we're going to be minotaurs as long as it works? I think it's worth pointing out that some of these issues get to the core of professional ethics. The context for war is a particular one, and we have rules for conduct in war (the law of war) that write down what we think appropriate behavior is. But there are also interesting questions about military professional ethics. Decisions about the use of force, for example, are the essence of the military profession. What are those things that we want military professionals to be in charge of . . . that we want them to be responsible for? You know, some of the most conservative people I've ever spoken to on these issues of autonomy are the military professionals themselves, who don't want to give up the tasks that they're doing. And sometimes I think for reasons that are good and make sense, and sometimes for reasons that I think are a little bit stubborn and pigheaded.

Sparrow
Paul and Stephanie, I know you said last word to Paul, so I wanted to interrupt now rather than at the end. I think it's worth asking, why would someone join the military in the future? Part of the problem here is a recruitment problem. If you say, "You're going to be fodder for the machines," why would people line up for that?

You know, that question about military culture is absolutely spot on, but it matters to the effectiveness of the force, as well, because you can't get people to take on the role. And the other thing is the decision to start a war, or even to start a conflict, for instance. That's something that we shouldn't hand over to the machines, but the same logic that is driving towards battlefield command is driving towards making decisions about first strikes, for instance. And one thing we should resist is some AI system saying now's the time to strike. For me, that's a hard line. You don't start a war on the basis of the choice of a machine. So, just some examples, I think, to illustrate the points that Paul was making.

Sorry, Paul.

Scharre
Not at all. All good points.
I think these are gonna be the challenging questions going forward, and I think there are going to be difficult issues ahead to grapple with when we think about how to employ these technologies in a way that's effective and that keeps humans in charge of, and responsible for, these kinds of decisions in war.

Host
Thank you both so much.

Sparrow
Thanks, Stephanie. And thank you, Paul.

Scharre
Thank you both. Really enjoyed the discussion.

Host
Listeners, you can find the genesis article at press.armywarcollege.edu/parameters. Look for volume 53, issue 1. If you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.

About the authors

Paul Scharre is the executive vice president and director of studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. His first book, Army of None: Autonomous Weapons and the Future of War, won the 2019 Colby Award, was named one of Bill Gates' top five books of 2018, and was named by The Economist one of the top five books to understand modern warfare. Scharre previously worked in the Office of the Secretary of Defense (OSD), where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department's policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and directed-energy technologies. Scharre was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and secretary-level planning guidance.

Robert J. Sparrow is a professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-Making and Society (CE200100005) at Monash University, Australia, where he works on ethical issues raised by new technologies. He has served as a cochair of the Institute of Electrical and Electronics Engineers Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
Jun 22, 2023 • 0sec

Conversations on Strategy Podcast – Ep 21 – C. Anthony Pfaff and Christopher J. Lowrance – Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge

Integrating artificially intelligent technologies for military purposes poses a special challenge. In previous arms races, such as the race to atomic bomb technology during World War II, expertise resided within the Department of Defense. But in the artificial intelligence (AI) arms race, expertise dwells mostly within industry and academia. Also, unlike the development of the bomb, effective employment of AI technology cannot be relegated to a few specialists; almost everyone will have to develop some level of AI and data literacy. Complicating matters, AI-driven systems can be a “black box” in that humans may not be able to explain some output, much less be held accountable for its consequences. This inability to explain, coupled with the cession to a machine of some functions normally performed by humans, risks the relinquishment of some jurisdiction and, consequently, autonomy to those outside the profession. Ceding jurisdiction could impact the American people’s trust in their military and, thus, its professional standing. To avoid these outcomes, creating and maintaining trust requires integrating knowledge of AI and data science into the military’s professional expertise. This knowledge covers both AI technology and how its use impacts command responsibility; talent management; governance; and the military’s relationship with the US government, the private sector, and society.Read the monograph: https://press.armywarcollege.edu/monographs/959/Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relationsEpisode transcript: Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert KnowledgeStephanie Crider (Host)You’re listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.Joining me today are Dr. C. Anthony Pfaff and Colonel Christopher J. Lowrance, coauthors of Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge with Brie Washburn and Brett Carey.Pfaff, a retired US Army colonel, is the research professor for strategy, the military profession, and ethics at the US Army War College Strategic Studies Institute and a senior nonresident fellow at the Atlantic Council.Colonel Christopher J. Lowrance is the chief autonomous systems engineer at the US Army Artificial Intelligence Integration Center.Your monograph notes that AI literacy is critical to future military readiness. Give us your working definition of AI literacy, please.Dr. C. Anthony PfaffAI literacy is aimed at our human operators (and that means commanders and staffs, as well as, you know, the operators themselves) being able to employ these systems in a way that not only optimizes the advantage these systems promise but also keeps us accountable for their output. That requires knowing things about how data is properly curated. It will include knowing things about how algorithms work, but, of course, not everyone can become an AI engineer. So, we have to kind of figure out, at whatever level, given whatever tasks you have, what do you need to know to be intelligent about these kinds of operations?Col. Christopher J. LowranceI think a big part of it is going to be also educating the workforce. And that goes all the way from senior leaders down to the users of the systems.
And so, a critical part of it is understanding how best AI-enabled systems can fit in, the appropriate roles they can play, and how best they can team with or augment soldiers as they complete their tasks. And so, with that, that’s going to take senior leader education coupled with different levels of technical expertise within the force, especially when it comes to employing and maintaining these types of systems, as well as down to the user, who is going to have to provide some level of feedback to the system as it’s being employed.HostTell me about some of the challenges of integrating AI and data technologies.PfaffWhat we tried to do is sort of look at it from a professional perspective. And from that perspective (I’ll talk maybe a little bit more about this later), in many ways there are lots of aspects of the challenge that aren’t really that different. We brought on tanks, airplanes, and submarines that all required new knowledge that not only led to changes in how we fight wars and the character of war but also to corresponding changes to doctrine and organizational culture, which we’re seeing with AI. We’ve even seen some of the issues that AI brings up before, when we introduced automated technology, which, in reducing the cognitive load on operators, introduces concerns like accountability gaps and automation biases that arise because humans are just trusting the machine or don’t understand how the machine is working or how to do the process manually, and, as a result, they’re not able to assess its output. The paradigm example of that, of course, is the USS Vincennes incident, where you have an automated system that was giving plenty of information that should have caused a human operator not to permit shooting down what ended up being a civilian airliner. So, we’ve dealt with that in the past. AI kind of puts that on steroids. Two of the challenges I think are unique to AI: with data-driven systems, they actually can change in capabilities as you use them. For instance, a system that starts off able to identify, perhaps, a few high-value targets, over time, as it collects more data, gets more questions. And as humans see patterns, or as a machine identifies patterns and humans ask the machine to test them, you’re able to start discerning properties of organizations, both friendly and enemy, you wouldn’t have seen before. And that allows for greater prediction. What that means is that the same system, used in different places with different people and different tasks, is going to be a different system and have different capabilities over time. The other thing that I think is happening is the way it’s changing how we’re able to view the battlefield. Rather than a cycle of intel driving ops driving intel and so on, with the right kind of sensors in place getting us the right kind of data, we’re able to get more of a real-time picture. The intel side can make assessments based on friendly situations, and the friendly side can make targeting decisions and assessments about their own situation based on intel. So, that’s coming together in ways that are also pretty interesting and that I don’t think we’ve fully wrestled with yet.LowranceYeah, just to echo a couple of things that Dr. Pfaff has alluded to here: overarching, I think the challenge is gaining trust in the system. And trust is really earned. And it’s earned through use, for one.
But you’ve got to walk in informed, and that’s where the data literacy and the AI literacy piece comes in. And as Dr. Pfaff mentioned, these data-driven systems, generally speaking, will perform based on the type of data they’ve been trained against and the types of scenarios in which that data was collected. And so, one of the big challenge areas is adaptation over time. But they are teachable, so to speak. So, as you collect and curate new data examples, you can better inform the systems of how they should adapt over time. And that’s going to be really key to gaining trust. And that’s where the users and the commanders of these systems need to understand some of the limitations of the platforms and their strengths, and also how to retrain or reteach the systems over time using new data so that they can more quickly adapt. There are definitely some technical barriers to gaining trust, but they certainly can be overcome with the proper approach.HostWhat else should we consider, then, when it comes to developing trustworthy AI?PfaffWe’ve kind of taken this from the professional perspective, and so we’re starting with an understanding of professions: a profession entails specialized knowledge that’s in service to some social good and that allows professionals to exercise autonomy over specific jurisdictions. An example, of course, would be doctors and the medical profession. They have specialized knowledge. They are certified in it by other doctors. They’re able to make medical decisions without nonprofessionals being able to override those. So, the military is the same thing, where we have a particular expertise. And then the question is, how does the introduction of AI affect what counts as expert knowledge? Because that is the core functional imperative of the profession—that it is able to provide that service. In that regard, you’re going to look at the system. We need to be able to know, as professionals, that the system is effective and also that it is predictable and understandable—that I am able to replicate results and understand the ones that I get. We also have to trust the professional. That means the professional has to be certified. And the big question is, as Chris alluded to, in what? Not just certified in the knowledge, but also responsible to norms and accountable. The reason for that is clients rely on professionals because they don’t have this knowledge themselves. Generally speaking, the client’s not in a position to judge whether or not that diagnosis, for example, is good. They can go out and find another opinion, but then they’re going out to seek another professional. So, clients not only need to trust that the expert knows what they’re doing but also that there’s an ethics that governs them and that they are accountable. Finally, (we have) to trust the profession as an institution—that it actually has what’s required to conduct the right kinds of certification, as well as the institutions required to hold professionals accountable. So that’s the big overarching framework in which we’re trying to take up the differences and challenges that AI presents.LowranceLike I mentioned earlier, I think it’s about also getting the soldiers and commanders involved early during the development process and gaining that invaluable feedback. So, a kind of incremental rollout of AI-enabled systems is one aspect, or one way of looking at it.
And so that way you can start to gauge and get a better appreciation and understanding of the strengths of AI and how best it can team with commanders and soldiers as they employ the systems. And that teaming can be adaptive. And I think it’s really important for commanders and soldiers to feel like they have some level of control over how best to employ AI-enabled systems and some kind of mechanism for, let’s say, how much they’re willing to trust the AI system at a given moment or instance to perform a particular function based on the conditions. As we know as military leaders, the environment can be very dynamic, and conditions change. If you look at the scale of operations from counterinsurgency to a large-scale combat operation, those are different ends of a spectrum of the types of conflicts that might potentially be faced by our commanders and our soldiers on the ground with AI-enabled systems. And so, they need to adapt and have some level of control and different trust in the system based on understanding that system, its limitations, its strengths, and so on.HostYou touched on barriers just a moment ago. Can you expand a little bit more on that piece of it?LowranceOftentimes when you look at it from the perspective of machine-learning applications, these are algorithms where the system is able to ingest data examples—basically, historical examples of conditions or past events. And so, just to make this a little bit more tangible, think of an object recognition algorithm that can look at imagery (maybe it’s geospatial imagery from satellites that have taken an aerial photo of the ground plane) and that you could train to look for certain objects like airplanes. Well, over time, the AI learns to look for these based on the features of these examples within past imagery. With that, sometimes if you take that type of example data and the conditions of the environment change—maybe it’s the backdrop, or maybe it’s a different airstrip or a different type of airplane or something changes—then performance can degrade to some degree. And this goes back to adaptability. How do these algorithms best adapt? This goes back to the teaming aspect of having users working with the AI, recognizing when that performance is starting to degrade, to some degree, kind of through a checks-and-balances type of system. And then you give feedback by curating new examples and having the system adapt. Think of the old analogy of a baseball card with the performance statistics of a particular player: you would have a baseball card for a particular AI-enabled system, giving soldiers and commanders its training statistics. For example, what kind of scenario was this system trained for? What kind of data examples? How many data examples, and so on. That would give commanders and operators a better sense of the strengths and limitations of the system—where and under what conditions it has been tested and evaluated. And, therefore, when it’s employed in a condition that doesn’t necessarily meet those kinds of conditions, that’s an early cue to be more cautious . . . to take a more aggressive teaming stance with the system and to check more rigorously, obviously, what the AI is potentially predicting or recommending to the soldiers and operators. And that’s one example.
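The “baseball card” Lowrance describes maps naturally onto a small data structure—essentially what the machine-learning community calls a model card. The sketch below, in Python, is an editorial illustration only: the field names, values, and the coverage check are assumptions, not a schema from the monograph.

from dataclasses import dataclass

@dataclass
class SystemCard:
    """Hypothetical 'baseball card' for an AI-enabled system:
    a summary of what it was trained and evaluated on."""
    system_name: str
    trained_scenarios: list[str]     # e.g., ["satellite imagery, fixed-wing aircraft"]
    training_examples: int           # number of labeled examples used in training
    evaluated_conditions: list[str]  # conditions it was tested and evaluated under

    def covers(self, deployment_condition: str) -> bool:
        """True if the deployment condition was covered in evaluation;
        a miss is the 'early cue to be more cautious' described above."""
        return deployment_condition in self.evaluated_conditions

card = SystemCard(
    system_name="object-recognizer-v1",
    trained_scenarios=["satellite imagery, fixed-wing aircraft"],
    training_examples=25_000,
    evaluated_conditions=["desert airstrip, daytime"],
)

if not card.covers("forest airstrip, night"):
    print("Untested condition: tighten human checks on the system's output.")

The point of the card metaphor, read this way, is that the record is small, checkable, and can travel with the system to the tactical edge.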
I think you’ve got to have the context: in most instances, the type of AI application, if you will, really drives how much control or task effort you’re going to give to the AI system. In some instances, as we see in the commercial sector today, there’s a high degree of autonomy given to some AI systems that are recommending, for instance, what you maybe want to purchase or what movie you should watch and so on—but what’s the risk of employing that type of system, or of that system making a mistake? And I think that’s what’s really important—the context here—and then having the right precautions and the right level of teaming in place when you’re going into those riskier types of situations. And I think a final point on the barriers, and how to help overcome them, is, again, going back to this notion of giving commanders and soldiers some degree of control over the system. A good analogy is a rheostat knob. Based on the conditions on the ground, and based on their past use of this system, they start to gain an understanding of the strengths and limitations of the system and then, based on the conditions, can really dial up or dial down the degree of autonomy that they’re willing to grant the system. And I think this is another way of overcoming barriers without, let’s say, highly restricting the use of AI-enabled systems, especially when they’re recognizing targets or threats as part of the targeting cycle, and that’s one of the lenses that we looked at in this particular study.PfaffWhen we’re looking at expert knowledge, we break it into four components. There’s the technical part, which we’ve covered. But we also look at human development: to have that profession, professionals have to engage in human development, which means recruiting the right kinds of people, training and educating them in the right kinds of ways, and then developing them over a career to be leaders in the field. And we’ve already talked about the importance of having norms that ensure the trust of the client. Then there’s the political, which concerns mostly how professions maintain legitimacy and compete for jurisdiction with other professions. (These are) all issues that AI brings up. So those introduce a number of other kinds of concerns that you have to be able to take into account for any of the kinds of things that Chris talked about to be possible. So, I would say growing the institution along those four avenues represents a set of barriers that need to be overcome.HostLet’s talk about ethics and politics in relation to AI in the military. What do we need to consider here?PfaffIt’s about the trust of the client, but that needs to be amplified a little bit. What’s the client trusting us to do? Not only use this knowledge on their behalf but also use it in a way that reflects their values. That means systems that conform to the law of armed conflict. Systems that enable humane and humanitarian decision making—even in high-intensity combat. The big concerns there (include) the issues of accountability and automation bias. Accountability arises because there’s only so much you’re going to be able to understand about the system as a whole. And when we’re talking about the system, it’s not just the data and the algorithms; it’s the whole thing, from sensors to operators. So, it will always be a little bit of a black box.
If you don’t understand what’s going on, or if you get rushed (and war does come with a sense of urgency), you’re going to be tempted to go with the results the machine produces. Our recommendation is to create some kind of interface. We use the idea of fuzzy logic, which allows humans to interact with the system to identify specific targets in multiple sets. The idea was . . . given any particular risk tolerance the commander has—because machines, when they produce these outputs, assign a probability to them . . . so, for example, if it identifies a tank, it will say something to the effect of “80% tank.” So, if I have a high risk tolerance for potential collateral harms, risk to mission, or whatever, and I have very high confidence that the target I’m about to shoot is legitimate, I can let the machine do more of the work. And with a fuzzy logic controller, you can use that to determine where in the system humans need to intervene when that risk tolerance changes or that confidence changes. And this addresses accountability because it specifies what the commander, staff, and operator are accountable for—getting the risk assessment right, as well as ensuring that the data is properly curated and the algorithms trained. It helps with automation bias because the machine’s telling you what level of confidence it has. So, it’s giving you prompts to recheck it should there be any kind of doubt. And one of the ways you can enhance that, which we talked about in the monograph, is, in addition to looking for things that you want to shoot, also look for things you don’t want to shoot. That’ll paint a better picture of the environment (and) overall reduce the risk of using these systems. Now when it comes to politics, you’ve got a couple of issues here. One is at the level of civ-mil relations. And Peter Singer brought this up 10 years ago when talking about drones. His concern was that drone operation would be better done by private-sector contractors: as we relied more on drones, what it meant to apply military force would largely be taken over by contractors and, thus, expert knowledge would leave the profession and go somewhere else. And that was going to undermine the credibility and legitimacy of the profession, with political implications. That didn’t exactly happen, because military operators always retained the ability to do this; they were the only ones authorized to use these systems with lethal force. There were some contractors augmenting them, but with AI right now, as we sort through what the private-sector and government roles and expertise are going to be, we have a situation where you could end up . . . one strategy of doing this is that the military expert knowledge doesn’t change: all the data science and algorithms go on on the other side of an interface, where the interface just presents the information that the military operator needs to know, and he responds to that information without really completely understanding how it got there in the first place. I think that’s a concern because that is when expertise migrates outside the profession. It also puts the operators, commanders, and staffs in a position where (a) they will not necessarily be able to assess the results well without some level of understanding, and (b) they won’t be able to optimize the system as its capabilities develop over time. We want to be careful about that because, in the end, the big thing in this issue is expectation management. Because these are risk-reducing technologies . . .
because they’re more precise, they lower risk to friendly soldiers, as well as to civilians and so on. So, we want to make sure that we are able to set the right kinds of expectations regarding the effectiveness of the technology—which will be a thing senior military leaders have to do—so civilian leaders don’t overrely on it, and the public doesn’t become frustrated by a lack of results when it doesn’t quite work out. Because a military that can’t deliver results but still imposes risk on soldiers and noncombatants alike is probably not going to be trusted.LowranceRegarding ethics and politics in relation to AI and the military, I think it’s really important, obviously, throughout the development cycle of an AI system, that you’re taking these types of considerations in early and, obviously, often. So, I know one guiding principle that we have here is that if you break down an AI system across a stack, all the way from the hardware to the data to the model and then to deployment in the application, ethics really wraps all of that. So, it’s really important that the guiding principles already set forth through various documents from DoD and the Army regarding responsible AI and employment are followed here, too. Now, in terms of what we looked at in the paper from the political lens, it’s an interesting dynamic when you start looking at the interaction around the employment of these systems—really, the sense of urgency, let’s say, of leveraging this technology in either a bottom-up or a top-down fashion. What I mean by that is, from a research and development perspective, there’s an S and T (or science and technology) base that really leads the Army’s—and really DoD’s, if you look at it from a joint perspective—development of new systems. But yet, as you know, the commercial sector is leveraging AI now, today, and sometimes there’s a sense of urgency. It’s like, hey, it’s mature enough in these aspects. Let’s go ahead and start leveraging it. And so, a more deliberate approach would be the traditional rollout through the S and T environment, where a system goes through rigorous test and evaluation processes, eventually becomes a program of record, and is then deployed and fielded. But that doesn’t necessarily prohibit a unit right now from saying, “Hey, I can take this commercial off-the-shelf AI system and start leveraging it and go ahead and get some early experience.” So, I think there’s this interesting tension between the traditional program-of-record acquisition effort and this kind of bottom-up, unit-level experimentation, and how those are blending together. And it also brings up the roles, I think, that soldiers and, let’s say, contractors play in terms of developing and eventually deploying and employing AI-enabled systems. You know, inherently, AI-enabled systems are complex, and so who has the requisite skills to sustain, update, and adapt these systems over time? Is it the contractor, or should it be the soldiers? And where does that take place? We’ve looked at different aspects of this in the study, and there’s probably a combination, a hybrid. But one part of the study is we talked about the workforce development program and how important that is, because in tactical field environments, you’re not necessarily always going to be able to have contractors present at these field sites.
Nor are you always going to have the luxury of high-bandwidth communications out to the tactical edge where these AI-enabled systems are being employed. Because of that, you’re going to have to have that technical knowledge of updating and adapting AI-enabled systems resident with the soldiers. That’s one thing we definitely emphasized as part of the study of these kinds of relationships.HostWould you like to share any final thoughts before we go?LowranceOne thing I would just like to reemphasize is that we can overcome some of these technical barriers that we discussed throughout the paper. But we can do so deliberately, obviously, and responsibly. Part of that, we think—and this is one of the big findings from our study—is taking an adaptive teaming approach. We know that AI, inherently, and especially in a targeting-cycle application, is an augmentation tool. It’s going to be paired with soldiers. It’s not going to be just running autonomously by itself. What does that teaming look like? It goes back to this notion of giving control down to the commander level, and that’s where that trust is going to start to come in: if the commander on the ground knows that he can change the system behavior, or change that teaming aspect and the level of teaming that is taking place, that inherently is going to grow the amount of trust that he or she has in the system during its application. We briefly talked a little bit about that, but I just want to echo, or reinforce, it. And it’s this concept of an explainable fuzzy logic controller. The two big inputs to that controller are the risk tolerance of the commander, based on the conditions on the ground (whether it’s counterinsurgency or large-scale combat operations), versus what the AI system is telling them. Generally speaking, in most predictive applications, the AI has some degree of confidence score associated with its prediction or recommendation. So, leverage that. And leverage the combination of those. That should give you an indication of how much trust—or how much teaming, in other words—should take place, for a given function or role, between the soldier and the actual AI augmentation tool. This can be broken down, obviously, into stages, just like the targeting cycle is. Our targeting cycle in joint doctrine for dynamic targeting is F2T2EA: find, fix, track, target, engage, and assess. And each one of those, obviously some more than others, is where AI can play a constructive role. We can employ it in a role where we’re doing so responsibly and it’s providing an advantage, in some instances augmenting the soldiers in such a way that really exceeds the performance a human alone could achieve. And that deals with speed, for example. Or finding those really hidden types of targets—these kinds of things that would be difficult even for a human to do alone. Taking that adaptive teaming lens is going to be really important moving forward.PfaffWhen it comes to employing AI, particularly for military purposes, there’s a concern that the sense of urgency that comes with combat operations will overwhelm the human ability to control the machine. We will always want to rely on the speed. And like Chris said, you don’t get the best performance out of the machine that way. It really is all about teaming. And none of the barriers that we talked about, none of the challenges we talked about, are even remotely insurmountable.
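To make the controller idea concrete, here is a minimal fuzzy-logic sketch in Python with the two inputs just described—commander risk tolerance and the machine’s confidence score (the “80% tank”)—and one output, the degree of autonomy granted. The membership functions, rule base, and thresholds are illustrative assumptions, not the monograph’s actual design.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Map a 0-1 value to degrees of membership in low/medium/high."""
    return {
        "low": triangular(value, -0.5, 0.0, 0.5),
        "medium": triangular(value, 0.0, 0.5, 1.0),
        "high": triangular(value, 0.5, 1.0, 1.5),
    }

def autonomy(risk_tolerance, confidence):
    """Combine commander risk tolerance and the model's confidence score
    into a 0-1 autonomy level; lower output means humans intervene more."""
    rt, cf = fuzzify(risk_tolerance), fuzzify(confidence)
    # Rule base (illustrative): full delegation only when both inputs are high.
    rules = [
        (min(rt["high"], cf["high"]), 1.0),      # machine engages, human audits
        (min(rt["high"], cf["medium"]), 0.6),    # machine acts, human monitors
        (min(rt["medium"], cf["high"]), 0.6),
        (min(rt["medium"], cf["medium"]), 0.4),  # human confirms each engagement
        (max(rt["low"], cf["low"]), 0.1),        # machine only advises
    ]
    total = sum(w for w, _ in rules)
    # Weighted-average defuzzification of the fired rules.
    return sum(w * out for w, out in rules) / total if total else 0.0

# A cautious commander (0.3) with an "80% tank" report gets only partial
# autonomy, so a human stays in the loop for the engagement decision.
print(round(autonomy(risk_tolerance=0.3, confidence=0.8), 2))

In F2T2EA terms, one could imagine capping this output differently at each stage—more autonomy for find and track, less for target and engage—which is one way to read the staged, adaptive teaming Lowrance describes.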
But these are the kinds of things you have to pay attention to. There is a learning curve, and engaging in strategies that minimize the amount of adaptation members of the military are going to have to perform will, I think, be a mistake in the long term, even if it gets short-term results.HostListeners, if you want to really dig into the details here, you can download the monograph at press.armywarcollege.edu/monographs/959. Dr. Pfaff, Col. Lowrance, thank you so much for your time today.PfaffThank you, Stephanie. It’s great to be here.HostIf you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.About the Project Director Dr. C. Anthony Pfaff (colonel, US Army retired) is the research professor for strategy, the military profession, and ethics at the US Army War College Strategic Studies Institute and a senior nonresident fellow at the Atlantic Council. He is the author of several articles on ethics and disruptive technologies, such as “The Ethics of Acquiring Disruptive Military Technologies,” published in the Texas National Security Review. Pfaff holds a bachelor’s degree in philosophy and economics from Washington and Lee University, a master’s degree in philosophy from Stanford University (with a concentration in philosophy of science), a master’s degree in national resource management from the Dwight D. Eisenhower School for National Security and Resource Strategy, and a doctorate in philosophy from Georgetown University.About the Researchers Lieutenant Colonel Christopher J. Lowrance is the chief autonomous systems engineer at the US Army Artificial Intelligence Integration Center. He holds a doctorate in computer science and engineering from the University of Louisville, a master’s degree in electrical engineering from The George Washington University, a master’s degree in strategic studies from the US Army War College, and a bachelor’s degree in electrical engineering from the Virginia Military Institute.Lieutenant Colonel Bre M. Washburn is a US Army military intelligence officer with over 19 years of service in tactical, operational, and strategic units. Her interests include development and mentorship; diversity, equity, and inclusion; and the digital transformation of Army intelligence forces. Washburn is a 2003 graduate of the United States Military Academy and a Marshall and Harry S. Truman scholar. She holds master’s degrees in international security studies, national security studies, and war studies.Lieutenant Colonel Brett A. Carey, US Army, is a nuclear and counter-weapons of mass destruction (functional area 52) officer with more than 33 years of service, including 15 years as an explosive ordnance disposal technician, both enlisted and officer. He is an action officer at the Office of the Under Secretary of Defense for Policy (homeland defense integration and defense support of civil authorities). He holds a master of science degree in mechanical engineering with a specialization in explosives engineering from the New Mexico Institute of Mining and Technology.
May 22, 2023 • 0sec

Conversations on Strategy Podcast – Ep 20 – Dr. Roger Cliff – China’s Future Military Capabilities

The 2022 National Defense Strategy of the United States of America identifies China as the “pacing challenge” for the US military. This podcast examines the process by which China’s military capabilities are developed, the capabilities China’s military is seeking to acquire in the future, and the resulting implications for the US military. To date, all the extant studies have merely described the capabilities the People’s Liberation Army is currently acquiring. The monograph goes further by drawing on the Chinese military’s publications to identify and discuss the capabilities the People’s Liberation Army seeks to acquire in the future. The monograph finds China’s military is engaged in a comprehensive program to field a dominant array of military capabilities for ground, sea, air, space, and cyberspace warfare. Countering these capabilities will require the United States and its allies to engage in an equally comprehensive effort. The monograph’s findings will enable US military planners and policy practitioners to understand the long-term goals of China’s development of military capabilities and to anticipate and counter China’s realization of new capabilities so the United States can maintain its military advantage over the long term.Read the monograph: https://press.armywarcollege.edu/monographs/960/Keywords: China, PLA, People’s Liberation Army, cyber warfare, spaceEpisode Transcript: China’s Military CapabilitiesStephanie Crider (Host)You’re listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.Joining me today is Dr. Roger Cliff, a senior intelligence officer and former research professor of Indo-Pacific affairs in the Strategic Studies Institute at the US Army War College. He’s the author of China’s Future Military Capabilities.It’s great to talk with you again, Roger. Thank you for making time to speak with me.Dr. Roger CliffI’m glad to have this opportunity.HostLet’s get right to it. Why did you write this monograph? Why now?CliffThis monograph was prompted by my observation that many of the US Army’s long-term planning documents had set the year 2035 as the target for the capabilities that they described the Army seeking to develop. And that struck me because the Chinese military has also identified 2035 as the target year for its modernization program. So, they have a three-step program, the first step of which I guess is now complete, which was to largely complete the process of mechanization by 2020 and then to have basically completed its overall modernization process by 2035 and then to become a world-class military by mid-century. So, I was struck by the coincidence that both the (US) Army and the Chinese military had chosen 2035 as their target years.HostWhat do we know about China’s process for developing military capabilities?CliffWe actually know quite a bit about this process. It starts with the issuing of what are called the military strategic guidelines. These are a set of principles that the Chinese top leadership issues that describe the types of military conflicts the Chinese military needs to prepare for, who the most likely adversaries are, and what the nature of future military conflict is likely to be. They are not issued on a regular basis. They’re issued whenever the leadership feels like they need to be revised or reissued. The most recent revision occurred in 2019.
Prior to that, it happened in 2014, 2004, and 1993. So, you can see there isn’t any specific pattern other than it generally happens about once every 5–10 years. The rest of the process, however, is quite regularized, and it’s tied to the Chinese government’s overall 5-year plan cycle. So, every 5 years, each of the services in the Chinese military issues an overall service strategy, which looks out at the next 20 years and the types of capabilities and force structure the service is going to need over that period. And then, based on that strategy, 10-year plans and 5-year programs are developed. And then, finally, based on those, the specific budgets in terms of research and development, equipment acquisition, and so on are issued for each individual year.HostBased on your current research, can you give us an idea of what China’s future military might look like?CliffSo, the Chinese military in the future is going to look quite a bit like the US military—the US military of today, in particular. They are seeking to acquire many of the same capabilities that we have. Up until today, they have been largely focused on potential conflicts in their backyard, if you will, but they are developing more and more in the direction of being a global military power with long-range power projection assets like aircraft carriers, long-range bombers, aerial refueling aircraft, and those sorts of things. They really seem to aspire to be a military that is in many ways comparable and, of course, hopefully, from their perspective, one day superior to the US military.HostHow can the United States and its allies, then, prepare for and counter the PLA of the future?CliffThe most important thing to do is recognize what the PLA’s long-term goals are. And by PLA, I mean the People’s Liberation Army, which is what the Chinese military calls itself. I think there’s a lot of focus from the US perspective on the current capabilities of the Chinese military or those capabilities likely to emerge in the next few years. The problem with that approach is that for the US to develop capabilities takes much more than just a few years, and the same is true of the Chinese military. So, we need to look at where they’re going over the longer term and not be developing a military to counter the Chinese military of today, because when we get to 2035, they will be quite a bit different than they are today. So that’s maybe the most basic principle. But, as I said, the US needs to start thinking about a world in which the Chinese military isn’t going to be just a regional power but a global power. And the US military is likely to encounter the Chinese military increasingly around the world. So this is likely to develop into a global contest for military superiority in which both of the nations are projecting power far abroad. Now, the US has been doing that for many years, but we’ve also become accustomed to being the only nation that’s doing that, especially since the end of the Cold War. And those days are coming to an end. We are going to see a Chinese military in the future, assuming everything goes according to plan, that is very much a worldwide rival to the US military.HostWhat else do we need to know or consider?CliffAn important thing to recognize is that a lot of where the Chinese military is going isn’t really a mystery. If you look into their own publications, they tell us what they are planning on doing.
They don’t make nearly as many things public as the US military does, so you can’t go on the web and download all those planning documents that I talked about earlier, but if you look at textbooks, the white papers that the Chinese military publishes periodically, and so on, you can get a pretty good sense of what their intentions are. We need to take those documents seriously and start to prepare now based on what they’re telling us they intend to do in the future.HostDo you have any final thoughts you’d like to share before we go?CliffI just want to thank the War College for the opportunity to do this kind of research. This is the type of long-term, in-depth research that one cannot do at very many places. For me, it was tremendously satisfying to have this opportunity, but I think, also, it shows that if we devote the time and resources to analyzing the publications of the Chinese military, it’s possible to learn quite a bit of value to the US military’s own planning processes.HostIf you’re interested in reading the monograph, you’ll find it at press.armywarcollege.edu/monographs, and it’s called China’s Future Military Capabilities. Roger, it’s always a pleasure working with you. Thank you so much.CliffThank you. It’s great to be here.HostIf you enjoyed this episode and would like to hear more, you can find us on any major podcast platform. About the author: Roger Cliff is a senior intelligence officer and former research professor of Indo-Pacific affairs in the Strategic Studies Institute at the US Army War College. His research focuses on China’s military strategy and capabilities and their implications for US strategy and policy. He previously worked for the Center for Naval Analyses, the Atlantic Council, the Project 2049 Institute, the RAND Corporation, and the Office of the Secretary of Defense. He holds a PhD in international relations from Princeton University; a master of arts degree in Chinese studies from the University of California, San Diego; and a bachelor of science degree in physics from Harvey Mudd College. He is fluent in spoken and written Mandarin Chinese.
May 11, 2023 • 0sec

Conversations on Strategy Podcast – Ep 19 – Zenel Garcia and Kevin Modlin – Revisiting “Sino-Russian Relations and the War in Ukraine”

In this podcast, Zenel Garcia and Kevin Modlin draw on recent visits of Chinese officials to Russia to support their contention that Sino-Russian relations are a narrow partnership centered on accelerating the emergence of a multipolar order to reduce American hegemony, and they illustrate this point by tracing the discursive and empirical foundations of the relationship. Additionally, they highlight how the war has created challenges and opportunities for China’s other strategic interests.Read the article here: https://press.armywarcollege.edu/parameters/vol52/iss3/4/Download the full episode transcript here: https://media.defense.gov/2023/Nov/15/2003341254/-1/-1/0/COS-19-PODCAST-TRANSCRIPT_GARCIA_MODLIN.PDFKeywords: China, Russia, Ukraine war, strategic partnership, multipolarityAbout the authors: Dr. Zenel Garcia is an associate professor of security studies in the Department of National Security and Strategy at the US Army War College. His research focuses on the intersection of international relations theory, security, and geopolitics in the Indo-Pacific and Eurasia.Dr. Kevin D. Modlin is an instructor at Western Kentucky University, where his research interests focus on security studies and international political economy. He holds a PhD in international relations from Florida International University and a master’s degree in economics from Western Kentucky University. He also served as a senior legislative aide for retired Congressman Ron Lewis.
May 10, 2023 • 0sec

Conversations on Strategy Podcast – Ep 18 – Lukas Cox – On “Countering Terrorism on Tomorrow’s Battlefield and Critical Infrastructure Security and Resiliency”

In this episode of Conversations on Strategy, Lucas Cox shares his thoughts on being an intern working on two collaborative studies for NATO.Read the collaborative study Countering Terrorism on Tomorrow’s Battlefield: Critical Infrastructure Security and Resiliency (NATO COE-DAT Handbook 2) here.Read the collaborative study What Ukraine Taught NATO about Hybrid Warfare here: https://press.armywarcollege.edu/monographs/956Episode Transcript: On Countering Terrorism on Tomorrow’s Battlefield and Critical Infrastructure Security and ResiliencyStephanie Crider (Host)You’re listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.Today I’m talking with Lucas Cox, who at the time of this recording was an intern with the Strategic Studies Institute and a graduate of the University of Washington’s Henry M. Jackson School of International Studies. He assisted with two collaborative studies: What Ukraine Taught NATO about Hybrid Warfare and Countering Terrorism on Tomorrow’s Battlefield: Critical Infrastructure Security and Resiliency.Welcome, Lucas.Lucas CoxIt’s a pleasure to be here. Thank you.HostTell us how you ended up working on not one, but two books for the Army War College.CoxSo, this all came about as a great opportunity from my dear professor and mentor, Dr. Sarah Lohmann. She’s a University of Washington professor at the Jackson School, which is where I got my undergrad in international studies. And so, we do this great project called “the task force.” It’s sort of a capstone project. And it’s a great opportunity to work as a team and to get into the real sort of meat of policy issues and present our findings to someone on the ground, someone that’s actually in the field, which is something that you don’t really get in four years at the university, especially in Washington state, where we’re away from the policy world. And so, I had the privilege of being in her task force and being chosen as the chief liaison for our task force to deal with the NATO Center of Excellence for Defense Against Terrorism (COE-DAT), as well as everyone here at SSI under the guidance of Dr. Carol Evans. That led to me leading the writing of the first chapter of this main book. I was able to present our findings on that chapter remotely at two COE-DAT conferences in Turkey, and there’s another one coming up in October, which I’d love to attend as well. And so that led to the great opportunity where Dr. Evans and Dr. Lohmann said, “Why don’t you come aboard and keep working on these projects and sort of see the project through for that book, at least?” And then the energy security hybrid warfare book is another project of Dr. Lohmann’s that she’s been working on for the last couple of years, at least, with the NATO Science and Technology Organization. Those are two simultaneous projects, and I volunteered to help in any way I could with them. It’s been really exciting.HostIt sounds exciting. What do you see as the most important takeaway from the chapter you wrote for the critical infrastructure book?CoxI had the great pleasure of wrapping up my internship here over in Upton Hall at the US Army War College, and I chose the issue of foreign acquisition of European infrastructure. And so, this is an issue that has to do . . . it’s continent-wide . . .
it has to do with the EU and with NATO and with the US, as well. Over the past few decades, a lot of critical infrastructure (and when we say that, a lot of it is infrastructure that’s needed for military operations) has become privatized, which is great for competition and consumer choice and innovation and all that stuff. But it also means that sometimes you sacrifice resilience and redundancy for profit and price in a way that you wouldn’t if it were under government leadership with the security apparatus in place. And more than that, since . . . mostly since the early 2000s, a lot of that has come under foreign control. So, you think about Russian gas pipelines and Russia being able to get a hold on an energy supply for Europe, because a lot of not only the gas and the oil but also the infrastructure that delivers it is at least partly owned by Russian companies. And so, there’s that, as well as a lot of Chinese firms coming into Europe and buying infrastructure and constructing ports. It’s part of that Belt and Road Initiative that is so in the news. It’s a huge, decades-long project for the PRC (People’s Republic of China). A lot of those concerns come from the closeness of these firms to, or their direct supervision by, the Chinese government and fears that, either through direct control or through political influence or predatory financing, especially in countries that are strapped for cash and need new infrastructure, those pieces of critical infrastructure being under the control of Russia and China pose real threats to their usability and their reliability for European defense. And a lot of these points are a port or a railway where, if it goes down or is unable to be used, a whole NATO or US or local military mission could collapse. We made a few policy recommendations for NATO to take a more assertive role as an advisor and as a supervisor, working together with the EU—because the EU is the one that has authority over laws and regulations in Europe—but with NATO also having an important role to play in, hopefully, guiding that process in a way that local governments can’t or don’t when they have their own local standards that may not be up to snuff.HostWhat was your experience like doing analyst work for the first time and on such an important project?CoxIt was daunting but also really exciting. Probably my favorite thing, despite all of the crazy deadlines and the 300 pages of spellchecking that I just came here from doing, was really the delegation from Dr. Lohmann to me to be able to do some of the really important work. It took me a little bit by surprise, but I was definitely grateful for her trust in me—and her guidance. So, a previous intern had constructed these maps in the hybrid warfare energy security book where we’re looking at vital points of European infrastructure for each of the 12 case studies that authors have written. And so there were, say, ports or energy grids or pipelines detailed on these maps, and we were assigned to give them a threat assessment: are these under cyber risk or disinformation risk in a time period of six months, a year, two years? That was especially difficult being assigned that—for example, here are all these energy grids and wind turbines and nuclear plants in Germany and Poland and Belgium. And my job was to learn as much as I could about them, learn as much about the overall security situation, and come up with a threat assessment—whether these places were going to be attacked in six months by Russian cyber operations or disinformation.
And so that was really important work to do for an intern. But I was very honored to have that role, and, going forward, it will hopefully be sort of a great foundational experience in my career.HostWhat’s next for you? What are your future plans?CoxI am finishing up here at the Army War College, going home to Seattle, and then I’m going to be traveling a bit starting in September, ultimately to end up in Brussels working as an intern—which this experience allowed me to do—with the Science and Technology Organization, which is the outfit that is overseeing and partnering with us for that hybrid warfare energy security book.I am very excited for all the work that they do. I know it’s a small office in Brussels, sort of in the middle of the action at NATO headquarters, which is very exciting for me. It’s been a dream to work for that organization for a long time, and then after that, we’ll see.HostThis was a real treat. Thanks, Lucas.CoxIt’s so nice to talk to you.HostListeners, if you’d like to read the collaborative studies, visit press.armywarcollege.edu/monographs. If you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.About the author: Lucas M. Cox, at the time of writing this publication, was an intern with the Strategic Studies Institute at the US Army War College and a graduate of the University of Washington Henry M. Jackson School of International Studies with a degree in international security, foreign policy, peace, and diplomacy and a double minor in political science and Russian, Eastern European, and Central Asian studies with a focus on the former Soviet economic and security spheres. He is also the 2023 University of Washington Triana Deines Rome Center Intern and will begin an internship at NATO’s Science and Technology Organization in April 2023.
Mar 29, 2023 • 0sec

Conversations on Strategy Podcast – Ep 16 – Dr. Heather S. Gregg and Dr. James D. Scudieri – On “The Grand Strategy of Gertrude Bell” - From the Arab Bureau to the Creation of Iraq

The remarkable life of early-twentieth-century British adventurer Gertrude Bell has been well documented through her biographies and numerous travel books. Bell’s role as a grand strategist for the British government in the Middle East during World War I and the postwar period, however, is surprisingly understudied. Investigating Gertrude Bell as both a military strategist and a grand strategist offers important insights into how Great Britain devised its military strategy in the Middle East during World War I—particularly, Britain’s efforts to work through saboteurs and secret societies to undermine the Ottoman Empire during the war and the country’s attempts to stabilize the region after the war through the creation of the modern state of Iraq. As importantly, studying the life and work of Bell offers a glimpse into how this unique woman was able to become one of the principal architects of British strategy at this time and the extraordinary set of skills and perspectives she brought to these efforts—particularly, her ability to make and maintain relationships with key individuals. Bell’s life and work offer insights into the roles women have played and continue to play as influencers of grand strategy.Read the monograph here.Episode Transcript: On The Grand Strategy of Gertrude Bell: From the Arab Bureau to the Creation of IraqStephanie Crider (Host)You’re listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.Conversations on Strategy welcomes Drs. Heather Gregg and Jim Scudieri. Gregg is a professor of irregular warfare at the George C. Marshall European Center for Security Studies and the author of The Grand Strategy of Gertrude Bell: From the Arab Bureau to the Creation of Iraq.Scudieri is the senior research historian at the Strategic Studies Institute. He’s an associate professor and historian at the US Army War College. He analyzes historical insights for today’s strategic issues.Heather, Jim, thanks so much for being here. I’m really excited to talk to you today.Dr. Heather S. GreggIt’s great to be here. Thank you so much.Dr. James D. ScudieriLikewise, thank you for taking the time to meet with us.HostWhat did the Middle East look like in the lead-up to World War I? Who were the major players in the region?GreggUnlike the Western Front, the war was very different in the Middle East. And I would say this was a big game of influence. And you had major European powers. You had a declining Ottoman Empire. You had the rise of Arab nationalism. And all of this kind of came into a very interesting confluence of events during World War I.ScudieriAnd complicating that amongst the major players … the British don’t have a unified position, so if you look at stakeholders, you need to distinguish between the British leaders in London, those in Cairo, and those in India.GreggThat’s a huge point—there is a great power struggle between these three entities over who should be controlling the Middle East and why. And this becomes important for the story of Gertrude Bell.HostThe manuscript is divided into three periods—during World War I, the period of British military occupation of Mesopotamia, and Britain’s creation of the State of Iraq during the mandate era. Let’s discuss British military and grand strategy in each period.
What was British military strategy in the Middle East during World War I?ScudieriSo, there’s still a lot of historical debate on exactly what the strategy was. Some would say there wasn’t much of a strategy, but part of that is that strategic aims changed as the war progressed, and the war was not going well for the Allies in the early years. And even through 1917 there was a concern that they might lose. So those strategic objectives in the Middle East change as they determine that they will not lose. And not only that, but if you win, what do you want the post-war world to look like?GreggSo yeah, I would add to this that there were some really interesting constraints on Britain and other actors. They didn’t have the manpower to put into the Middle East because it was all being dedicated to the Western Front—or most of it was. They weren’t entirely sure (I would echo Jim’s comments here) about what the strategy should be, just that they wanted to frustrate and try to undermine Ottoman authority in the region. They devised a strategy that worked with and through the Arab population to try to undermine Ottoman authority. So, this is what we would call an unconventional warfare strategy today. But that was supposed to be cheaper and require less manpower than actually deploying British troops, and this is particularly true after what happened at Gallipoli, (which was) for all intents and purposes, a pretty colossal failure.HostSo, this whole podcast is built on your monograph about Gertrude Bell. Let’s talk about her a little bit. How did Gertrude Bell contribute to the unconventional warfare strategy Britain created?GreggGertrude Bell is a fascinating individual. She was a British national. She was one of the first women to attend Oxford University. She got a First Class in modern history. She spoke languages. She traveled throughout the region. And she was hired first by the British Admiralty but then became part of a small group in Cairo called the Arab Bureau. And their job was to devise some sort of strategy to undermine Ottoman authority. And there she worked with someone we all know—T. E. Lawrence, known as Lawrence of Arabia. And together, with a small team of between 7 and 15 people, they helped devise this unconventional warfare strategy of working by, with, and through local Arab leaders to try to undermine Ottoman authority.ScudieriShe’s a fascinating character because she reminds historians that you cannot predict the future. You cannot predict it with regard to strategy; you also can’t predict it with regard to some individuals’ career paths.HostWhy did the initial plan not succeed? How did they adjust it?GreggSo, there was this effort to work through the Sharif of Mecca. This was a family that was in charge of the two holy sites in Mecca and Medina. The father’s name was Hussein, and he had two sons who were very active in trying to foment an uprising among Arab officers within the Ottoman military. Hussein promised that there were hundreds and hundreds of Arab officers who were part of secret societies that he could encourage to rise up against the Ottoman Empire. And it ended up that this just wasn’t true. He overpromised what he could achieve.
This initial strategy was largely unsuccessful.

Scudieri
This experience highlights how nothing is easy, and things are hard.

Host
So true.

Scudieri
The ability to have British support brings not only weapons and equipment, but it brings lots of money.

Gregg
And with that, the potential for corruption: making promises to get money and to get weapons. And Britain promised, in a series of correspondence between McMahon and Hussein, that he would have his own independent Arab state after the war in exchange for this uprising, which, after about a year's time, had not succeeded.

So, the second approach was that T. E. Lawrence and Hussein's son decided to engage in, basically, sabotage against lines of communication, particularly railway lines. This is what the famous movie Lawrence of Arabia captures. And this was more successful, in combination with other things that were dragging down the Ottoman Empire.

Scudieri
The success of the strategy underlines how sometimes a better approach is counterintuitive, because by focusing on sabotage, they wanted to starve the Turkish forces in the area of resupply, versus the more traditional focus on annihilating the enemy army, which they did not have the power to do.

Gregg
A really interesting observation. And a lesson that still holds today.

Host
The British military captured Baghdad in March of 1917; together with Basra, which it had captured in 1914, this put two of the three Ottoman vilayets of Mesopotamia under British control. How did Bell help shape British military strategy to address this reality?

Gregg
So, I would like to echo Jim's point that, fascinatingly enough, it seemed that Britain had not devised a strategy for military occupation, even though this became their goal—to take Baghdad. And they already had Basra. And so, Bell, together with someone named Percy Cox, had to very quickly devise a strategy of, essentially, occupation. This also didn't necessarily go well, and I think it forced them (until the mandate era) to really try to keep things in line rather than make things prosper. I don't know, Jim, what your thoughts are on that.

Scudieri
So, mine would be very similar. It's interesting to see in some of the primary sources how relatively rapidly the British put together an occupation plan and also tried to pool available talent. And they get by in the course of the war. But the challenges associated with long-term occupation and the transition to mandate, and then some missteps, really blow up after the war.

Host
What were some of the challenges and opportunities in this period?

Gregg
I would say some of the really interesting challenges were also opportunities that might have been missed. There was some local leadership and local talent that I think could have been very useful had the British reached out and engaged some of that leadership. From my read of Gertrude Bell, she was rather suspicious of the Shia population and Shia leaders. So, there were some missed opportunities to try to engage the Shia population, which was a good chunk of the population that they controlled. And so, for me, both the big challenge and the missed opportunity was what to do with the local population, (and) how to engage the local population and harness local leadership.

Scudieri
There's also some confusion associated with thinking in terms of Arab kingdoms, because there's no unitary Arab nationalism at this time. The kingdoms Britain supported in the post-war period are really Hashemite.
And that doesn't take account of a very conflicting sense of loyalty to various different tribes and ethnicities, and so on and so forth. And perhaps the biggest one is the difference between the Hashemites and the House of Saud.

Gregg
Just to build on this, and this is an excellent point . . . this was a really interesting decision that Gertrude Bell and T. E. Lawrence actually made, which was to engage Faisal, who was the son of Hussein, and to promote him to be the first king of Iraq. And as Jim just mentioned, he was a Hashemite. He had never actually been to Iraq and was given this leadership position. The British gave him that, and this ended up being a really difficult thing . . . bypassing local leadership and choosing to engage the leaders they knew as opposed to the leaders the local people knew.

Scudieri
The British also confronted a major problem in the post-war discussions: as they now win the war, and they're trying to come up with these friendly kingdoms, they have big issues with what those borders are going to look like with France. Their longtime wartime ally is now going to be a post-war rival, if not adversary. There are some major post-war disagreements, and you can see that by looking at the documents that talk about the Mosul vilayet, which had unclear borders. At first it wasn't even clear if that area would be part of Iraq, and if so, where the border would end. And likewise with the borders with Palestine.

Gregg
This is a really excellent point, because then you had the birth of the Republic of Turkey and Atatürk, who also made a claim to Mosul. So, you had a really interesting scramble over borders, over territory, with overlapping claims and rights. This was a huge mess that took, in many cases, decades to sort out. Some would argue some of this is still being sorted out.

Scudieri
A good example of what kind of a wicked problem all of this became: most folks will talk about the Treaty of Versailles, but it took five treaties to end the First World War, and it took two with Turkey because Turkey refused to sign the first one.

Gregg
I think this is a fascinating story, too, that you had the collapse of four empires in World War I, right? The Ottoman Empire was just one of them. You had the Ottoman, the Austro-Hungarian (Hapsburg), the German, and the Russian empires all collapse as a result of World War I. And Europe was left trying to sort out what to do with all these lands and their colonies. And it was a huge challenge.

Scudieri
And some of the Allied discussions included Russia, and Russia is now off the table because of the Bolshevik revolution.

Host
Let's talk about the third period from the monograph. The war ends in 1918, and the 1919 Paris Peace Conference and the resulting treaties created the mandate system, which required European powers to transition most former colonies and territories of the Ottoman Empire into self-ruled states. How did Gertrude Bell help shape Britain's vision for transitioning Mesopotamia into the state of Iraq?

Scudieri
I would suggest that using the term vision might be a bit premature given how quickly events changed, from trying not to lose the war, to figuring out how to win the war, and then to trying to sort out what the post-war world would look like.
But Gertrude Bell is an especially fascinating individual case study because she immersed herself in the culture, in the local conditions, and tried to translate that into the strategic vision for Iraq, which was a very unclear path, in large measure because of the disagreements between the French and the British and what that post-war world would look like in the region.

Gregg
I think, for me, the thing that was so puzzling about what Gertrude did in this period was, I believe she cared deeply about the people and the region. And you know, she ends up dying in Iraq. She's buried there to this day. And I believe she cared about the people in the region. However, some of the decisions she made in this period just seem very counterintuitive to me. The biggest one was creating a kingdom and putting a foreign individual on the throne as the king. And this was against many Shia leaders' wishes. There was an individual named Sayyid Talib (al Naqib). He was deported to Ceylon, which is Sri Lanka today. They got rid of him because he didn't agree with this decision. And I think, at the end of the day, Gertrude Bell had to weigh, on the one hand, what it meant to be a British national and serve British interests, and, on the other, what was in Iraq's interest. And I think being a British national was what won in the end.

Scudieri
And for us to understand that, I think we should avoid a clear black-and-white dichotomy, because it was a lot more complicated than that. And I would return to the post-war competition between Britain and France, because that Arab kingdom was supposed to be in Syria. But the French dug their heels in.

Gregg
They actually were able to create a kingdom in Damascus, but it lasted less than a year. And then Faisal was deposed by the French. And, I think this is a big question of debate, but the British then embraced him to be the king of Iraq.

Host
What were the priorities? What was at stake?

Gregg
So, there's a big debate on this, too, a big, hot debate, I've learned. In the primary source documents, I identified two or three big things at stake. The first is military bases. Britain wanted a seaport but also wanted air bases. The Royal Air Force was created in 1918, the first independent air force. They needed a land route from the Middle East to India, and the bases in Iraq seemed to matter a lot. This came up a lot in discussions. The second thing I would add, and this is the controversial one, is that I believe oil was a big concern. Britain had begun converting its naval fleet from coal to oil before World War I, and it was coal rich but had no oil. So, the pursuit of oil and securing oil mattered. Everyone was fighting over Mosul because they suspected there was oil there, and that proved to be true. Oil became a major concern. There's a third argument, which is that markets mattered: being able to have yet more people who could be markets for the British Empire seemed to matter. Last but not least, and I think this is the one piece Jim and I will hopefully agree on, is that Britain was an empire, it had managed to survive World War I, and it wanted influence in that region. A lot was at stake for Britain, just as an empire, and for its ability to wield influence.

Scudieri
Heather's made some interesting points there, because those RAF bases are part of having a system that goes hand in hand with friendly regimes, because the mandates aren't going to become long-term colonies. They did understand that at the time.
Oil is another interesting point about how priorities change. In 1914, oil wasn't such a big deal, but the British already did have an interest through the Anglo-Persian Oil Company. But war sometimes accelerates change, and the First World War accelerated the importance of oil, because the prewar British conversion of the Royal Navy to oil had barely begun . . . about 100 ships were oil-fired in 1914, but none of the battleships; the new class coming in from 1915 onward would be the first oil-fired battleships. And the explosion in demand for oil, driven not just by the Royal Navy's conversion but by the motorization away from horse transport, means oil will have a far more central role in the post-war world than it did before, or even during, the war.

Host
So let's fast-forward a little bit. How did it unfold?

Gregg
Well, it didn't go great, I think it's fair to say. And, for me, this was a very humbling story about how you can have good intentions, you can have experts, but this is extremely difficult to do. Obviously, as an American, in the back of my mind is always what happened between 2003 and 2011 and beyond, and our efforts to try to stabilize Iraq as part of Operation Iraqi Freedom. But you end up having a major uprising in Iraq that was actually put down by the persistent presence of the Royal Air Force. You have challenges to Faisal's leadership. By 1958, the entire royal family is murdered, and Iraq becomes a republic. You have lingering political instability and ethnic tensions that I think were not a done deal but got exacerbated by a lot of the decisions made during this period.

Scudieri
All of this turmoil is on top of the turmoil going on in the rest of the world. Most people don't realize how much fighting around the world continued after 1918. There's still a lot of instability and unreconciled issues around the world. The US has gone largely isolationist. The French, though determined that they would stay in Syria, if not Lebanon, are really focused on European security because they do not want to allow Germany to rise again. So that's your primary concern—just trying to contemplate the sheer losses of the war and what came from it. And I'm not sure to what extent they could have forecast, in that region, how Arab would be fighting Arab, such as between the Saudis and the kingdoms of Transjordan and/or Iraq.

Host
What are the takeaways? What can we learn from Bell and British military and grand strategy during this period?

Gregg
I think there are a lot of really valuable lessons here. Some of the positive things . . . I go back to the Arab Bureau; I appreciate that the British military was not afraid to bring in civilians and get a civilian voice. They built a really agile, small, and diverse team. They would bring experts in for certain questions and then send them home and bring other experts in. I think there's a really interesting story there about team building and problem solving. I think there are also a lot of very humbling lessons to learn. For me, an eerie similarity to, perhaps, what the United States did was not including the population enough in the stabilization process and in the postwar peace. I think that really undermined British efforts. And needing to work by, with, and through the population, not just during the war but after, is deeply important.

Scudieri
I would echo Heather's comments, as well as the fact that Gertrude Bell is a fascinating case study in talent management.
She had no specialization or training in terms of Mesopotamia, per se. She was brought in as an outsider, on the basis of her educational background, in the hope that she might be able to help think through the problem set, and then she winds up becoming a subject matter expert on Iraq.

Gregg
Although I would add a little caveat to that, which is that she had traveled through the Middle East in the 1911–12 time frame, and she had mapped the human terrain. This is something that we also tried to do in both Iraq and Afghanistan. And so, she had gained attention because she had made this trip. That doesn't make her an expert, I agree. But she had some on-the-ground knowledge of the population's tribal dynamics that no one else seemed to have. And that was a great starting point from which she then built her expertise.

Scudieri
So that's an interesting learning point on how, in the midst of war, you can still use talent management to try to get the biggest bang for the buck and save some effort.

Gregg
That's a great point. I love that.

Host
Absolutely. I'm just going to plug the monograph right here. You can download it at press.armywarcollege.edu/monographs. Thank you both so much. What a treat. I'm sorry we had so little time to cover such an expansive and interesting topic.

Gregg
Thank you so much for this opportunity. It's great to be with you both. Thank you, Jim, for a wonderful conversation.

Scudieri
Well, many thanks for the ability to share this time together.

Host
If you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.

About the authors:
Gregg is a professor of irregular warfare at the George C. Marshall European Center for Security Studies and the author of The Grand Strategy of Gertrude Bell: From the Arab Bureau to the Creation of Iraq. Gregg earned a PhD in political science from the Massachusetts Institute of Technology, a master's degree in Islam from Harvard Divinity School, and a bachelor's degree (with honors) in cultural anthropology from the University of California at Santa Cruz. She is the author of Religious Terrorism (Cambridge University Press, 2020), "Religiously Motivated Violence" (Oxford University Press, 2018), Building the Nation: Missed Opportunities in Iraq and Afghanistan (University of Nebraska Press, 2018), and The Path to Salvation: Religious Violence from the Crusades to Jihad (University of Nebraska Press, 2014) and coeditor of The Three Circles of War: Understanding the Dynamics of Modern War in Iraq (Potomac Books, 2010).

Scudieri is the senior research historian at the Strategic Studies Institute. He's an associate professor and historian at the US Army War College. He analyzes historical insights for today's strategic issues. He holds a Bachelor of Arts degree in History from Saint Peter's College, now University (1978); a Master of Arts degree in History from Hunter College, City University of New York (1980); a Master of Military Art and Science degree from the U.S. Army Command and General Staff College (1995); and a Doctor of Philosophy degree in History from the Graduate School and University Center, City University of New York (1993).
