
ISF Podcast

Latest episodes

May 21, 2024 • 25min

S26 Ep2: Thom Dennis - Becoming a Leader of the Future: Learning to let go and trust your gut

Thom Dennis, an executive coach and CEO, discusses trust, delegation, and remote work challenges in leadership. He emphasizes letting go, setting clear objectives, and predicts a shift towards prioritizing society's demands over corporate standards. Key topics include embracing change, avoiding burnout, and fostering trust and community within organizations.
May 14, 2024 • 24min

S26 Ep1: Erik Avakian - Fuelling Business Growth with Modern Security Leadership

Today, Steve is speaking with Erik Avakian, who served as CISO for the Commonwealth of Pennsylvania in the United States for more than twelve years before moving into the private sector, where he currently works as the technical counselor at Info-Tech Research Group. Erik brings his passion and experience to a lively conversation in which he and Steve discuss coping with change through multiple leadership turnovers, practical examples of how security leaders can demonstrate their department’s value to an organization beyond theoretical breach prevention, and overcoming challenges in the public and private sectors.

Key Takeaways:
1. Embracing change in state/local government requires technical architecture and common architecture.
2. Public sector security faces unique challenges, including political considerations.
3. It’s critical for public funds to be used efficiently while also reducing duplication of work and building knowledge sharing across agencies.
4. Security testing and phishing simulations can demonstrate return on security investment, saving time and money in the long run.

Tune in to hear more about:
1. Embracing change in security leadership in the public sector (0:00)
2. Building security foundations in public sector organizations (4:45)
3. Funding challenges in security, with tips for effective resource utilization, building strong teams, and collaboration (8:48)
4. Demonstrating security value to business leaders through cost-benefit analysis and service metrics (14:02)
5. Demonstrating security value to non-technical stakeholders through practical examples (18:33)

Standout Quotes:
1. “One of the reasons I love the industry and I loved the position of CISO is you're constantly trying to just improve, right? You're not trying to rebuild all the time. You know that the business might want to rebuild, but you're there to constantly improve that foundation, continually building your team, and continually building your capabilities. So regardless of who comes and goes, you have that foundation, and you continue to grow it.” - Erik Avakian
2. “It's really about enabling the business. How can we say yes, but do things more securely and put a positive spin on it? Whereas, you know, in the past, security is looked at, oh, these are the guys that say no. So really, a CISO's a partner to the business, a collaborator building relationships, and really, that's been the change, right? It's gone from being less of a technical kind of a thing to being a coach, being a leader, and really working and building those relationships at the business level.” - Erik Avakian
3. “I look at it as almost like a baseball team. So in the baseball world, you have a catcher, you have a pitcher, you have all these people on the field. And it's identifying what are the strengths of your team, and letting those players — if we look at it from that perspective — letting them thrive, letting them grow in the position that they're passionate about. And then you can just grow in that passion, give them the training, give them extra training, helping them build where they're really good at and what they really like to do. And the baseball world is that example. We wouldn't necessarily make the pitcher catch — they might not be comfortable with that — or the catcher pitch, and all sorts of other things. Because they do what they do well, that's their position on the field. And what I've found is that if we can do that, we can build our teams and build rock stars out of them in the places where they really are passionate, then we have retention. I think my retention throughout my tenure was almost 99%, because I looked at people as to what drives them.” - Erik Avakian

Mentioned in this episode:
ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter

From the Information Security Forum, the leading authority on cyber, information security, and risk management.
Apr 30, 2024 • 23min

S25 Ep5: Boosting Business Success: Unleashing the potential of human and AI collaboration

Today, Steve and producer Tavia Gilbert discuss the impact artificial intelligence is having on the threat landscape and how businesses can leverage this new technology and collaborate with it successfully.

Key Takeaways:
1. AI risk is best presented in business-friendly terms when seeking to engage executives at the board level.
2. Steve Durbin takes the position that AI will not replace leadership roles, as human strengths like emotional intelligence and complex decision making are still essential.
3. AI risk management must be aligned with business objectives while ethical considerations are integrated into AI development.
4. Since AI regulation will be patchy, effective mitigation and security strategies must be built in from the start.

Tune in to hear more about:
1. AI’s impact on cybersecurity, including industrialized high-impact attacks and manipulation of data (0:00)
2. AI collaboration with humans, focusing on benefits and risks (4:12)
3. AI adoption in organizations, cybersecurity risks, and board involvement (11:09)
4. AI governance, risk management, and ethics (15:42)

Standout Quotes:
1. “Cyber leaders have to present security issues in terms that board level executives can understand and act on, and that's certainly the case when it comes to AI. So that means reporting AI risk in financial, economic, operational terms, not just in technical terms. If you report in technical terms, you will lose the room exceptionally quickly. It also involves aligning AI risk management with business needs by, you know, identifying how AI risk management and resilience are going to help to meet business objectives. And if you can do that, as opposed to losing the room, you will certainly win the room.” - Steve Durbin
2. “AI, of course, does provide some solution to that, in that if you can provide it with enough examples of what good looks like and what bad looks like in terms of data integrity, then the systems can, to an extent, differentiate between what is correct and what is incorrect. But the fact remains that data manipulation, changing data, whether that be in software code, whether it be in information that we're storing, all of those things remain a major concern.” - Steve Durbin
3. “We can’t turn the clock back. So at the ISF, you know, our goal is to try to help organizations figure out how to use this technology wisely. So we're going to be talking about ways humans and AI complement each other, such as collaboration, automation, problem solving, monitoring, oversight, all of those sorts of areas. And I think for these to work, and for us to work effectively with AI, we need to start by recognizing the strengths both we as people and also AI models can bring to the table.” - Steve Durbin
4. “I also think that boards really need to think through the impact of what they're doing with AI on the workforce, and indeed, on other stakeholders. And last, but certainly not least, what the governance implications of the use of AI might look like. And so therefore, what new policies and controls need to be implemented.” - Steve Durbin
5. “We need to be paying specific attention to things like ethical risk assessment, working to detect and mitigate bias, ensure that there is, of course, informed consent when somebody interacts with AI. And we do need, I think, to be particularly mindful about bias, you know? Bias detection, bias mitigation. Those are fundamental, because we could end up making all sorts of decisions or having the machines make decisions that we didn't really want. So there's always going to be in that area, I think, in particular, a role for human oversight of AI activities.” - Steve Durbin

Mentioned in this episode:
ISF Analyst Insight Podcast
Apr 23, 2024 • 23min

S25 Ep4: Brian Lord - AI, Mis-and Disinformation in Election Fraud and Education

This is the second of a two-part conversation between Steve and Brian Lord, who is currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as the Deputy Director of a UK Government Agency governing the organization's Cyber and Intelligence Operations. Today, Steve and Brian discuss the proliferation of mis- and disinformation online, the potential security threats posed by AI, and the need for educating children in cyber awareness from a young age.

Key Takeaways:
1. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.
2. AI’s increasing ability to create fabricated images poses a particular threat to youth and other vulnerable users.

Tune in to hear more about:
1. Brian gives his assessment of cybersecurity threats during election years. (16:04)
2. Exploitation of vulnerable users remains a major concern in the digital space, requiring awareness, innovative countermeasures, and regulation. (31:0)

Standout Quotes:
1. “I think when we look at AI, we need to recognize it is a potentially long term larger threat to our institutions, our critical mass and infrastructure, and we need to put in countermeasures to be able to do that. But we also need to recognize that the most immediate impact on that is around what we call high harms, if you like. And I think that was one of the reasons the UK — over a torturously long period of time — introduced the Online Harms Bill to be able to counter some of those issues. So we need to get AI in perspective. It is a threat. Of course it is a threat. But when one looks at AI applied in the cybersecurity context, you know, automatic intelligence developing hacking techniques, bear in mind, AI is available to both sides. It's not just available to the attackers, it's available to the defenders. So what we are simply going to do is see that same kind of thing that we have in the more human-based countering of the cybersecurity threat in an AI space.” - Brian Lord
2. “The problem we have now — now, one can counter that by the education of children, keeping them aware, and so on and so forth — the problem you have now is the ability, because of the availability of imagery online and AI's ability to create imagery, one can create an entirely fabricated image of a vulnerable target and say, this is you. Even though it isn’t … when you're looking at the most vulnerable in our society, that's a very, very difficult thing to counter, because it doesn't matter whether it's real to whoever sees it, or the fear from the most vulnerable people who see it, they will believe that it is real. And we've seen that.” - Brian Lord

Mentioned in this episode:
ISF Analyst Insight Podcast
Apr 16, 2024 • 17min

S25 Ep3: Brian Lord - Lost in Regulation: Bridging the cyber security gap for SMEs

This episode is the first of two conversations between Steve and Brian Lord, who is currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as the Deputy Director of a UK Government Agency governing the organization's Cyber and Intelligence Operations. He brings his knowledge of both the public and private sector to bear in this wide-ranging conversation. Steve and Brian touch on the challenges small-midsize enterprises face in implementing cyber defenses, what effective cooperation between government and the private sector looks like, and the role insurance may play in cybersecurity.

Key Takeaways:
1. A widespread, societal approach involving both the public and private sectors is essential in order to address the increasingly complex risk landscape of cyber attacks.
2. At the public or governmental levels, there is an increasing need to bring affordable cyber security services to small and mid-sized businesses, because failing to do so puts those businesses and major supply chains at risk.
3. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.

Tune in to hear more about:
1. The National Cyber Security Centre is part of GCHQ, serving to set regulatory standards and safeguards, communicate novel threats, and uphold national security measures in the digital space. (5:42)
2. Steve and Brian discuss existing challenges of small organizations lacking knowledge and expertise to meet cybersecurity regulations, leading to high costs for external advice and testing. (7:40)

Standout Quotes:
1. “...If you buy in external expertise — because you have to do, because either you haven’t got the demand to employ your own, or if you did the cost of employment would be very hard — the cost of buying an external advisor becomes very high. And I think the only way that can be addressed without compromising the standards is, of course, to make more people develop more skills and more knowledge. And that, in a challenging way, is a long, long term problem. That is the biggest problem we have in the UK at the moment. And actually, in a lot of countries. The cost of implementing cybersecurity can quite often outweigh, as it may be seen within a smaller business context, the benefit.” - Brian Lord
2. “I think there probably needs to be a lot more tangible support, I think, for the small to medium enterprises. But that can only come out of collaboration with the cybersecurity industry and with government about, how do you make sure that some of the fees around that are capped?” - Brian Lord

Mentioned in this episode:
ISF Analyst Insight Podcast
Apr 9, 2024 • 22min

S25 Ep2: Eric Siegel - The AI Playbook: Leveraging machine learning to grow your business

AI expert Eric Siegel discusses leveraging machine learning in business, focusing on types of AI and quality data inputs. He highlights the importance of precise project scopes, differences between generative and predictive AI, and the potential impact on companies' bottom lines.
Apr 2, 2024 • 26min

S25 Ep1: Cyber Warfare and Democracy in the Age of Artificial Intelligence

Today, Steve is speaking with Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies and Dstl Ethics Fellow at the Alan Turing Institute. Mariarosaria brings her expertise as a philosopher to bear in this discussion of why and how we must develop agreed-upon ethical principles and governance for cyber warfare.

Key Takeaways:
1. As cyber attacks increase, international humanitarian law and rules of war require a conceptual shift.
2. To maintain competitive advantage while upholding their values, liberal democracies need to move swiftly to develop and integrate regulation of emerging digital technologies and AI.
3. Many new technologies have a direct and harmful impact on the environment, so it’s imperative that any ethical AI be developed sustainably.

Tune in to hear more about:
1. The digital revolution affects how we do things, how we think about our environment, and how we interact with the environment. (1:10)
2. Regardless of how individual countries may wield new digital capabilities, liberal democracies as such must endeavor tirelessly to develop digital systems and AI that are well considered, ethically sound, and non-discriminatory. (5:20)
3. New digital capabilities may produce CO2 and other environmental impacts that will need to be recognized and accounted for as new technologies are being rolled out. (10:03)

Standout Quotes:
1. “The way in which international humanitarian law works or just war theory works is that we tell you what kind of force, when, and how you can use it to regulate the conduct of states in war. Now, fast forward to 2007, cyber attacks against Estonia, and you have a different kind of war, where you have an aggressive behavior, but we're not using force anymore. How do you regulate this new phenomenon, if so far, we have regulated war by regulating force, but now this new type of war is not a force in itself or does not imply the use of force? So this is a conceptual shift. A concept which is not radically changing, but has acquired or identifies a new phenomenon which is new compared to what we used to do before.” - Mariarosaria Taddeo
2. “I joke with my students when they come up with this same objection, I say, well, you know, we didn't stop putting alarms and locking our doors because sooner or later, somebody will break into the house. It's the same principle. The risk is there, it’s present. They’re gonna do things faster in a more dangerous way, but if we give up to the regulations, then we might as well surrender immediately, right?” - Mariarosaria Taddeo
3. “LLMs, for example, large language models, ChatGPT for example, they consume a lot of the resources of our environment. We did with some of the students here a few years ago a study where we show that training just one round of GPT-3 would produce as much CO2 as 49 cars in the US for a year. It’s a huge toll on the environment. So ethical AI means also sustainably developed.” - Mariarosaria Taddeo

Mentioned in this episode:
ISF Analyst Insight Podcast
Mar 26, 2024 • 17min

S24 Ep12: Cyber Exercises: Fail to prepare, prepare to fail

A repeat of one of our top episodes from 2023: October is Cyber Awareness Month, and we’re marking the occasion with a series of three episodes featuring Steve in conversation with ISF’s Regional Director for Europe, the Middle East and Africa, Dan Norman. Today, Steve and Dan discuss the importance of cyber resilience and how organisations can prepare for cyber attacks.

Mentioned in this episode:
ISF Analyst Insight Podcast
Mar 19, 2024 • 21min

S24 Ep11: Tali Sharot - Changing Behaviours: Why facts alone don't work

Neuroscientist Tali Sharot discusses optimism bias and its implications for risk assessment, how present bias affects decision-making, and why pairing data with anecdotes enhances communication. She also explores how emotion influences memory and why storytelling is effective in persuasion.
Mar 12, 2024 • 20min

S24 Ep10: Nina Schick - The Future of Information Integrity

This week, we’ve got another fascinating conversation recorded at the 2023 ISF Congress in Rotterdam. This time, Steve speaks with generative AI expert Nina Schick. Nina and Steve discuss how AI, along with other technological trends that are evolving at exponential speed, is shaping both geopolitics and individual lives.

Key Takeaways:
1. Generative AI is reshaping the geopolitical landscape.
2. We must educate ourselves and others about the implications of quickly evolving tech in global affairs.
3. Industries are struggling to regulate exponential technology.
4. There are more questions than answers as we look to the future in tech.

Tune in to hear more about:
1. AI’s geopolitical impacts (3:13)
2. Learning about how tech is impacting global affairs (9:53)
3. Regulation challenges (11:55)
4. Nina Schick’s take on the economics of generative AI (16:27)

Standout Quotes:
1. “As the oil economies of Saudi Arabia and UAE seek to diversify away from oil and energy, one of the things that they're doing is trying to become very high tech economies, where artificial intelligence is absolutely leading the way with these strategies. And there's so much money going to be invested in the Gulf in the coming decade when it comes to artificial intelligence. Again, even though these are relatively small countries, they are perhaps going to punch above their weight when it comes to power that is harnessed by artificial intelligence. And that means in a military sense, in an economic sense, and ultimately, you know, a geopolitical sense.” - Nina Schick
2. “I think the harder thing also are the non technical solutions — you know, education, literacy — how do people get upskilled in terms of understanding the new capabilities of artificial intelligence and how they will be deployed in their respective domains? So I think it's not only that there are technical solutions, there are also societal and learning solutions which perhaps we're going to have to get on top of very, very quickly.” - Nina Schick
3. “Regulators have to work with industry. There's no way they can do this themselves. And already in many of the kind of more promising areas with dealing with some of the challenges, such as information integrity, when you come to questions like provenance, you see industry championing the way and supporting regulators.” - Nina Schick
4. “Will there be economic value associated with AI? I think, absolutely. But the question is, how's that going to be distributed? And is it going to be monopolized? So that's going to happen with regards to the tech giants, who I think will become very, very, very powerful. I think this will continue to be a priority of utmost importance to governments. I think this challenge, or this kind of race between China and the US with regards to artificial intelligence will continue to play out. I think the Middle East is going to become a strong contender. And I suspect Europe might fall behind a little bit … And actually, I think that this technology is also going to be in the hands of millions of people.” - Nina Schick

Mentioned in this episode:
Threat Horizon 2024: the Disintegration of Trust
ISF Analyst Insight Podcast
