Christian Focacci, founder and CEO of Threat.Digital, dives into the evolving role of AI in corporate risk management. He discusses whether AI is a magical solution or merely a tool, emphasizing the necessity of human judgment in AI applications. Focacci highlights misunderstandings about large language models and the critical need for corporate AI governance. The conversation explores how AI improves due diligence processes while addressing challenges like data privacy and the balance between innovation and accountability in compliance.
ADVICE
Use AI Only When Necessary
Don't use AI unless it is the right tool for the job. Always ensure AI results are explainable and verifiable, with human decision-making kept in the loop.
INSIGHT
Static Nature of Large Language Models
Large language models (LLMs) are static machine learning models trained on vast amounts of text data. They do not learn from interactions after training; their parameters remain fixed.
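A minimal sketch of this point, assuming the Hugging Face `transformers` and `torch` packages are installed ("gpt2" is just a small stand-in for any causal LLM): fingerprinting the weights before and after generation shows that producing text never updates the model's parameters.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: the model is used, not trained

def param_checksum(m):
    # Sum of all parameter values: a cheap fingerprint of the weights.
    return sum(p.sum().item() for p in m.parameters())

before = param_checksum(model)

inputs = tokenizer("The compliance officer reviewed", return_tensors="pt")
with torch.no_grad():  # no gradients, no updates, during generation
    model.generate(**inputs, max_new_tokens=20)

after = param_checksum(model)
assert before == after  # generating text left every weight untouched
```

Only fine-tuning or retraining changes the weights; a chat session never does. Any apparent "memory" comes from the conversation text being fed back into the context window.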
INSIGHT
AI Governance Causes Delays
Many companies impose extensive AI vendor governance that prolongs sales cycles. These controls often treat AI as categorically different from other software, causing delays disproportionate to the actual risks.
Is AI a magic bullet - or just another tool in the compliance toolkit?
What really happens when you let algorithms near your risk decisions?
In this episode of Corruption, Crime and Compliance, Christian Focacci, founder and CEO of Threat.Digital, returns for a thoughtful and highly practical conversation about the state of artificial intelligence in compliance and third-party risk management. Christian’s platform is at the forefront of using large language models and real-time data to transform how companies identify and manage risk - without losing sight of the human judgment that still needs to guide every decision. He and Michael explore what's changed in the AI landscape over the past year, what’s misunderstood about the technology, and how compliance teams can strike the right balance between innovation and accountability.
You’ll hear them discuss:
Why Christian believes you shouldn’t use AI unless it’s truly the right tool for the job, and how this philosophy shapes how Threat.Digital builds and deploys its systems
What large language models actually are, how they function under the hood, and why most people fundamentally misunderstand how they learn and process information
The growing demand for corporate AI governance, how some risk committees are creating unnecessary delays, and why many internal processes are still focused on the wrong questions
How Threat.Digital uses AI to reduce noise in due diligence, replacing bloated, unfiltered search results with clear, high-quality summaries supported by verifiable sources
Why the real power of AI isn’t about replacing humans, but about expanding what can be reviewed - moving from 10 data points to 10,000, while helping compliance professionals focus only on what matters
The future of due diligence: chaining AI tasks to build multi-layered investigations that trace ownership, pull third-party records, and surface hidden risks in real time
How AI is revolutionizing name screening and sanctions checks by eliminating irrelevant fuzzy matches, freeing teams from chasing meaningless alerts and allowing them to act on true risks with confidence (a toy sketch of the fuzzy-match noise problem follows below)
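As a rough illustration of that noise problem (not Threat.Digital's actual method; the names, threshold, and scoring are invented for the example), here is naive fuzzy name matching using Python's standard-library difflib. A plain similarity ratio flags every name that shares enough letters with a watchlist entry, whether or not it refers to the same person.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Character-level similarity ratio in [0, 1], case-insensitive.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

watchlist_name = "Mohammed Al-Rashid"
candidates = [
    "Mohamed Alrashid",     # spelling variant, plausibly the same person
    "Mohammed Al-Rashidi",  # a different surname
    "Muhammad Rashid",      # a very common name, likely unrelated
]

THRESHOLD = 0.75  # a typical naive cutoff, assumed for illustration
for name in candidates:
    score = similarity(watchlist_name, name)
    flagged = "ALERT" if score >= THRESHOLD else "pass"
    print(f"{name:22s} score={score:.2f} -> {flagged}")
```

On string similarity alone, all three candidates can clear a naive cutoff, so an analyst must chase each alert by hand. Context such as date of birth, nationality, or associated entities is what separates a true hit from noise, and closing that gap is the improvement the episode describes.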