In this episode, Oliver Cronk is joined by colleagues David Rees, Hélène Sauvé, Ivan Mladjenovic and Emma Pearce. Together, they delve into the practical applications and limitations of agentic AI and its implications for enterprise AI deployments.
The team shares insights from the ‘Infer’ research and development projects, through which Scott Logic produced and open-sourced InferLLM (a local, personalised AI agent) and InferESG (which uses AI agents to identify greenwashing in Environmental, Social and Governance reports).
With real-world examples and expert perspectives, the panel provides a nuanced view of whether fully autonomous agents are hype or reality in 2025. They discuss the balance between human oversight and automation, and emphasise the importance of transparency and traceability in AI systems. They also examine the ethical implications of self-building agents and the challenges of ensuring reliable AI outputs.
Have a listen to gain a deeper understanding of the evolving landscape of agentic AI and its potential impact on various sectors.
Useful links for this episode
- InferLLM on GitHub – Open-sourced by Scott Logic
- InferESG on GitHub – Open-sourced by Scott Logic
- InferESG: Augmenting ESG Analysis with Generative AI – David Rees, Scott Logic
- InferESG: Finding the Right Architecture for AI-Powered ESG Analysis – David Rees, Scott Logic
- InferESG: Harnessing agentic AI for due diligence – Scott Logic case study
- Beyond the Hype: Will we ever be able to secure GenAI? – Scott Logic
- Beyond the Hype: Is architecture for AI even necessary? – Scott Logic
- Draft classification for different types of Enterprise AI deployment – Oliver Cronk, Scott Logic