
PurePerformance: How to test, optimize, and reduce hallucinations of AIs with Thomas Natschlaeger
Oct 13, 2025
Thomas Natschlaeger, Principal Data Scientist at Dynatrace, brings nearly 30 years of AI experience to the table. He shares fascinating insights into the historical milestones that made AI mainstream, emphasizing the importance of hardware and recent breakthroughs. The discussion dives into how LLMs can validate outputs, reducing hallucinations while boosting accuracy. Thomas also explores the trend towards specialized agents and the rethinking of roles within AI-native systems, making for a thought-provoking conversation on the future of technology.
AI Snips
Three Decades Building Neural Nets
- Thomas has been building neural networks since the early days and wrote his first one in C about three decades ago.
- His neuroscience background helped form hypotheses and testing approaches for modern networks.
Why AI Suddenly Took Off
- Major AI leaps required hardware, algorithms, and community maturity coming together over decades.
- The 2017 "Attention Is All You Need" transformer paper triggered rapid practical adoption and the rise of readily available AI APIs.
Different Tests For Different Outputs
- Chatbots and code/query generation are different problems with different testing needs.
- Test the entire retrieval-augmented pipeline, not just the LLM, using integration and unit tests.
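The idea of testing the whole retrieval-augmented pipeline rather than the LLM in isolation can be sketched as below. This is a minimal illustration, not anything from the episode: the retriever, the LLM stand-in, and the `rag_pipeline` function are all hypothetical stubs, chosen so the unit and integration tests can run without a real model.

```python
# Hedged sketch: a toy RAG pipeline with a stubbed retriever and LLM,
# so the whole chain can be exercised by unit and integration tests.

def retrieve(query, corpus):
    """Toy retriever: return documents sharing at least one word with the query."""
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def generate(query, context):
    """Stub standing in for an LLM: answers only from retrieved context."""
    if not context:
        return "I don't know."
    return "Answer based on: " + "; ".join(context)

def rag_pipeline(query, corpus):
    """Integration point: retrieval feeds generation."""
    return generate(query, retrieve(query, corpus))

corpus = ["Dynatrace monitors distributed traces", "Transformers use attention"]

# Unit test: the retriever alone surfaces the relevant document.
assert retrieve("attention models", corpus) == ["Transformers use attention"]

# Integration test: the full pipeline grounds its answer in retrieved context.
assert "Transformers use attention" in rag_pipeline("attention models", corpus)

# Negative test: with no matching context, the stub refuses rather than hallucinates.
assert rag_pipeline("quantum cooking", corpus) == "I don't know."
```

The key design point the snip makes is visible here: the unit test pins down the retriever's behavior on its own, while the integration and negative tests check the end-to-end contract, including that unanswerable queries fail safely.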
