Models can learn spurious correlations instead of the intended classification rule: a classifier trained to distinguish water birds from land birds may instead learn to detect the presence of water in the background. Neural networks often adopt such superficial strategies because surface-level cues are easier to learn than the intended concept. Countering this tendency requires datasets that are genuinely representative of the task and regularization techniques that discourage reliance on common shortcuts. More fundamentally, models lack inherent knowledge of the world, which underscores the importance of refining training methods to improve robustness.
A major challenge in applied AI is out-of-distribution (OOD) detection: identifying instances that do not belong to the distribution the classifier was trained on. OOD data is often referred to as "unseen" data because the model never encountered it during training.
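One common baseline for OOD detection (an illustrative sketch, not a method discussed in the episode) is to threshold the maximum softmax probability of a classifier: inputs the model classifies with low confidence are flagged as potentially out-of-distribution. The function names and threshold below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_ood(logits, threshold=0.7):
    # Flag inputs whose maximum softmax probability falls below a
    # confidence threshold; low confidence suggests the input may
    # lie outside the training distribution. Threshold is a
    # hypothetical value tuned on held-out data in practice.
    msp = softmax(logits).max(axis=-1)
    return msp < threshold

# One class dominates: looks in-distribution.
confident = np.array([[4.0, 0.1, 0.2]])
# Near-uniform logits: the model is unsure, flag as OOD.
ambiguous = np.array([[0.5, 0.4, 0.6]])

print(is_ood(confident))  # [False]
print(is_ood(ambiguous))  # [ True]
```

This baseline is simple but imperfect; neural networks can be overconfident on unseen inputs, which is part of why OOD detection remains an open research problem.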
Bayan Bruss is the VP of AI Foundations at Capital One, where he works with academic researchers to translate the latest research into solutions for fundamental problems in financial services. Bayan joins the show with Sean Falconer to talk about OOD detection, the importance of bringing AI research to real-world applications, and more.
Full Disclosure: This episode is sponsored by Capital One
Sean’s been an academic, startup founder, and Googler. He has published works covering a wide range of topics, from information visualization to quantum computing. Currently, Sean is Head of Marketing and Developer Relations at Skyflow and host of Partially Redacted, a podcast about privacy and security engineering. You can connect with Sean on Twitter @seanfalconer.
The post AI Research at Capital One with Bayan Bruss appeared first on Software Engineering Daily.