
Shayan Mohanty

Head of AI Research at Thoughtworks and former CEO and co-founder of Watchful, offering expertise in AI research and development.

Top 3 podcasts with Shayan Mohanty

Ranked by the Snipd community
61 snips
Feb 6, 2025 • 33min

Decoding DeepSeek

In this insightful discussion, Prasanna Pendse, Global Director of AI Strategy, and Shayan Mohanty, Head of AI Research, share their expertise on the revolutionary AI start-up DeepSeek. They dive into how DeepSeek’s R1 reasoning model differentiates itself from giants like OpenAI. The duo tackles misconceptions about AI training costs, the impact of hardware limitations, and innovative strategies to optimize performance. They also explore the implications of these developments on the tech industry’s economic landscape and the complexities surrounding model licensing.
14 snips
May 30, 2022 • 51min

The Fallacy of "Ground Truth" with Shayan Mohanty - #576

Today we continue our Data-centric AI series joined by Shayan Mohanty, CEO at Watchful. In our conversation, Shayan focuses on the data labeling aspect of the machine learning process and the ways a data-centric approach can add value and reduce cost by multiple orders of magnitude. Shayan helps us define "data-centric" while discussing the main challenges organizations face with labeling, how these problems are currently being solved, and how techniques like active learning and weak supervision can be used to label more effectively. We also explore the idea of machine teaching, which focuses on techniques that make the model training process more efficient, and what organizations need in order to successfully make the mindset shift to DCAI. The complete show notes for this episode can be found at twimlai.com/go/576
11 snips
Jan 23, 2025 • 36min

AI testing, benchmarks and evals

Join Shayan Mohanty, Head of AI Research at Thoughtworks, and John Singleton, Program Manager at the AI Lab, as they dive into the complexities of generative AI. They discuss the vital role of evals, benchmarks, and guardrails in ensuring AI reliability. The duo outlines the differences between testing and evaluations, highlighting their significance for businesses. Additionally, they explore mechanistic interpretability and the need for robust frameworks to enhance trust in AI applications. This conversation is essential for anyone navigating the evolving AI landscape.