
The Sandra Kublik Podcast
Steering LLM Safety With Seraphina Goldfarb-Tarrant
In this episode, I explore the depths of AI safety and bias with Seraphina Goldfarb-Tarrant, Head of AI Safety at Cohere. We discuss mitigating AI risks, the impact of large language models on society, and the future of responsible AI development. Seraphina shares insights on participatory design, combating misinformation, and proactive steps toward safety for businesses deploying LLMs.
We also cover Seraphina's academic journey, from her transition out of neuroscience to studying ancient civilizations, to becoming a professional sailor, her connection with nature and Zen practice, and her perspective on language, mind, and AI.
Seraphina's Enterprise Guide on AI Safety: https://txt.cohere.com/the-enterprise-guide-to-ai-safety/
Watch the episodes on YouTube: https://www.youtube.com/playlist?list=PLhBwB1lBTwJWR19WkJw87EMh1JU9O7Uff

I'm a co-author of a book on GPT-3 and the OpenAI API (Packt, 2023): https://www.packtpub.com/product/gpt-3/9781805125228

You can keep in touch with me via socials: IG, TikTok, X, LinkedIn @itsSandraKublik

Sign up for my Substack newsletter for the personal scoop on the topics covered today: https://substack.com/@itssandrakublik

Website: www.sandrakublik.com

For brand collabs, reach out to me at team@sandrakublik.com

Music: Acid Jazz by Kevin MacLeod, licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/