The word ‘hallucinate’ has never been more present in professional settings. Generative AI has taught us that we need to verify what AI tools tell us if we want accurate and reliable information. We’ve also seen the impact of bias in AI systems, and why trusting outputs at face value can be a dangerous game, even for the largest tech organizations in the world. It seems we could be both very close to and very far from being able to fully trust AI at work. To find out what trustworthy AI really is, and what causes us to lose trust in an AI system, we need to hear from someone who’s been at the forefront of both the policy and the technology around the issue.
Alexandra Ebert is an expert in data privacy and responsible AI who works on public policy issues in the emerging fields of synthetic data and ethical AI. She is on the Forbes ‘30 Under 30’ list and has an upcoming course on DataCamp. In addition to her role as Chief Trust Officer at MOSTLY AI, Alexandra chairs the IEEE Synthetic Data IC expert group and hosts the Data Democratization podcast.
In the episode, Richie and Alexandra explore the importance of trust in AI; what causes us to lose trust in AI systems and the impacts of that loss; AI regulation and adoption; AI decision accuracy and fairness; privacy concerns in AI; handling sensitive data in AI systems; the benefits of synthetic data; explainability and transparency in AI; skills for using AI in a trustworthy fashion; and much more.
Links Mentioned in the Show: