The Aboard Podcast

Arushi Saxena: Can We Trust AI?

Nov 11, 2025
Arushi Saxena, a trust and safety expert with experience in big tech and the public sector, joins the conversation to unpack AI safety. She warns against entering personal information into LLMs due to risks of leaks and misuse. The discussion includes why LLMs can provide incorrect information and the implications of uploading sensitive documents. Arushi also talks about the necessity of trust in AI, the importance of age restrictions for users, and best practices companies should adopt to ensure user safety and data control.
ADVICE

Don't Share Sensitive PII With LLMs

  • Avoid inputting personally identifiable information into LLMs unless absolutely necessary.
  • Treat anything you wouldn't want leaked as off-limits to chat sessions.
ANECDOTE

Geometry Chat Gone Wrong

  • Paul used ChatGPT to help his daughter with geometry, and the model confidently gave a wrong answer complete with a full proof.
  • The incident showed how kids expect technological certainty and get confused by confident but incorrect AI output.
INSIGHT

Over-Reliance Trumps Technical Faults

  • Hallucinations and data leaks are active areas of research and are improving, but they remain unsolved.
  • The bigger human risk is over-reliance: people start trusting outputs too much and stop questioning them.