

AI and Safety: How Responsible Tech Leaders Build Trustworthy Systems (National Safety Month Special)
Jun 26, 2025
Silvio Savarese, Executive Vice President and Chief Scientist at Salesforce, shares insights into building trustworthy AI systems. He and the host discuss designing AI with safety and human oversight to protect users. The conversation covers the critical need for data privacy and bias mitigation in predictive models, and the balance between the speed of innovation and its long-term impact on trust and compliance. The episode also emphasizes the necessity of transparency in navigating AI ethics, particularly in high-stakes decision-making.
Key Principles of Responsible AI
- Responsible AI means building AI that is safe, accurate, and compliant for users.
- Key principles include accuracy, safety from bias and toxicity, privacy, transparency, empowerment, and sustainability.
Enterprise AI Requires Specialized Models
- Enterprise AI differs from consumer AI by focusing on specialized models for specific domains with controlled data.
- This approach reduces hallucinations and biases, and it preserves privacy because customer data is never used for training.
AI Values Data Sensitivity Dynamically
- AI scans data wherever it lives, without proxies or agents, so the scan cannot be bypassed.
- It classifies data by type and estimates its dollar value using market signals such as dark-web pricing.