Send Everyday AI and Jordan a text message
Why are AI models so biased? Whether it's ChatGPT or an AI image generator, these models often carry biases and tendencies picked up from their training data. Nick Schmidt, Founder & CTO of SolasAI & BLDS, LLC, joins us to discuss how to understand and fix biased AI.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Nick and Jordan questions about AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Timestamps:
[00:01:20] Daily AI news
[00:04:00] About Nick and Solas AI
[00:07:14] Algorithm misuse can lead to discrimination
[00:11:54] 3-step burden shifting process to address discrimination
[00:14:18] Internet usage leads to biased data collection
[00:17:30] AI bias, accessibility, and user control insights
[00:22:59] Algorithm fairness through regulations
[00:26:16] Algorithmic decisioning and human biases
[00:27:32] How to address biases in AI models
Topics Covered in This Episode:
1. Prevalence of Bias in AI Models
2. Detection and Mitigation of Bias in Algorithms
3. Practical Solutions for Addressing Bias in AI
Keywords:
AI bias, discrimination, image generators, language models, input data, burden shifting process, biased information, societal biases, fairness, exclusion, collective punishment, biased AI, practical advice, best practices, everyday users, legal framework, AI news, smart devices, NVIDIA, animated films, detection, mitigation, discriminatory outcomes, generative AI, model development, algorithmic decision-making, dynamic models, reinforcement, algorithmic fairness, Solas AI, newsletter, daily AI
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/