AI is reaching the point where there is wild speculation, potential regulation, and strong opinions about whether it will destroy humanity. So how does anyone make sense of all of it?
SHOW: 770
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
SHOW NOTES:
ARE WE UNDER- OR OVER-REACTING TO THE POSSIBILITIES OF AI?
- Oppenheimer put the chance of the atomic bomb destroying the world at “near zero”
- OpenAI researchers have put the chance of AI-driven human destruction at 10-20%
WHAT ARE THE OPEN, REGULATORY, AND STRUCTURAL GUARDRAILS OF AI?
- What is good or bad about AI?
- Should societal concerns be considered? By whom?
- Should environmental concerns be considered? By whom?
FEEDBACK?