It's been a year since ChatGPT burst onto the scene, giving many of us a sense of the power and potential that LLMs hold to revolutionize the global economy. But generative AI also brings inherent risks that need to be mitigated.
For those working in AI, the task at hand is monumental: to chart a safe and ethical course for the deployment and use of artificial intelligence. This isn't just a challenge; it's potentially one of the most important collective efforts of this decade. The stakes are high, involving not just technical and business considerations, but ethical and societal ones as well.
How do we ensure that AI systems are designed responsibly? How do we mitigate risks such as bias, privacy violations, and the potential for misuse? How do we assemble the right multidisciplinary mindset and expertise for addressing AI safety?
Reid Blackman, Ph.D., is the author of “Ethical Machines” (Harvard Business Review Press), creator and host of the podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on its federal AI regulations, was a founding member of EY’s AI Advisory Board, and served as a Senior Advisor to the Deloitte AI Institute. His work, which includes advising and speaking to organizations such as AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes. His written work appears in Harvard Business Review and The New York Times. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill.
In the episode, Reid and Richie discuss the dominant concerns in AI ethics, from biased AI and privacy violations to the challenges introduced by generative AI, such as manipulative agents and IP issues. They delve into the existential threats posed by AI, including shifts in the job market and disinformation. Reid also shares examples of AI projects scrapped over ethical failings, discusses the difficulty of mitigating bias, offers preemptive measures for building ethical AI, and much more.
Links mentioned in the show: