Trust is the foundation of any relationship, whether it's between friends or in business. But what happens when the entity you're asked to trust isn't human, but AI? How do you ensure that the AI systems you're developing are not only effective but also trustworthy? In a world where AI is increasingly making decisions that impact our lives, how can we distinguish between systems that genuinely serve our interests and those that might exploit our data?
Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of more than a dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
In the episode, Richie and Bruce explore the definition of trust, the difference between trust and trustworthiness, how AI mimics social trust, AI and deception, the need for public non-profit AI to counterbalance corporate AI, monopolies in tech, understanding the applications and potential consequences of AI misuse, AI regulation, the positive potential of AI, why AI is a political issue, and much more.