As we make these systems smarter and bigger, they seem to actually get easier to align. One thing that I found really entertaining is looking at the different ways that people get the systems to behave badly. There's just sort of an endless creativity in getting around these restraints. But it sounds like your view is that these exploits will actually get harder as the systems get more intelligent.
Read the full transcript here.
Many people who work on AI safety advocate for slowing the rate of development, but might there be any advantages in speeding up AI development? Which fields are likely to be impacted the most (or the least) by AI? As AIs begin to displace workers, how can workers make themselves more valuable? How likely is it that AI assistants will become better at defending against users who actively try to circumvent their guardrails? What effects would the open-sourcing of AI code, models, or training data likely have? How do actual or potential AI intelligence levels affect the AI safety calculus? Are there any good solutions to the problem that only ethically-minded people are likely to apply caution and restraint in AI development? What will a world with human-level AGI look like?
An accomplished entrepreneur, executive, and investor, Reid Hoffman has played an integral role in building many of today's leading consumer technology businesses including as the co-founder of LinkedIn. He is the host of the podcasts Masters of Scale and Possible. He is the co-author of five best-selling books: The Startup of You, The Alliance, Blitzscaling, Masters of Scale, and Impromptu.
Note from Reid: Possible [the podcast] is back this summer with a three-part miniseries called "AI and The Personal," which launches on June 21. Featured guests use AI, hardware, software, and their own creativity to better people's daily lives. Subscribe here to get the series: https://link.chtbl.com/thepossiblepodcast