These days I do find myself using ChatGPT for informational searches, but I find it's most useful when I can quickly verify the answer. What will be really interesting is when the models themselves have sufficient internal attribution capability to provide reasonable links to where the stuff's coming from. We believe that's coming; we just don't know when. If you only listen to our podcast, you're missing out on a lot of our content. To sign up for the One Helpful Idea newsletter and start receiving bite-sized ideas once a week, visit clearerthinkingpodcast.com/newsletter.
Read the full transcript here.
What are large language models (LLMs) actually doing when they churn out text? Are they sentient? Is scale the only difference among the various GPT models? Google has seemingly been the clear frontrunner in the AI space for many years; so how did they fail to win the race to LLMs? And why are other competing companies having such a hard time catching their LLM tech up to OpenAI's? What are the implications of open-sourcing LLM code, models, and corpora? How concerned should we be about bad actors using open source LLM tools? What are some possible strategies for combating the coming onslaught of AI-generated spam and misinformation? What are the main categories of risks associated with AIs? What is "deep" peace? What is "the meaning crisis"?
Jim Rutt is the host of the Jim Rutt Show podcast, past president and co-founder of the MIT Free Speech Alliance, executive producer of the film "An Initiation to Game B", and the creator of Network Wars, the popular mobile game. Previously, he has been chairman of the Santa Fe Institute, CEO of Network Solutions, CTO of Thomson Reuters, and chairman of the computer chip design software company Analog Design Automation, among various other business and not-for-profit roles. He is working on a book about Game B and having a great time exploring the promises and perils of large language models.