Many people talk about the meaning crisis as one of the afflictions of the status quo. If you don't have any deep meaning in your life, or any meaning at all, then you're more susceptible to being manipulated by advertising. There are still lots of people who believe in the two-world model, probably a majority in the United States; but amongst advanced thinkers, it's probably a much lower number.
What are large language models (LLMs) actually doing when they churn out text? Are they sentient? Is scale the only difference among the various GPT models? Google seemed to be the clear frontrunner in the AI space for many years; so how did it fail to win the race to LLMs? And why are competing companies having such a hard time catching up to OpenAI's LLM tech? What are the implications of open-sourcing LLM code, models, and corpora? How concerned should we be about bad actors using open-source LLM tools? What are some possible strategies for combating the coming onslaught of AI-generated spam and misinformation? What are the main categories of risks associated with AIs? What is "deep" peace? What is "the meaning crisis"?
Jim Rutt is the host of the Jim Rutt Show podcast, past president and co-founder of the MIT Free Speech Alliance, executive producer of the film "An Initiation to Game B", and the creator of Network Wars, the popular mobile game. Previously, he was chairman of the Santa Fe Institute, CEO of Network Solutions, CTO of Thomson Reuters, and chairman of the computer chip design software company Analog Design Automation, among various other business and not-for-profit roles. He is working on a book about Game B and having a great time exploring the promises and perils of large language models.