Spencer Greenberg speaks with Jim Rutt about the power, progress, and applications of large language models. Large language models are one branch on the tree of new deep learning neural net models. They use so-called transformer technologies to look for both short-range and long-range correlations between words.
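The "short-range and long-range correlations" mentioned above come from the attention mechanism inside a transformer. The toy sketch below is purely illustrative and not from the episode: real models use learned projection matrices, many attention heads, and vectors with thousands of dimensions, but the core idea is that every word's vector is compared against every other word's vector, regardless of distance.

```python
# Toy sketch of scaled dot-product attention, the core of a transformer
# layer. Illustrative only: real models use learned weight matrices,
# multiple heads, and high-dimensional embeddings.
import math

def attention(queries, keys, values):
    """For each query vector, mix the value vectors, weighting each by
    how well its key matches the query. Nearby and distant positions are
    treated identically, which is how transformers capture both
    short-range and long-range correlations between words."""
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns raw scores into attention weights summing to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is a weighted average of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three toy 2-d token vectors attending to one another
# (in self-attention, queries, keys, and values all come
# from the same token representations).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)
```

Because each output is a convex combination of all the value vectors, a word at position 1 can draw on a word at position 1,000 just as easily as on its neighbor.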
Read the full transcript here.
What are large language models (LLMs) actually doing when they churn out text? Are they sentient? Is scale the only difference among the various GPT models? Google seemed to be the clear frontrunner in the AI space for many years; so how did it fail to win the race to LLMs? And why are competing companies having such a hard time catching up to OpenAI's LLM tech? What are the implications of open-sourcing LLM code, models, and corpora? How concerned should we be about bad actors using open source LLM tools? What are some possible strategies for combating the coming onslaught of AI-generated spam and misinformation? What are the main categories of risks associated with AIs? What is "deep" peace? What is "the meaning crisis"?
Jim Rutt is the host of the Jim Rutt Show podcast, past president and co-founder of the MIT Free Speech Alliance, executive producer of the film "An Initiation to Game B", and the creator of Network Wars, the popular mobile game. Previously, he was chairman of the Santa Fe Institute, CEO of Network Solutions, CTO of Thomson-Reuters, and chairman of the computer chip design software company Analog Design Automation, among various business and not-for-profit roles. He is working on a book about Game B and having a great time exploring the profits and perils of Large Language Models.