
Philosophical Disquisitions

103 - GPT: How worried should we be?

Mar 23, 2023
Olle Häggström, a professor of mathematical statistics, discusses GPT: whether it is genuinely intelligent, the risks it poses, and what he regards as the reckless development of the technology. He and the host explore the lack of transparency in GPT models and touch on potential harms and safety concerns. The podcast also delves into the appropriate pace of AI development, the parallel between nuclear weapons and AI, and worries about timelines and our readiness for advanced AI.

Podcast summary created with Snipd AI

Quick takeaways

  • GPT and similar large language models have become strikingly human-like and powerful at generating coherent text, but they still have real limitations and regularly make mistakes.
  • The AI alignment challenge is the task of aligning the goals and drives of artificial intelligence with human values; the current approach of scaling models up and relying on trial and error for safety training raises concerns about whether that challenge is being met.

Deep dives

GPT and LLMs: An Overview

GPT is a large language model based on a neural network: given a piece of text, it suggests plausible continuations. It has been developed over multiple iterations, with GPT-4 the latest release at the time of recording. Successive models have become markedly more powerful and human-like in generating coherent, plausible text. They still have limitations, however, and can make mistakes, as in well-known examples of silly or factually incorrect answers. The models lack a deep understanding of the content they produce and can be manipulated into unintended behaviour, for instance through carefully crafted prompts. Other companies have developed rival large language models, but GPT is currently the most widely used.
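To make the "plausible continuation" idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library and using GPT-2 (an older, openly released model in the GPT family; GPT-4's weights are not public) as a stand-in. The prompt text is an arbitrary illustration:

```python
# Minimal sketch: sample several plausible continuations of a prompt
# from GPT-2, an openly available predecessor of GPT-4.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The real worry about large language models is"
outputs = generator(
    prompt,
    max_new_tokens=25,       # length of each continuation
    do_sample=True,          # sample instead of always taking the likeliest token
    num_return_sequences=3,  # produce three different continuations
)

for out in outputs:
    print(out["generated_text"])
```

Sampling rather than always picking the single likeliest next token is what allows several different continuations to count as "plausible" at once, which is the core behaviour the episode attributes to GPT.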
