Published in 1957, 'Atlas Shrugged' is Ayn Rand's magnum opus and her longest novel. The story is set in a dystopian United States where increasingly burdensome laws and regulations strangle innovation and productivity. The plot follows Dagny Taggart, a railroad executive, and Hank Rearden, a steel magnate, as they struggle against the 'looters' who exploit their work. A mysterious figure named John Galt leads a strike of productive individuals, persuading them to abandon their companies and disappear. The novel culminates in Galt's three-hour radio speech explaining his philosophy of Objectivism, which emphasizes rational self-interest, individual rights, and the importance of the human mind. The book explores themes of capitalism, property rights, and the failures of governmental coercion, presenting a provocative vision of a society in collapse and of a new capitalist order founded on Galt's principles.
In 'The AI Does Not Hate You', Tom Chivers provides an in-depth look at the rationalist community, focusing on its concerns about superintelligence and the risks it may pose to humanity. Through interviews and analysis, Chivers examines the rationalists' perspectives on AI and their efforts to mitigate potential dangers. The book offers a balanced view, exploring both the ideas and the criticisms within this community.
In 'On Intelligence', Jeff Hawkins, writing with Sandra Blakeslee, outlines his memory-prediction framework theory of the brain. The theory posits that the brain is a hierarchical, predictive system that uses memory to make continuous predictions about future events. Hawkins argues that current approaches to artificial intelligence are flawed because they are not grounded in the fundamental principles of how the brain works. He explains how the neocortex, the seat of intelligence, operates through a hierarchical structure, making predictions based on associative memory. The book discusses the implications of this theory for neuroscience, for the development of intelligent machines, and for our understanding of human behavior and cognition.
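As a rough, toy illustration of the memory-prediction idea (a minimal sketch, not Hawkins' actual model: the class name and single-layer transition table here are invented, whereas the real theory involves a deep hierarchy of cortical regions):

```python
from collections import defaultdict

class ToySequenceMemory:
    """A single 'layer' that memorises transitions and predicts the next input."""

    def __init__(self):
        # context symbol -> counts of what followed it in past experience
        self.transitions = defaultdict(lambda: defaultdict(int))

    def predict(self, context):
        """Most frequently remembered successor of `context`, or None."""
        followers = self.transitions[context]
        return max(followers, key=followers.get) if followers else None

    def observe(self, context, actual):
        """Check the prediction against reality, then store the transition."""
        anticipated = self.predict(context) == actual
        self.transitions[context][actual] += 1
        return anticipated  # False = surprise, i.e. something new to learn

mem = ToySequenceMemory()
sequence = "abcabcabc"
for prev, nxt in zip(sequence, sequence[1:]):
    mem.observe(prev, nxt)

print(mem.predict("a"))  # after a few repetitions, 'a' reliably predicts 'b'
```

The point of the toy: prediction comes cheaply from stored experience, and a mismatch between prediction and input is the signal that something new needs to be learned.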
This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI.
Connor believes that AI alignment is philosophy with a deadline: we are on the precipice, and the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity, or intelligence explosion, is near. He says that AGI is like climate change but worse: even harder problems, an even shorter deadline, and even worse consequences for the future. These problems are hard, and nobody knows what to do about them.
00:00:00 Introduction to AI alignment and AGI fire alarm
00:15:16 Main Show Intro
00:18:38 Different schools of thought on AI safety
00:24:03 What is intelligence?
00:25:48 AI Alignment
00:27:39 Humans don't have a coherent utility function
00:28:13 Newcomb's paradox and advanced decision problems
00:34:01 Incentives and behavioural economics
00:37:19 Prisoner's dilemma (see the payoff sketch after the timestamps)
00:40:24 Ayn Rand and game theory in politics and business
00:44:04 Instrumental convergence and orthogonality thesis
00:46:14 Utility functions and the Stop button problem
00:55:24 AI corrigibility - self alignment
00:56:16 Decision theory and stability / wireheading / robust delegation
00:59:30 Stop button problem
01:00:40 Making the world a better place
01:03:43 Is intelligence a search problem?
01:04:39 Mesa optimisation / humans are misaligned AI
01:06:04 Inner vs outer alignment / faulty reward functions
01:07:31 Large corporations are intelligent and have no stop function
01:10:21 Dutch booking / what is rationality / decision theory (see the Dutch book sketch after the timestamps)
01:16:32 Understanding very powerful AIs
01:18:03 Kolmogorov complexity
01:19:52 GPT-3 - is it intelligent, are humans even intelligent?
01:28:40 Scaling hypothesis
01:29:30 Connor thought DL was dead in 2017
01:37:54 Why is GPT-3 as intelligent as a human
01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table
01:50:28 Is AI ethics related to AI alignment?
01:53:26 Interpretability
01:56:27 Regulation
01:57:54 Intelligence explosion
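For the prisoner's dilemma segment (00:37:19), here is a minimal sketch of the standard payoff matrix; the sentence numbers are one common textbook convention, not figures from the episode:

```python
# Payoffs as years in prison (lower is better), one standard convention:
# payoffs[(my_move, their_move)] = my sentence.
payoffs = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}

def best_response(their_move):
    """My sentence-minimising move, holding the other player's move fixed."""
    return min(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])

# Defection dominates: it is the best response whatever the other player does,
# even though mutual cooperation (1, 1) beats mutual defection (2, 2).
for their_move in ("cooperate", "defect"):
    print(f"if they {their_move}, I should {best_response(their_move)}")
```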
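And for the Dutch booking segment (01:10:21), a minimal sketch with made-up numbers showing why incoherent credences are exploitable: an agent whose probabilities for an event and its complement sum to more than 1 will accept a pair of bets that loses money on every outcome.

```python
# An agent with incoherent credences: P(rain) + P(no rain) > 1.
p = {"rain": 0.6, "no rain": 0.6}
stake = 100  # each bet pays out `stake` if its outcome occurs

# The agent treats credence * stake as a fair price, so the bookie sells
# it both bets. Premiums collected exceed the single payout that is owed.
premiums = sum(credence * stake for credence in p.values())  # 120

for outcome in p:
    agent_net = stake - premiums  # exactly one bet pays the agent `stake`
    print(f"{outcome}: agent nets {agent_net:+.0f}")  # -20 either way
```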
Discord: https://discord.com/invite/vtRgjbM
EleutherAI: https://www.eleuther.ai
Twitter: https://twitter.com/npcollapse
LinkedIn: https://www.linkedin.com/in/connor-j-leahy/