

An interview with an A.I. (with GPT-3 and Jeremy Nixon)
Sep 30, 2021
Jeremy Nixon, an AI researcher, joins the conversation with GPT-3, OpenAI's groundbreaking language model. They dive into the inner workings of neural networks and the evolution of machine learning, breaking down complex concepts like transformers and few-shot learning. The discussion also addresses the ethical implications of using AI in creative fields, including copyright concerns and job displacement. With a humorous twist, they explore what it means for both humans and machines to possess intelligence, questioning whether true intelligence requires free will.
AI Snips
GPT-3's Core Function
- GPT-3's core function is predicting the statistically most likely continuation of the text it is given.
- This lets it perform many different tasks, such as poetry generation or question answering, simply by setting up an appropriate prompt.
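To make the "task is defined by the prompt" idea concrete, here is a minimal sketch assuming the legacy OpenAI Python SDK (pre-1.0, `openai.Completion.create`) as it existed around this 2021 episode; the Q&A prompt text and the `davinci` engine name are illustrative assumptions, not the exact setup used on the show.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed placeholder; supply your own key

# The "task" is defined entirely by the prompt: a short Q&A framing
# followed by the question we actually want answered.
prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris.\n"
    "Q: What is the largest planet in the solar system?\n"
    "A:"
)

# Legacy Completions call (openai-python < 1.0, circa 2021).
response = openai.Completion.create(
    engine="davinci",      # base GPT-3 model of that era
    prompt=prompt,
    max_tokens=16,
    temperature=0.0,       # low temperature for factual Q&A
    stop=["\n"],           # stop at the end of the answer line
)

print(response.choices[0].text.strip())  # e.g. "Jupiter."
```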
Prompting GPT-3 for Poetry
- Spencer explains how to prompt GPT-3 for specific outputs like poetry.
- Given a few lines of poetry as a starting point, the model continues in the same poetic style.
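The same pattern applies to poetry: seed the prompt with a few lines of verse and let the model continue. A rough sketch, again assuming the legacy pre-1.0 OpenAI SDK; the seed lines are invented, not Spencer's actual prompt.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed placeholder

# Seed the model with the opening of a poem; it tends to continue in kind.
poem_seed = (
    "The evening settles on the quiet town,\n"
    "The streetlights hum their small electric tune,\n"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=poem_seed,
    max_tokens=60,
    temperature=0.9,  # higher temperature encourages more creative continuations
)

print(poem_seed + response.choices[0].text)
```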
GPT-3 Attempts and Fairness
- GPT-3's impressive outputs can be misleading if you don't know how many attempts were needed to produce them.
- Spencer limited GPT-3 to two responses per prompt for a fairer evaluation.
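A small sketch of that fixed-attempts idea, under the same assumed legacy SDK: request exactly two completions per prompt and report both, rather than cherry-picking the best of many. The cap of two matches the episode's description; the prompt itself is illustrative.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed placeholder

prompt = "Write a two-line poem about the ocean:\n"

# Generate exactly two candidate responses per prompt, mirroring the
# self-imposed limit described in the episode, and show both of them.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=40,
    temperature=0.8,
    n=2,  # two attempts, no cherry-picking beyond that
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Attempt {i}: {choice.text.strip()}")
```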