'Simulators' by Janus

LessWrong (Curated & Popular)

GPT-3 Is a Huge Model, Trained on Huge Data for Predicting Text.

A lot of prior work on AI alignment is relevant to GPT, but we need something like an ontological adapter pattern to map those prior concepts onto the appropriate objects. Namelessness can be not only a symptom of powerful predictors' extrapolations falling through conceptual cracks, but also a cause: what we can represent in words is what we can condition on for further generation. To whatever extent this shapes private thinking, it is a strict constraint on communication, where thoughts must pass through the bottleneck of words.
