LessWrong (Curated & Popular)

'Simulators' by Janus

Getting the Truth From a Finitely Powerful GPT

The supervised mindset causes capabilities researchers to focus on closed-form tasks rather than GPT's ability to simulate open-ended, indefinitely long processes. Let's see how the oracle mindset causes a blind spot of the same shape in the imagination of a hypothetical alignment researcher. Thinking of GPT as an oracle brings strategies to mind like asking GPT-N to predict a solution to alignment from 2000 years in the future, à la a linked LessWrong article. This is probably not the best approach for a finitely powerful GPT: the process of generating a solution in the order and resolution in which it would appear in a future article is probably far from the optimal multi-step algorithm for computing the answer to…
