
LessWrong (30+ Karma) “Cognitive Tech from Algorithmic Information Theory” by Cole Wyeth
Epistemic status: Compressed aphorisms.
This post contains no algorithmic information theory (AIT) exposition, only the rationality lessons that I (think I've) learned from studying AIT / AIXI for the last few years. Many of these are not direct translations of AIT theorems, but rather frames suggested by AIT. In some cases, they even fall outside of the subject entirely (particularly when the crisp perspective of AIT allows me to see the essentials of related areas).
Prequential Problem. The posterior predictive distribution screens off the posterior for sequence prediction; therefore it is easier to build a strong predictive model than to understand its ontology.
Reward Hypothesis (or Curse). Simple first-person objectives incentivize sophisticated but not-necessarily-intended intelligent behavior; therefore it is easier to build an agent than it is to align one.
Coding Theorem. A multiplicity of good explanations implies a better (ensemble) explanation.
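The aphorism compresses a real theorem; a hedged sketch in standard notation (not from the original post): the Coding Theorem relates the universal discrete semimeasure to prefix Kolmogorov complexity,

```latex
% Universal discrete semimeasure: mass from every halting program printing x
\mathbf{m}(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|},
\qquad
-\log_2 \mathbf{m}(x) \;=\; K(x) + O(1).
```

Since the sum ranges over all programs printing x, several distinct short explanations each contribute probability mass, so the ensemble assigns x more weight than any single explanation alone — one reading of the aphorism.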
Gács' Separation. Prediction is close but not identical to compression.
Limit Computability. Algorithms for intelligence can always be improved.
Lower Semicomputability of M. Thinking longer should make you less surprised.
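Lower semicomputability means M can be approximated from below; a sketch of the standard statement (notation assumed, not in the original post):

```latex
% A computable, nondecreasing approximation M_t converges to M from below:
M_t(x) \nearrow M(x) \ \text{as}\ t \to \infty,
\qquad\text{so}\qquad
-\log M_t(x) \searrow -\log M(x).
```

On this reading, your estimated surprisal of the observed sequence can only fall as you spend more computation.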
Chaitin's Number of Wisdom. Knowledge looks like noise from outside.
Dovetailing. Every meta-cognition enthusiast reinvents Levin/Hutter search, usually with added epicycles.
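Dovetailing is the fair-interleaving trick underlying Levin/Hutter search. A minimal runnable sketch of the idea (the enumeration of "programs" here is a hypothetical toy, not anything from the post):

```python
import itertools

def dovetail(nth_program):
    """Fairly interleave infinitely many programs (the core of Levin search).

    In round n we start program n, then advance every started program by
    one step, so each program eventually receives unbounded compute.
    Yields (index, value) whenever a program emits a non-None value.
    """
    started = []
    for n in itertools.count():
        started.append(nth_program(n))
        for i, prog in enumerate(started):
            try:
                value = next(prog)
            except StopIteration:
                continue  # program i halted without an answer; keep going
            if value is not None:
                yield i, value

# Toy enumeration (hypothetical): program i works for i steps, then
# reports i if it is a square root of 289, and halts otherwise.
def program(i):
    def run():
        for _ in range(i):
            yield None  # one "step" of work, no answer yet
        if i * i == 289:
            yield i
    return run()

idx, val = next(dovetail(program))  # program 17 wins
```

The point of the round-robin schedule is that no single non-halting program can starve the rest — which is exactly the property meta-cognitive search schemes keep rediscovering.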
Grain of Uncertainty [...]
---
First published:
December 11th, 2025
Source:
https://www.lesswrong.com/posts/geu5GAbJyXqDShT9P/cognitive-tech-from-algorithmic-information-theory
---
Narrated by TYPE III AUDIO.
