Explore the pitfalls of dismissing warnings because earlier ones never came true, illustrated by a doctor's cautions about an experimental drug. Dive into the escalating conflict in Ukraine and the complications of supplying advanced weapons. Unpack the interplay between cognitive decline in leaders and the safety concerns surrounding AI, stressing vigilance against complacency. Finally, learn about risk assessment in politics, emphasizing the need to adapt predictions as new evidence emerges while navigating trust in expert advice.
Repeated failed warnings can dangerously undermine genuine caution, leading to potentially catastrophic outcomes when thresholds are eventually crossed.
In geopolitical situations, ignoring escalating risks out of a false sense of security drawn from past outcomes can lead to severe miscalculations with grave consequences.
Deep dives
The Fallacy of Predictive Dismissal
Repeated failed predictions can create a false sense of security that ultimately undermines genuine caution. For instance, if a doctor warns a patient against escalating a drug dosage on safety grounds but the patient keeps increasing it without apparent harm, the doctor's credibility erodes. The patient may then disregard future warnings entirely, with potentially lethal consequences when a threshold is finally crossed. Dismissing caution because earlier warnings did not materialize overlooks the fact that some risks only become significant after repeated exposures or escalations.
The Complexity of Risk Assessment in Escalation
In geopolitical contexts, such as the war in Ukraine, assessing the risks of military escalation is highly nuanced. That escalatory actions have not led to catastrophic consequences so far does not mean they will remain safe indefinitely. Estimating the probabilities of potential outcomes becomes harder still when the decision-maker's threshold for escalation is itself uncertain. Caution should remain a priority: dismissing concerns without weighing these dynamics could lead to serious miscalculations.
Dangers of Underestimating Cumulative Risks
The discussion around AI safety illustrates the danger of underestimating risks that accumulate over time. As the technology evolves rapidly, the assumption that past assurances of safety translate into ongoing security can be misleading. Each safe instance in a sequence does not imply the absence of future risk; instead, prior reassurances may breed a dangerous complacency toward genuine threats. Complex technological advances demand careful scrutiny, since dismissing caution on the strength of past outcomes can mean overlooking latent dangers that grow with each further development.
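One way to make the cumulative-risk point concrete is simple probability arithmetic: even if each individual escalation carries only a small chance of disaster, the chance of at least one disaster grows quickly with repetition. A minimal sketch, where the per-step probabilities and step counts are illustrative assumptions rather than figures from the discussion:

```python
# Sketch: how small per-step risks compound over repeated escalations.
# The numbers are illustrative assumptions, not estimates from the text.

def cumulative_risk(per_step_risk: float, steps: int) -> float:
    """Probability of at least one disaster across `steps` independent
    escalations, each carrying `per_step_risk` of going wrong."""
    return 1 - (1 - per_step_risk) ** steps

for p in (0.01, 0.05):
    for n in (1, 10, 50):
        print(f"per-step risk {p:.0%}, {n:2d} steps -> "
              f"cumulative risk {cumulative_risk(p, n):.1%}")
```

With a 5% per-step risk, ten escalations already imply roughly a 40% chance of at least one catastrophe, which is why "it was fine the last nine times" is weak reassurance.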
Suppose something important will happen at a certain unknown point. As someone approaches that point, you might be tempted to warn that the thing will happen. If you’re being appropriately cautious, you’ll warn about it before it happens. Then your warning will be wrong. As things continue to progress, you may continue your warnings, and you’ll be wrong each time. Then people will laugh at you and dismiss your predictions, since you were always wrong before. Then the thing will happen and they’ll be unprepared.
Toy example: suppose you’re a doctor. Your patient wants to try a new experimental drug, 100 mg. You say “Don’t do it, we don’t know if it’s safe”. They do it anyway and it’s fine. You say “I guess 100 mg was safe, but don’t go above that.” They try 250 mg and it’s fine. You say “I guess 250 mg was safe, but don’t go above that.” They try 500 mg and it’s fine. You say “I guess 500 mg was safe, but don’t go above that.”
They say “Haha, as if I would listen to you! First you said it might not be safe at all, but you were wrong. Then you said it might not be safe at 250 mg, but you were wrong. Then you said it might not be safe at 500 mg, but you were wrong. At this point I know you’re a fraud! Stop lecturing me!” Then they try 1000 mg and they die.
The lesson is: “maybe this thing that will happen eventually will happen now” doesn’t count as a failed prediction.
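The dosage story can also be read as a statement about an unknown threshold: surviving a dose only tells you the lethal threshold lies above that dose, not that the next increase is safe. A rough Monte Carlo sketch of that update, where the prior over thresholds and the dose schedule are assumptions chosen for illustration:

```python
import random

# Illustrative assumption: the lethal threshold is unknown and drawn
# from a broad uniform prior. Surviving a dose only rules out
# thresholds at or below that dose.
random.seed(0)
thresholds = [random.uniform(50, 2000) for _ in range(100_000)]
doses = [100, 250, 500, 1000]

survivors = thresholds
for dose in doses:
    # P(this dose is lethal | patient survived every earlier dose):
    # the fraction of still-consistent thresholds at or below this dose.
    risk = sum(t <= dose for t in survivors) / len(survivors)
    print(f"{dose:4d} mg: risk given survival so far = {risk:.1%}")
    # Condition on surviving this dose as well.
    survivors = [t for t in survivors if t > dose]
```

Under this toy prior, the conditional risk of each next step rises even as every earlier warning "fails": roughly 3% at 100 mg, 8% at 250 mg, 14% at 500 mg, and about a third at 1000 mg. That is exactly the dynamic the patient's mockery ignores.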