
> Admittedly I’m uncertain whether the period length of the prediction is relevant to its legitimacy. Naively, the longer out we are predicting the more room there is for future knowledge to prove the prediction wrong. For instance, it seems unlikely we will cool the planet in 5 years. In 30 years, who knows. Hopefully.

We will not *cool* the planet in 30 years. It would already be excellent to stop emissions first. Even then, drawing the existing excess carbon out of the atmosphere within 30 years is far less probable. And even if we did, the Earth would not return to pre-industrial temperatures, because of the thermal inertia of the oceans and the reduced albedo from diminished snow and ice cover.

However, the main point I want to make about this passage is that you do, in fact, engage in Bayesianism when you predict that carbon emissions won't be solved in five years. You judge that it would likely require not a single invention but a series of them: in fusion energy generation, in energy storage, in robot intelligence to staff the factories producing that storage capacity (there is a huge shortage of skilled human workers in this area, and humans are slow to learn), in rapid factory design and construction, and in a rapid ramp-up of mining. Realistically (there is a Bayesian argument behind this word too, though I won't unpack it here), most of this is impossible without AGI, so you would implicitly need to condition on the invention of AGI as well.
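The conjunctive structure of that argument can be sketched numerically. The per-breakthrough probabilities below are hypothetical placeholders, not estimates anyone has made; the point is only that even generous odds on each component compound into a small joint probability:

```python
# Hypothetical probabilities that each breakthrough arrives within 5 years.
# These numbers are illustrative assumptions, not claims from the comment.
breakthroughs = {
    "fusion energy generation": 0.3,
    "grid-scale energy storage": 0.4,
    "robot intelligence for factory staffing": 0.2,
    "rapid factory design and construction": 0.4,
    "rapid mining ramp-up": 0.5,
}

# Treating the events as independent (a simplification: in reality AGI
# would correlate them, which is exactly the "condition on AGI" point),
# the chance that ALL of them happen is the product of the individual odds.
p_all = 1.0
for name, p in breakthroughs.items():
    p_all *= p

print(f"P(all five breakthroughs within 5 years) = {p_all:.4f}")  # 0.0048
```

Even with each component at 20-50%, the conjunction lands below half a percent, which is the Bayesian skeleton behind "unlikely in five years".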

So I don't see any extra content in this argument of Deutsch's beyond his disagreement with Bayesianism, which you covered before.

I agree that Bayesianism doesn't look like a principled approach to predicting the future. But is there any approach that gives better practical results? We know that Bayesianism leads to better-than-chance results even when applied to predictions about the future. And we need to make decisions and prioritise efforts today, based on *some* estimates of the relative probabilities of future events.
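For what it's worth, the machinery needed for "some estimates" is tiny. This is a toy Bayes-rule update with made-up numbers, just to show the kind of calculation being defended:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule.

    prior          -- P(H) before seeing the evidence
    p_e_given_h    -- P(E | H), likelihood of the evidence if H is true
    p_e_given_not_h -- P(E | not H), likelihood if H is false
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

# Hypothetical: a 10% prior on some future event, then evidence that is
# twice as likely if the event is on track (0.8 vs 0.4).
posterior = bayes_update(prior=0.1, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(f"posterior = {posterior:.3f}")  # posterior = 0.182
```

Whatever its foundational troubles, this is the update rule that produces the better-than-chance track record the comment appeals to.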
