The worldview of David Deutsch is a view quake. Especially for a young rationalist.
As I wrote in Deutsch Eats Rationalism, his ideas are difficult to reconcile with core tenets of rationalism. I’ve attempted to organize some of these conflicts and trace the influences on Deutsch that give rise to them.
One example of the tension is his claim that the future is unknowable1.
How is this part of a view quake? Isn’t this as trivial as it gets? Everyone knows the future is uncertain. It sits alongside claims that the air is good for breathing, ice cream is tasty, and Beyoncé had one of the best videos of all time.
And aren’t rationalists the people least likely to underrate the uncertainty of the future? We rationalists know about the gambler’s fallacy, outcome bias, variance, and expected value. We calculate confidence intervals, estimate ranges, and supplement our models with sensitivity analyses2.
In fact, it’s part of what it means to be a rationalist: accounting for uncertainty.
Deutsch’s claim isn’t about variance: the future is not only uncertain, it is inherently unknowable. Any claims about the future concerning human activity, including probabilistic ones, are akin to prophecy.
That’s a pretty strong position, so let’s unpack it.
The argument
Deutsch’s argument is short and easy to understand:
1. What happens to future civilization is influenced by what knowledge gets discovered.
2. In the present, we can’t know what future knowledge will be discovered.
3. Therefore, we can’t know what will happen to future civilization.
That's it. That's the whole argument3. Brainy people are making thousands of predictions around the world with complicated math as we speak. Are they partaking in a valid enterprise or merely producing prophecies?
Deutsch surely isn’t arguing against all predictions. That would be absurd. He’d be arguing against science.
So what is Deutsch against exactly? Let’s look at the two premises in reverse order.
The second premise is true more or less by definition. Undiscovered knowledge must be unknown; otherwise it would be discovered knowledge. We can’t have tomorrow’s knowledge today.
The philosopher Alasdair MacIntyre explains this point in After Virtue:
> Some time in the Old Stone Age you and I are discussing the future and I predict that within the next ten years someone will invent the wheel. ‘Wheel?’ you ask. ‘What is that?’ I then describe the wheel to you, finding words, doubtless with difficulty, for the very first time to say what a rim, spokes, a hub and perhaps an axle will be. Then I pause, aghast. But no one can be going to invent the wheel, for I have just invented it.
>
> In other words, the invention of the wheel cannot be predicted. For a necessary part of predicting an invention is to say what a wheel IS; and to say what a wheel IS just IS to invent it. It is easy to see how this example can be generalized. Any invention, any discovery, which consists essentially in the elaboration of a radically new concept cannot be predicted, for a necessary part of the prediction is the present elaboration of the very concept whose discovery or invention was to take place only in the future. The notion of the prediction of radical conceptual innovation is itself conceptually incoherent.
We cannot predict future knowledge. Inherently.
Now let’s look at the first premise: The discovery of knowledge influences the future. Specifically, it can influence outcomes that we are trying to predict.
This also seems relatively straightforward. However, Deutsch has an extremely broad view of which measurable outcomes can be impacted by future knowledge discoveries: practically all of them. Deutsch’s momentous dichotomy holds that there are no barriers to what knowledge can be created except the laws of physics.
Even if we don’t want to swallow the momentous dichotomy whole, we can at least concede that there are many outcomes that can uncontroversially be impacted by future knowledge.
Insofar as the thing that we are predicting can be influenced by future knowledge, the thing we are predicting is unpredictable. After all, we cannot predict future knowledge.
Looking at a specific example
Let’s be concrete and use climate change as an example, with some hesitancy since it’s so politically charged4. Nevertheless, it’s helpful because it’s well known.
As we know, we can look at historic global temperature trends and conjecture that the recent increase is human-induced. So far so good. We may then forecast future global temperatures by extrapolating these trends, which is what many scientists do. We may also forecast impacts of these trends, such as flooding and devastation to wetlands. Economists may forecast impacts on GDP.
According to Deutsch’s argument, purporting to know these forward-looking things (even probabilistically) is tantamount to prophecy. Whether we’re predicting total destruction or mere inconvenience, we are LARPing as soothsayers.
Why? It’s because the impacts of global warming can be influenced by future knowledge. In fact, whether the actual temperature rises itself can be influenced by future knowledge. Namely, we can create knowledge to change the climate. There is no law of physics preventing this. Indeed, if you are concerned about anthropogenic climate change, you agree: humans can change the climate.
This is not to say that we will create the requisite knowledge to cool the planet, merely that we can. Whether we do or not, nobody knows. We can’t know what future knowledge will be created before it’s created.
Forecasting that temperatures will rise by 2.5 degrees by 2050 with impacts X, Y, and Z is equivalent to saying that if we don’t solve the problem, we won’t solve the problem.
This is not to say that such forecasts are not valuable. Quite the contrary. It’s a crucial exercise as long as the assumptions are laid out clearly and the forecast is cast as an if-then statement.
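To make the if-then structure explicit, here is a minimal sketch in Python. The function and all of its numbers are illustrative assumptions, not real climate data; the point is only that the forecast is valid strictly under its stated “if”.

```python
# A minimal sketch of an "if-then" forecast. The anomaly and trend values
# below are illustrative assumptions, not real climate data.

def warming_forecast(current_anomaly_c: float,
                     trend_c_per_decade: float,
                     decades_ahead: float) -> float:
    """IF the historic trend continues unchanged (i.e., no new knowledge
    intervenes to alter it), THEN the anomaly extrapolates linearly."""
    return current_anomaly_c + trend_c_per_decade * decades_ahead

# Hypothetical usage: a 1.2 C anomaly today, warming 0.2 C per decade,
# projected 3 decades out. The answer holds only under the stated "if";
# it is a conditional forecast, not a prophecy about 2050.
print(warming_forecast(1.2, 0.2, 3.0))  # 1.8
```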
The impacts from climate change entirely depend on whether we solve problems posed by climate change.
The world will continue to warm if we don’t do anything about it.
Islands will be flooded if we don’t do anything about it.
GDP will be lost if we don’t do anything about it.
So we probably should do something about it. We just can’t predict what we’ll do. Why not? Future knowledge and all that.
We may respond that our predictions account for this. Namely, we are predicting (perhaps probabilistically) that as a society we won’t do anything about it. Coordination is hard. Incentives are misaligned. People are selfish. Alternatively, we may predict that we will do something about it. We solved the whole ozone-hole problem, after all.
At the risk of repeating myself, the problem in both cases is that we simply don’t know what future knowledge will be discovered (including political knowledge to solve coordination problems). We don’t know this probabilistically either. It’s not as if we could have known in 1850 that there was a 65% chance nuclear energy would be discovered 100 years later. We wouldn’t have known what nuclear energy was. It hadn’t been invented yet5!
What predictions are legitimate?
As we’ve seen, if-then-style predictions are fine. Indeed, they are the essence of scientific experiments. However, predicting what problems will or won’t be solved is not fine. Many illegitimate predictions tacitly assume problems won’t be solved.
Another way for a prediction to be legitimate is if the explanation it is based on takes account of future knowledge. This is not to say that the explanation predicts what that knowledge will be; rather, the prediction doesn’t hinge on it. For instance, Robin Hanson and David Deutsch discuss predictions of the stock market, namely that it will continue to take a random walk. They agree that this prediction does not depend on the future growth of knowledge. Even as knowledge accumulates, the stock price takes a random walk because the market is self-correcting.
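As a rough illustration (a sketch with made-up parameters, not the actual model Hanson and Deutsch discuss), the legitimate prediction here is about the process, not any particular path:

```python
import random

# A toy random walk for a stock price. The daily shocks stand in for the
# unpredictable arrival of new knowledge; the volatility is a made-up number.

def random_walk(start_price: float, days: int, daily_vol: float = 0.01) -> list:
    prices = [start_price]
    for _ in range(days):
        shock = random.gauss(0.0, daily_vol)  # news we cannot foresee
        prices.append(prices[-1] * (1.0 + shock))
    return prices

# The *process* is the prediction; any particular endpoint is not.
print(random_walk(100.0, 250)[-1])  # differs on every run
```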
An example of an illegitimate prediction is estimating the likelihood of the extinction of our species. Unfortunately, this seems to be a common pastime among rationalists. The reason it’s illegitimate is, you guessed it, future knowledge: whether or not we go extinct depends on what future knowledge gets discovered.
Will we all die at the hands of meteors, robots, or something else? Toby Ord estimates the odds at one in six. In reality, we can’t know if there’s a 16% chance or an anything-percent chance. We can’t know the likelihood of creating the requisite knowledge to solve these particular problems.
Beware of naive base rates
Another failure mode in prediction is to base predictions blindly on historic base rates. We need an accompanying explanation that tells us why those historic base rates will apply regardless of what future knowledge is created.
Using base rates to predict the probability that a civilization-ending meteor will hit in the next 100 years is fine… with the massive caveat that this is the counterfactual in which we don’t do anything about it.
Predicting the probability that a civilization-ending meteor will hit and that we won’t do anything about it, so civilization will very much end, is not fine.
The Earth may be the one place in the universe where meteors are deflected away rather than drawn toward the planet by gravity. This would be due to the knowledge that exists on Earth, namely knowledge about how to deflect meteors.
Let that sink in. A meteor may never hit Earth again, entirely due to the knowledge that we simple humans create. This is the sort of optimistic scenario that Deutsch reminds us is possible. It just requires new knowledge.
Scientific hypotheses aren’t mere extrapolations from historic trends à la induction. As Hume pointed out almost three centuries ago, it does not logically follow that the future resembles the past. Take the classic example of how we know the sun will rise tomorrow: it’s not because it rose yesterday. Our prediction is based on our explanations of the orbit, tilt, and rotation of the Earth. This explanation encompasses both the past and the future. In fact, it does not predict that the sun will always rise: our current theory says the sun will burn out in 7 billion years, give or take6.
Nagging uncertainties
Admittedly I’m uncertain whether the period length of the prediction is relevant to its legitimacy. Naively, the longer out we are predicting the more room there is for future knowledge to prove the prediction wrong. For instance, it seems unlikely we will cool the planet in 5 years7. In 30 years, who knows. Hopefully.
This is the paradox at the core of longtermist concerns. Future lives matter. We can even concede that future lives matter as much as current lives. We just can’t know what will happen to future lives. We don’t know what the path of knowledge discovery will be. As Bronowski said: knowledge is an unending adventure at the edge of uncertainty.
Rationalists point to areas that are likely to help. Stable institutions. Economic growth. Mitigating known existential risks. Deutsch argues that we should focus on ways to ensure knowledge growth. It’s the only way to solve problems that we don’t know about yet.
Admittedly, I do have this rationalist on my other shoulder telling me that predictions have value. Even these pesky ones that can be impacted by future knowledge. We can’t know everything that will happen in the future exactly, but we can know some things. Our predictions can be better than mere noise.
For instance, the book Superforecasting showed that some people consistently predict things better than chance. A major method is basing predictions on historic base rates. Don’t tell Deutsch! They call this using the “outside view”. That’s often what being a Bayesian is: Don’t forget the base rates!
People may overestimate the chance of a terrorist attack in the aftermath of 9/11. Yet if we apply the base rate of the last 50 years as our outside view, we get a more conservative prediction (which turned out to be more accurate in this case).
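A toy version of that outside view, with hypothetical counts chosen only to show the mechanics: estimate next year’s probability from the long-run frequency rather than from the vividness of recent events.

```python
# Outside view via a smoothed base rate. The counts are hypothetical.

def laplace_base_rate(events: int, years_observed: int) -> float:
    """Laplace's rule of succession: (k + 1) / (n + 2) keeps small samples
    from producing extreme 0% or 100% estimates."""
    return (events + 1) / (years_observed + 2)

# E.g., 2 major attacks observed in 50 years -> roughly a 5.8% per-year
# estimate, far below a fear-driven guess made right after an attack.
print(laplace_base_rate(2, 50))  # 0.0576...
```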
How do we then evaluate whether our prediction was accurate or we merely got lucky? This is a thorny problem. Rationalists and superforecasters use calibration: we evaluate how often our predictions at each confidence level came true. How often were our 70%-likelihood predictions correct? If they were right 70% of the time, they were well calibrated. If they were right more often than that, we were underconfident.
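Here is a minimal sketch of such a calibration check, using a made-up track record: group predictions by stated confidence and compare each group’s stated probability to its observed hit rate.

```python
from collections import defaultdict

# Calibration check: for each stated confidence level, compare the stated
# probability with the fraction of predictions that actually came true.

def calibration(record):
    """record: list of (stated_probability, event_happened) pairs."""
    buckets = defaultdict(list)
    for prob, outcome in record:
        buckets[round(prob, 1)].append(outcome)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Made-up track record: well-calibrated 70% calls come true ~70% of the time.
record = [(0.7, True), (0.7, True), (0.7, False),
          (0.9, True), (0.9, True), (0.9, False), (0.9, True)]
print(calibration(record))  # {0.7: 0.666..., 0.9: 0.75}
```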
Now, there are plenty of wrinkles for rationalists to sort through. What counts as the correct outside view to use? Which set of predictions do you include in your calibration? But Deutsch’s criticism seems more fundamental: we shouldn’t be using probabilities to predict things that are impacted by future knowledge. They’re prophecies.
But if you’re consistently making money off your predictions, it’s hard to call them prophecies. Well, we can, but the alleged prophets will be laughing all the way to the bank.
Deutsch’s criticism seems related to Taleb’s notion of the ludic fallacy: probabilities apply in casinos but not in the real world of human interactions, which is full of unknown unknowns. We may grow a false sense of confidence as our well-calibrated predictions come in one after another, and then suddenly an unforeseen black swan event occurs.
We can’t apply these game-like probabilities to the real world.
The real world contains humans. Humans create knowledge. Knowledge is unpredictable.
Footnotes

1. This claim, like many of Deutsch’s, originally comes from Karl Popper.

2. Unfortunately, many people don’t seem to like uncertain ranges. Lyndon B. Johnson summed up this attitude best when lambasting one of his analysts: “Ranges are for cattle. Give me a number!”

4. Yud’s advice in the classic Politics is the Mindkiller: avoid citing political examples when discussing rationality, because it’s hard to stay rational about them.

5. Another potential objection is that while we may not know the contents of future discoveries, we may know their direction. For instance, we wanted to fly long before we discovered flight, and we were interested in the atom long before the discovery of nuclear energy. This doesn’t really help us: even if we’re interested in an existing problem, we don’t know what the solution will look like (otherwise we’d have the solution), we don’t know what its implications will be, and we don’t know the unknown future problems.

6. Even this assumes we won’t keep the sun going. We may create the requisite knowledge to do so by then.

7. On this question, Deutsch thinks we could cool the planet fairly rapidly if we wanted to. Imagine a thought experiment in which an alien civilization credibly threatens to destroy the planet in 12 months unless we cool it by 2 degrees. Could we do it?
> Admittedly I’m uncertain whether the period length of the prediction is relevant to its legitimacy. Naively, the longer out we are predicting the more room there is for future knowledge to prove the prediction wrong. For instance, it seems unlikely we will cool the planet in 5 years. In 30 years, who knows. Hopefully.
We will not *cool* the planet in 30 years. It would already be excellent to stop emissions first. Then, drawing the existing excess carbon out of the atmosphere within 30 years is even less probable. But even then the Earth wouldn’t cool to pre-industrial levels, because of the inertia of ocean temperatures and the reduced albedo from diminished snow and ice cover.
However, the main point I want to make about this passage is that you do in fact engage in Bayesianism when you predict that carbon emissions won’t be solved in five years. You judge that it would likely require not a single invention but a series of inventions: in fusion energy generation, in energy storage, in robot intelligence to staff the factories producing energy storage capacity (there is a huge shortage of skilled human workers in this area, and humans are slow to learn), and in rapid factory design and construction and rapid mining ramp-up. Realistically (and there is a Bayesian argument behind this word, too, but I won’t unpack it here), most of this is impossible without AGI, so you would need to implicitly condition on the invention of AGI as well.
So I don’t see any extra content in this argument of Deutsch’s beyond his disagreement with Bayesianism, which you covered before.
I agree that Bayesianism doesn’t look like a principled approach to predicting the future. However, is there any approach that gives better practical results? We know that Bayesianism leads to better-than-chance results even when applied to predictions of the future. And we need to make decisions and prioritise efforts today based on *some* estimates of the relative probabilities of future events.