Undecidable ≠ Unknowable: Why Physics’ Predictability Limits Are Overstated
Contra Quanta's "Next-Level Chaos" article
Quanta recently published an article titled “‘Next-Level’ Chaos Traces the True Limit of Predictability” which I think gets some details wrong. The gist of the article is that there exist physical scenarios whose futures are “undecidable”. Here, “undecidable” is a specific technical term in computer science. The article tries to explain undecidability in lay terms as meaning “that certain questions simply cannot be answered”, which becomes the clickbaity summary “it’s unsolvable, and you can prove that.” The reality is more complicated, but I’m going to have to introduce some background concepts first.

I.
Every real number has an infinite decimal expansion. For example, the number one has the infinite decimal expansion 1.00000000… with zeroes repeating forever. We can ask questions about these infinite decimal expansions. For example, we might ask “What is the first digit after the decimal point?” It’s zero, which we can just visually see from looking at the number. What’s the third digit after the decimal point? It’s also zero. What about the 69th digit after the decimal point? It’s still zero again. We know this because we know that every digit after the decimal point is zero, so the 69th digit after the decimal point is surely also zero.
Now consider the fraction 140/333. If you do the long division, you get that the decimal expansion looks like 0.4204204204… with the digits 4, 2 and 0 repeating in that order forever. What’s the first digit after the decimal point? It’s 4. What’s the third digit after the decimal point? It’s zero. What’s the 69th digit after the decimal point?
This one requires a little more thinking. The pattern is obvious. The 1st, 4th, 7th, etc. digit is always gonna be 4. The 2nd, 5th, 8th, etc. digit is always going to be 2. The 3rd, 6th, 9th, etc. digit is always going to be 0. So if we’re asking about the Nth digit, and N is divisible by 3, then the digit is going to be 0. The number 69 is divisible by 3, so the 69th digit is going to be 0.
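If you like seeing this as code, here’s a minimal Python sketch of that digit lookup (the function name and the repeating-block approach are just my illustration):

```python
def nth_digit_of_140_over_333(n: int) -> int:
    """Return the nth digit after the decimal point of 140/333."""
    # The expansion is 0.420420420..., so the digits just cycle through "420".
    repeating_block = "420"
    return int(repeating_block[(n - 1) % len(repeating_block)])

print(nth_digit_of_140_over_333(1))   # 4
print(nth_digit_of_140_over_333(3))   # 0
print(nth_digit_of_140_over_333(69))  # 0, since 69 is divisible by 3
```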
II.
What about something like the square root of 2? There are a couple of algorithms for calculating square roots, but the simplest one is probably “guess and check”. The idea is that you just make a guess as to what the square root of 2 might be, and you check if your answer is correct by multiplying it by itself and seeing if you get back 2. If you get a number bigger than 2, then your guess was too high. If you get a number smaller than 2, then your guess was too low.
So for example, we might first guess that the square root of 2 is 1, about half of two. We check our answer by calculating 1x1=1, which is smaller than 2, so our guess was too low. Okay, maybe the square root of 2 is 2. We check 2x2=4 which means our guess was too big. So we know the square root of 2 is somewhere between 1 and 2. Maybe it’s 1.5? 1.5x1.5=2.25, which is bigger than 2, so 1.5 was too big, which means the square root is somewhere between 1 and 1.5.
Already from this, we know that the first digit before the decimal point must be 1, because all real numbers between 1 and 1.5 have a 1 before the decimal point. That is to say, we can already make confident assertions about certain digits of the square root of 2 even if we don’t know exactly what value it has. Let’s keep going. Maybe the square root is 1.25? 1.25x1.25=1.5625, so we know 1.25 is too small. Here’s a table where we keep going like this:

| Guess | Guess x Guess | Verdict |
|---|---|---|
| 1.375 | 1.890625 | too small |
| 1.4375 | 2.06640625 | too big |
| 1.40625 | 1.9775390625 | too small |
| 1.421875 | 2.021728515625 | too big |
| 1.4140625 | 1.99957275390625 | too small |
| 1.41796875 | 2.0106353759765625 | too big |
| 1.416015625 | 2.005100250244140625 | too big |
| 1.4150390625 | 2.00233554840087890625 | too big |
| 1.41455078125 | 2.0009539127349853515625 | too big |
So from the progress we made so far in this table, we know the square root of 2 is somewhere between 1.4140625 and 1.414550781. This is enough to let us conclude that the first digit after the decimal point is 4, and the 3rd digit after the decimal point is also coincidentally 4. We don’t yet have enough information to figure out what the 69th digit is, but surely we could in principle figure it out, if we just kept going along with this process. Surely the question has some objectively true answer, even if we don’t know what that answer is.
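For what it’s worth, here’s a minimal Python sketch of this guess-and-check process (it halves the interval each time and uses Decimal arithmetic so the squares stay exact; the eleven-guess cutoff is just my choice for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of precision to keep these squares exact

low, high = Decimal("1"), Decimal("2")  # sqrt(2) is somewhere in here
for _ in range(11):
    guess = (low + high) / 2
    square = guess * guess
    if square > 2:
        high = guess  # guess was too big
    else:
        low = guess   # guess was too small
    print(guess, square, "too big" if square > 2 else "too small")

print("The square root of 2 is between", low, "and", high)
```

Eleven guesses in, this lands on the same interval as above: between 1.4140625 and 1.41455078125.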
III.
We can ask, of statements, whether they are true or false. For example, consider the statement “1+1=2”. This statement is true. What about the statement “1+1=3”? That statement is false.
What about “The 69th digit of the infinite decimal expansion of the square root of 2 is 7”? I don’t know whether that statement is true or false, but I do think there is an objective answer to the question: It’s either true or it’s false. I would not say that the question has no answer, or is unanswerable or anything like that, just because I happen to not know what the answer is.
With computers at our disposal, it’s not too hard to find out what the 69th digit of the square root of 2 is, and thus to find out the truth value of the statement. Wikipedia tells me that as of December 26th, 2023, we know up to the 20’000’000’000’000th digit. It wouldn’t surprise me if humanity eventually figures out a million times more digits, i.e. the 20’000’000’000’000’000’000th digit. But I’m not confident humanity will ever know the Googolth digit, or the Googolplexth digit. There are upper limits to how much computation can be done in a given time period (if you try to do more, the energy (AKA mass) required would cause your computer to collapse into a black hole or something), which means there is an upper limit to how many digits you could calculate before you run out of time due to the heat death of the universe.
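As a concrete illustration (my own sketch, not something from the article): with Python’s integer square root you can get the 69th digit exactly, with no floating-point rounding involved.

```python
from math import isqrt

n = 69
# isqrt(2 * 10**(2*n)) is floor(sqrt(2) * 10**n): the integer "1" followed by
# the first n digits of the decimal expansion. Its last digit is the nth digit.
print(isqrt(2 * 10 ** (2 * n)) % 10)
```

Unless I’ve slipped somewhere, that prints 2, so the “is it 7” statement above turns out to be false.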
So it gets a bit more fuzzy and philosophical here: Is there an objective truth value to the question “the googolplexth digit of the square root of 2 is 7”, given that the laws of physics seemingly prevent humans from ever knowing what that truth value is? I believe the general consensus (and my personal opinion) is “yes” (though I can appreciate opposing viewpoints here): We might not know whether the truth value is “true” or “false”, but the truth value is definitely either “true” or “false”.
There are statements which don’t have an objective truth value in the above sense. For example, the statement “This statement is false”. You get an inconsistency whether you try to assign the truth value “true” or the truth value “false” to it. So this is a statement that is neither true nor false.
IV.
Definitely during the early 20th century, but possibly as far back as around 300BC (around the time of Euclid and Aristotle), mathematicians have dreamed of automating mathematics. It takes “human creativity” to come up with proofs for theorems. What they were hoping for was to figure out some way to start with a list of axioms, and just apply some mechanical rules to them (like Aristotle’s rules of syllogism: “if A is true and A implies B, then B is true”) to enumerate all the logically deducible consequences of those axioms. You would then know all provable theorems in mathematics, just like we know all the digits of the decimal expansion of 140/333.
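Here’s a toy Python sketch of that dream, under the (made-up, drastically simplified) assumption that the “axioms” are single propositional letters and the only mechanical rule is modus ponens; real mathematics needs far richer machinery, but the mechanical flavor is the same:

```python
# Made-up toy axioms and implications, purely for illustration.
axioms = {"A", "B"}
implications = [("A", "C"), ("C", "D"), ("B", "E")]  # read ("A", "C") as "A implies C"

# Mechanically apply modus ponens ("if A is true and A implies B, then B is true")
# until no new statements appear.
known = set(axioms)
changed = True
while changed:
    changed = False
    for premise, conclusion in implications:
        if premise in known and conclusion not in known:
            known.add(conclusion)
            changed = True

print(sorted(known))  # ['A', 'B', 'C', 'D', 'E']: everything derivable from the axioms
```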
In the 1930s, this was proven impossible by Kurt Godel (via the Incompleteness Theorem) and Alan Turing (via the Halting Problem), though there were earlier, partial hints of this result from Bertrand Russell and others.
Godel proved that within any axiomatic mathematical system, there will exist true statements which cannot be proven true within that system (see footnote 1). The classic Godelian statement is “This statement has no proof”. If you could prove that the statement were true, then it’d be false. So no such proof would exist (assuming your system is consistent and thus does not contain proofs for false statements). Which means the statement is true.
Alan Turing came up with a model of computation called “Turing Machines” (he didn’t name it that; he was modest; we-as-in-humanity named it that) that is roughly equivalent to our in-real-life, physical computers (see footnote 2). Around the same time, he also came up with the “Halting Problem”, which showed that it is impossible to write an algorithm that takes, as input, another algorithm and determines whether that second algorithm halts in a finite amount of time (as opposed to going into an infinite loop). Henry Rice then extended this result to “Rice’s Theorem”, which says that no algorithm can decide, for an arbitrary program, whether any non-trivial property of that program’s behavior holds.
It would have been nice if Rice’s Theorem weren’t true: We would have created something like a “spell checker” for programming languages. You’d take your source code and put it into the “spell checker” and it would tell you if the program ever went into an infinite loop, or if it ever leaked personal data, or if it ever crashed, and so on. Unfortunately, it’s impossible to do that in the general case: there must exist some source code where your spell checker will either give the wrong answer, or else it will itself go into an infinite loop or crash or something.
The proof is as follows: Let’s say you did solve the halting problem, and so you have some function doesHalt(f), where you pass in a function f, and doesHalt returns true if f halts, and false if f does not halt. You would then write a new function, call it g, which says:

```
function g():
    if doesHalt(g):
        loop forever
    else:
        halt
```

That is, the g function calls doesHalt, passing in itself, and then does the reverse of whatever doesHalt says it does: if doesHalt says that g halts, then it doesn’t. If doesHalt says that g doesn’t halt, then it does.
(A completely unnecessary aside (feel free to skip this parenthetical): It would seem like all of these things are variations on “This sentence is false”. Godel’s “This statement has no proof”, Turing’s “This program halts if it doesn’t halt”, Bertrand Russell’s “The set of all sets that do not contain themselves”, etc. Naively, one might think the problem is self-referentiality. Noson Yanofsky has a fascinating paper showing that this is not the case. Rather, the problem arises when certain conditions are satisfied about a fixed-point. It’s just that self-referentiality is often the easiest way to force a fixed-point with the desired conditions.)
V.
Now here’s the thing: For every computer program, when you run it, it either halts, or it doesn’t. That is to say, there is an objective answer to the question “does this computer program ever halt?” The Halting Problem doesn’t prove that there is no answer to the question. What it proves is that it’s impossible to write an algorithm that runs on a Turing Machine that will answer this problem. The general consensus is that our human brains are about as powerful as Turing Machines (that is to say, our minds cannot think of anything that Turing Machines could not also, in principle, compute), but as far as I know, this hasn’t been proven.
The Halting Problem is also relative to a particular computation model. A Turing Machine cannot solve the Halting Problem for Turing machines, but there are more powerful computation models. One example is “A Turing Machine with a Halting Problem Oracle”, which I’ll call “o1” for short (no relation to the ChatGPT model of a similar name).
o1 can solve the Halting problem for Turing Machines. That is to say, given the source code for any program or function written to be run on Turing Machines, o1 can determine whether or not that program will eventually halt. However, o1 has its own Halting Problem: it is impossible to write an algorithm, running on an o1 machine, that decides whether an arbitrary program or function written to be run on o1 machines will eventually halt.
But we can invent even more powerful models of computation than that: o2 machines can solve the o1 halting problem, but can’t solve the o2 halting problem. o3 machines can solve the o2 halting problem, but not the o3 halting problem, and so on.
Maybe human brains are more powerful than Turing machines (and thus can solve the halting problem for Turing machines). Maybe human brains are equivalent in power to o420 machines. Who knows?
VI.
Okay, we’re finally ready to get back to the Quanta article. The Quanta article says a physicist basically constructed a pinball machine that encodes a Turing Machine. That is to say: you place your ball and pins a certain way, and then you let the ball bounce around and eventually land somewhere, and wherever it lands represents the results of the computation.
I’m not sure what the novel idea being presented here is. In 1982, Edward Fredkin and Tommaso Toffoli demonstrated that you could encode a Turing machine using billiard balls.
The argument presented in the Quanta article is that there exist undecidable problems in the Turing Machine model of computation, with the Halting Problem being one such example. Turing Machines cannot in general “predict the outcome” of running some Turing Machine source code (e.g. they cannot predict whether the corresponding program will halt or not). We can (in principle, perhaps with some simplifications like ignoring friction, etc.) construct physical machines that encode Turing Machines. Because Turing Machines cannot predict the outcome of Turing Machines, Turing Machines cannot predict the outcome of these physical machines. Therefore, the laws of physics (even just Newtonian physics—the billiard ball computer doesn’t depend on quantum phenomena or relativity or anything advanced like that) are undecidable. Therefore, there exist things which are unknowable/unsolvable to humans, even in principle.
But the billiard ball computer is going to do some specific, deterministic thing! The underlying Turing Machine is going to perform some specific, deterministic computation! And we, as humans, can observe and/or model the machine and how it’ll evolve over time! And maybe it’ll be “difficult” (i.e. take some computation) to figure out what the state will be 100 time-steps into the future, or a googolplex time-steps into the future, or whatever. There’s nothing mysterious or mystical about that!
I’m trying not to put words in Quanta’s mouth here.
The article references Laplace’s Demon, which is a hypothetical infinitely intelligent creature used in a thought experiment. The idea is that if this demon knew the position and momentum of every particle in the universe, it could fully predict everything that would ever happen. Why a “demon” and not a “human”? Presumably because humans have finite intelligence: Even if you gave a human the position and momentum of every particle in the universe, probably the human would have trouble keeping track of them all in their working memory and would not be able to perform the calculations necessary to make the proper predictions. We’re supposing here that the demon is hyper intelligent, and is able not only to hold all these numbers in its working memory at the same time, but can also perform all the necessary calculations with infinite precision and accuracy, so as to be able to make the correct predictions.
Quanta then says things like:
> In recent years, a third limitation has been percolating through physics — in some ways the most dramatic yet. Physicists have found it in collections of quantum particles, along with classical systems like swirling ocean currents. Known as undecidability, it goes beyond chaos. Even a demon with perfect knowledge of a system’s state would be unable to fully grasp its future.

(emphasis added)

> “I give you God’s view,” said Toby Cubitt, a physicist turned computer scientist at University College London and part of the vanguard of the current charge into the unknowable, and “you still can’t predict what it’s going to do.”

(emphasis added)

> Undecidability means that certain questions simply cannot be answered.

> These examples “place major limitations on what we humans can come up with,” said David Wolpert, a researcher at the Santa Fe Institute who studies the limits of knowledge but was not involved in the recent work. “And they are inviolable.”

(emphasis added)

> Even for someone with complete knowledge of the machine and unlimited computing power, certain questions regarding its fate remain unanswerable.
I think Quanta (and the scientists whose views Quanta presents) are just plain wrong here, and they are wrong on multiple different claims.
Fallacy: Undecidability means that certain questions simply cannot be answered
The Halting Problem is the classic example of an undecidable problem, but the question “Does this computer program eventually halt?” has an objective answer. It can be answered. It just can’t be answered by a Turing Machine.
Fallacy: Undecidability means that humans cannot know the answer
This relies on the unstated assumption that human minds have equal (or lesser) power than Turing machines. This is widely believed to be true, but has yet to be proven. Perhaps human minds are more powerful than Turing machines, and thus could solve the halting problem for Turing machines.
More to the point, consider the Godelian statement “This statement has no proof”. The undecidability is relative to a particular axiomatic system, so the full statement should really be, for example, “This statement has no proof within the axiomatic system ZFC” if we were working in the ZFC system. ZFC cannot prove that statement to be true (assuming, as is standard, that ZFC is consistent and thus does not contain any proofs of false statements; if it did contain a proof of that statement, then that statement would be false). So there is no proof of that statement in ZFC. Which means the statement is true.
We, as humans, know that the statement is true, even though we cannot prove it in ZFC. Instead, we’re proving it outside of ZFC. I presented a proof, just now, using informal English words. Our human minds are “more powerful” than ZFC in that we can consider different/alternate axiomatic systems and thus can prove things that are true (within those other systems) that you would not be able to prove within ZFC.
Perhaps our human minds are similarly “more powerful” than the laws of physics such that we can know the truth value of all statements within physics, even if those statements cannot be proven within physics itself.
Fallacy: Even if you were Laplace’s demon (or even if you had infinite computing power), you could not solve undecidable problems
Let me get a pedantic disclaimer out of the way right off the bat: If you make a false assumption, you can prove anything (for example, you can simultaneously prove both a statement and its negation). Being Laplace’s demon, or having infinite computing power, is physically impossible, as mentioned earlier, due to laws-of-physics limitations on how much computing is possible. But let’s handwave all of that away and try to reason counterfactually about what would be possible if we were Laplace’s demon and/or had infinite computing power anyway.
If you were Laplace’s demon, then you essentially have access to infinite computing power, which is to say, you are capable of doing an infinite amount of computation in a finite amount of time. Here’s the proof: Either the universe is infinite, or it is not. If it is not, then the universe can be represented by a DFA (see footnote 2) which is simpler than a Turing Machine, and we can actually solve the Halting Problem for DFAs anyway. So for the rest of this proof, let’s handle the case where the universe is infinite. You are, by presupposition, able to predict the motion of some particle, let’s call it X. In order to have done that, you have to have calculated the gravitational impact of every other particle in the universe on the particle X. There are infinitely many other particles, and so you were somehow able to do a calculation that contains an infinite amount of computation within a finite amount of time.
Once you are able to do an infinite amount of computation in a finite amount of time, the Halting Problem is solved: Just simulate {running the program for an infinite amount of time} within a finite amount of time, and see: at the end, did it halt or did it keep looping?
VII.
So what’s the takeaway? I think it’s something like these:
Undecidability is interesting, and complicated. Maybe this is a topic you’ve never heard of before and wish to learn more about. If so, I recommend Michael Sipser’s textbook “Introduction to the Theory of Computation”. It’s very accessible to computer scientists. If you’re not a computer scientist, the prerequisites are familiarity with mathematical proofs, and perhaps some general programming experience to develop an intuition around what is and isn’t reasonable or possible with algorithms.
The Turing Machine is an extremely prevalent model of computation. It’s so prevalent that, like the fish who don’t notice the water they’re swimming in, scientists might forget that there are other models of computation and forget the possibility that human minds and/or the laws of physics might be using those other models of computation!
If the expressiveness of human thought is better expressed by a different model of computation (inb4 the “human brains are finite, therefore DFA” pedantry from footnote 2), it would be groundbreaking for someone to prove that this is the case. It would also be groundbreaking (though probably not “surprising”) for someone to prove that actually human brains are just as expressive as Turing Machines.
Similarly, it’d be groundbreaking to prove that the laws of physics are best modeled by Turing machines or by something other than Turing machines.
Heck, it’d even be groundbreaking to prove that the best model of computation for the laws of physics is different than the best model of computation for human brains, even if we couldn’t actually identify what either of those models were—just that they differed. Proof that they actually use the same model would also be a valuable contribution, though probably seen as “obvious in hindsight”: Wouldn’t brains evolve to be suited to understand the laws of physics of the world that they need to survive within?
If pursuing those questions sounds intriguing to you, I recommend Roger Penrose’s book “The Emperor’s New Mind”. It’s very accessible to a general audience with a high-school education. If you were able to follow the algorithm I presented above for computing the square root of 2, you know enough math to understand Penrose’s book.
Footnote 1: A couple of minor technical loopholes to Godel’s proof. First, it only applies to systems sufficiently powerful to represent multiplication (which is a pretty low bar). If your mathematical system can only represent addition, for example, you can escape Godel’s incompleteness theorem. Second, you can also escape if your system is inconsistent: that is, your system can prove both true and false statements. Usually people aren’t interested in such systems.
Footnote 2: This is very hand-wavy, but is “sufficiently correct” for the purposes of this article. The first level of pedantry is to point out that Turing machines have infinite tape whereas all in-real-life computers have a finite amount of memory. The appropriate computational model for finite memory is the Deterministic Finite Automaton (DFA), and the Halting Problem does not apply to DFAs. The next level of pedantry is to point out that Turing machines only need unbounded storage, not necessarily infinite storage, and the way modern software development works, there’s more and more reliance on cloud storage, which provides the abstraction of unbounded storage (e.g. Amazon AWS is responsible for building new data centers as needed; your algorithm doesn’t need to handle that).
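To make the finite-memory point concrete, here’s a toy Python sketch (my own made-up model, not anything from a textbook’s exact construction): if a deterministic machine has only finitely many states, you can decide halting just by running it and watching for a repeated state.

```python
def halts(step, start, halting_states):
    """Decide halting for a deterministic machine with finitely many states."""
    seen = set()
    state = start
    while state not in halting_states:
        if state in seen:
            return False  # revisited a state, so the machine is stuck looping forever
        seen.add(state)
        state = step(state)
    return True

# Toy machine over states 0..9 that never reaches its halting state 7:
print(halts(lambda s: (s * 3 + 1) % 10, 0, {7}))  # False: it cycles 0 -> 1 -> 4 -> 3 -> 0 -> ...
```

By the pigeonhole principle the loop above always terminates, which is exactly why there’s no Halting Problem for finite-state machines.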
