How we discovered the speed limit of arithmetic – and broke it

New Scientist. Science news and long reads from expert journalists, covering developments in science, technology, health and the environment on the website and the magazine.

Did you hear the one about the man who invented chess and got himself executed? Legend has it that a man called Sessa, who lived in India long ago, developed the rules for the game and presented them to a king. The king was delighted and offered the man his pick of reward.

Sessa asked for a supposedly humble quantity of rice. Just one grain on the first square of a chessboard, twice that on the second square, double that again on the third and so on for all 64 squares. But he was too clever by half. Do the sums, and this was more rice than the global harvest over the past century. The king didn’t find it funny and had him put to death.
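The legend's arithmetic is easy to verify. Here is a minimal Python sketch (my own illustration, not part of the story) of the doubling on the chessboard:

```python
# Rice grains on Sessa's chessboard: 1 on the first square, then
# doubling, so the 64 squares contribute 2^0, 2^1, ..., 2^63 grains.
total = sum(2**square for square in range(64))

# The sum of this doubling series has the closed form 2^64 - 1.
assert total == 2**64 - 1
print(total)  # 18446744073709551615 -- about 18 quintillion grains
```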

This legend has always served as a cautionary tale about the power of exponential growth – but, frankly, that has nothing on what I’m about to tell you. Because it turns out exponential growth is a veritable laggard. Researchers have discovered mathematical processes that grow outrageously faster, quickly producing numbers so huge that Sessa’s chessboard of rice – 18 quintillion grains for those taking notes – barely registers.

These hyper-accelerating processes are more than just mind-bogglers. They also violate long-standing theoretical speed limits, which means that studying them plays a critical role in our understanding of the logical underpinnings of numbers themselves.

As a mathematician and writer, I find it amazing how often people through history reasoned with numbers far bigger than any practical purpose requires. At archaeological sites from ancient Babylon, we unearth tablets on which scholars meticulously calculated values as high as 9^11 × 12^39 (which works out to more than the number of atoms in planet Earth). Archimedes once computed the number of grains of sand it would take to fill the universe. And in Central America, the classical Maya contemplated timescales of octillions of years, vastly longer than the age of our universe.

These pioneers are partly why I wrote my book, Huge Numbers. But one of the most fascinating stories I came across was much more recent, concerning how fast-growing sequences of numbers impact my own field of study, mathematical logic, a discipline that analyses mathematical proofs. These are watertight demonstrations that something is true, constructed from a seamless series of logical deductions. They can be devilishly hard to devise, but once built, they are true forever. This makes them our most solid, enduring form of knowledge, objects of jealousy for scientists in other fields.

But here’s the thing: proofs must start somewhere. These initial assumptions are the axioms of mathematics, which we are forced to say are self-evidently true. In the late 19th century, logicians – including the Italian thinker Giuseppe Peano – began to contemplate a question of profound importance: what are the axioms on which our number system should be built? Peano’s answers focus on what’s called succession, the process that carries one number to the next: 0 to 1, 1 to 2, 2 to 3. His insights include the observation that if two numbers have the same successor, then they must have been the same number to begin with. Hardly one of history’s great revelations. Yet succession is how mathematical truths propagate up through the number system. Start from here, and you can build addition, subtraction, multiplication and division – Peano had got to the heart of arithmetic.
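To make succession concrete, here is a minimal Python sketch (my own illustration, not Peano's notation) in which every number is simply a stack of successors applied to zero, and addition is defined purely through succession:

```python
# Peano-style numbers: zero is an empty tuple, and each successor
# wraps the previous number in another tuple.
ZERO = ()

def succ(n):
    return (n,)

def add(m, n):
    # Defined only through succession: m + 0 = m, m + succ(n) = succ(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):
    # Count how many successors were applied, for display purposes.
    count = 0
    while n != ZERO:
        n, count = n[0], count + 1
    return count

three = succ(succ(succ(ZERO)))
two = succ(succ(ZERO))
print(to_int(add(three, two)))  # 5
```

Notice that the code never mentions "plus" as a primitive: addition is built entirely out of the successor operation, just as Peano envisaged.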

[Image: squirrel monkeys climb on an abacus. Arithmetic involves some of the simplest mathematics out there: addition, subtraction, multiplication and division. Leon Neal/Getty Images]

But soon there was a cloud in the sky. In 1931, Kurt Gödel unveiled his famous incompleteness theorem, a proof showing that humans would never be able to write down an exhaustive rulebook for arithmetic. This means that Peano’s rulebook (and any conceivable replacement) can’t be fully comprehensive; there are true facts about numbers that cannot be derived from it. For logicians, this was a profound shock. Yet in the years that followed, they found Peano’s rulebook generally held firm. As Gödel had guaranteed, it did break down in places, but only in areas accessed through arcane logical trickery rather than ordinary mathematical research.

One seldom-noticed consequence of Peano’s rulebook is that it imposes a speed limit on the mathematical processes we can handle. I say seldom-noticed because, for most of mathematical history, this limit lay far beyond anything even professional mathematicians needed to worry about. But that has recently started to change.

The Goodstein metasequence

The first hint of the speedometer creeping up was a sequence discovered by Reuben Goodstein in the 1940s. Pick a starting number. Let’s say 19. Write this in base 2 to get 2^4 + 2 + 1. Before we get under way, we also need to rewrite the indices in base 2, so that the only visible digits are 1s and 2s: 2^(2^2) + 2 + 1. We are now ready for Goodstein’s two-step process. Step one, replace every 2 with a 3. Step two, subtract 1. This gives us: 3^(3^3) + 3. Then we go to the next entry in the sequence, this time replacing every 3 with a 4 and subtracting 1.

This is undeniably a fast-growing process: the first three entries are 19, over 7 trillion and then 4^(4^4) + 3, a number with more than 150 digits. But Goodstein’s surprising discovery in 1944 was that if you continue repeating the two-step process long enough, the sequence of numbers eventually stabilises, decreases and returns to zero. We can see this if we begin from a smaller number, such as 2. This sequence runs: 2, 2, 1, 0. If we begin on 3, it takes six moves to hit zero. What about starting on 4? Goodstein’s finding still holds, but it now takes more than 10^100,000,000 moves to get back to zero.
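For those who want to experiment, the two-step process can be sketched in a few lines of Python (my own implementation of the procedure described above; starting values above 3 produce astronomically large numbers almost immediately, so treat it as a toy):

```python
def bump_base(n, b):
    """Write n in hereditary base b (exponents rewritten recursively,
    so only digits below b appear), then replace every b with b + 1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump_base(power, b)
        power += 1
    return result

def goodstein(start, steps):
    """The first `steps` moves of the Goodstein sequence from `start`."""
    seq, n, base = [start], start, 2
    for _ in range(steps):
        if n == 0:
            break
        n = bump_base(n, base) - 1  # bump the base, then subtract 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0] -- back to zero
```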

What we have just been describing is the Goodstein metasequence, the sequence of lengths of successive Goodstein sequences. This turns out to be a mathematical process that breaks the usual arithmetical speed limit imposed by Peano’s rules. Just its sixth entry (the length of the Goodstein sequence starting from 6) is in the realm of numbers that even the huge-number explorer Donald Knuth described as “beyond comprehension”. Let’s imagine we tried to describe it using a tower of exponentials, along the lines of 10^10^10, but with the tower of 10s rising up and up. That tower would have to be so tall that its height could be described only by another tower, whose height is given by another tower, and so on, repeating this formula for longer than the lifetime of the universe. And all of this, remember, for just the sixth entry of the Goodstein metasequence.

Reverse mathematics

Normally, mathematicians begin with a conjecture – that is, a mathematical statement they believe to be true – and attempt to prove it. But in 1982, Jeff Paris and Laurie Kirby asked the reverse question about Goodstein’s work. They took his proof that the sequence would always return to zero and asked what axioms were required. The answer, it emerged, was that Peano’s axioms weren’t enough. This was big news. Goodstein’s theorem was of wide interest within mathematics, and yet here was the first concrete example of the incompleteness Gödel had warned of, with no logical contrivances in sight.

This was a dramatic early discovery in a subject that later became known as reverse mathematics. In the hands of trailblazing logician Harvey Friedman, it turned into a fully fledged research programme. None of its results was more spectacular than that concerning the graph minor theorem.

Proved over the course of 20 technical papers by Neil Robertson and Paul Seymour between 1983 and 2004, the graph minor theorem is a milestone of modern mathematics, transforming the study of abstract networks known as graphs. A graph consists of a finite number of nodes, some of which are joined by lines called edges. Structures like this arise everywhere from molecular chemistry to the world wide web, touching upon just about every branch of science.

A “minor” in this context is a smaller graph obtainable from the larger one by a combination of simple actions such as deleting edges. Minors are to their parent graphs what concrete and steel are to a skyscraper – the mathematical skeleton. They have been vital to our understanding of networks since the 1930s.

[Image: electrical pylons at sunset. Graph theory has been useful for modelling complex networks of all kinds, including energy infrastructure. Anton Petrus/Getty Images]

Robertson and Seymour’s landmark theorem showed that if you keep drawing a sequence of graphs, then, whatever you do – whether proceeding at random or according to some careful recipe – sooner or later, you will produce a pair where one is hiding inside the other as a minor. In other words, it is impossible to produce an infinite collection of finite graphs where none is a minor of any other. This is far from obvious, and its consequences have been profound. In fact, the proof birthed a whole new field of mathematics called structural graph theory, which has, in turn, generated a powerful toolkit for gauging the complexity of all sorts of networks, from transport networks to electricity grids.

Before we go on, it is worth saying that, despite possible appearances, we haven’t ventured beyond the bounds of arithmetic here. Graphs might involve diagrams, but they are still eminently describable by simple numbers and arithmetic. In that context, the graph minor theorem also had profound implications for the foundations of maths. When Robertson and Seymour teamed up with the inventor of reverse mathematics, Friedman, they established that any proof of the graph minor theorem necessarily means venturing beyond the standard axioms of arithmetic – and this time, the required laws don’t sit just slightly outside the usual gates, as Goodstein’s metasequence does, but lie deep in the logical wilderness. From this insight, in 2006, Friedman discovered one of the fastest-growing sequences known in mainstream mathematics so far (see “The mighty subcubic graph sequence”).

Understanding axioms

To get our heads around just how starkly the graph minor theorem breaks the speed limit, we need to delve a little deeper into the systems of axioms that have been developed since Peano’s day. Roughly speaking, there are five levels of axioms of increasing complexity, with the modern version of Peano’s rulebook sitting at level three. The two rulebooks above this are known as arithmetical transfinite recursion and Π^1_1-comprehension. While Peano’s logic is based purely on the properties of numbers themselves, the more advanced rules revolve around “sets”.

A set is simply a collection of numbers. You might have the set of all numbers that end in a 3, for example, or the set of prime numbers. But beyond easily describable sets like these lies a morass of infinite sets of numbers that defy any simple characterisation. Sets are an abstract idea that has long brought power to mathematics. By basing the rules of arithmetic on them – especially sets that are individually random or hard to access – the logical power of the rulebook is increased, and higher speed limits are allowed.

Now, with the exception of Goodstein’s theorem, Peano’s rules were all that were required for ordinary arithmetic. The higher levels of rules were developed as the foundation for much more sophisticated branches of maths. Still, Friedman, Robertson and Seymour showed that the graph minor theorem – based as it is in arithmetic – blows past all five of these levels. In 2019, my colleague Michael Rathjen at the University of Leeds, UK, and his student Martin Krombholz investigated just how far the usual rulebooks need to be expanded to accommodate the graph minor theorem, and the sky-scraping sequences it generates. The answer, roughly, is that it takes us two levels further on – far beyond anything required by any mainstream mathematical activity. This is a rulebook for mathematical rocket ships.

What I find fascinating is that this logical depth springs from collections of dots and lines, easily describable by ordinary whole numbers. Intuition says it should be well within Peano’s remit. In this tricky logical area, though, our intuition is as shaky as that of the mythical king who failed to spot Sessa’s exponential growth.

For me, mathematics has always been about exploring the boundary between simplicity and complexity. Our journals are full of extremely complicated mathematical objects, but, unexpectedly, almost all of them can ultimately be boiled down to Peano’s simple rulebook. Truly irreducible complexity, mathematical proofs and structures that require elaborate infinite sets at an essential level, is far rarer. That such an important field of research as structural graph theory should turn out to be an instance of this profound complexity is, in my view, one of the most striking developments in mathematical logic since Gödel’s theorem.

One of the fastest-growing sequences of numbers ever discovered involves the mathematical concept of graphs. These consist of a number of dots (called nodes) and lines (called edges). As its name suggests, the subcubic graph sequence considers only subcubic graphs, meaning those whose nodes all have no more than three “exits” along lines.

We are interested in cases where one subcubic graph is hiding inside a larger one as a minor, in that it can be obtained by a combination of simple moves: deleting nodes (with all their accompanying edges), deleting individual edges and merging two nodes by shrinking away their connecting edge.
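Those three moves are easy to mimic in code. Here is a minimal Python sketch (my own representation, with nodes as integers and edges as frozensets of two nodes):

```python
# The three "minor" moves, acting on (nodes, edges) pairs.
def delete_node(nodes, edges, v):
    # Remove node v along with every edge touching it.
    return nodes - {v}, {e for e in edges if v not in e}

def delete_edge(nodes, edges, e):
    # Remove a single edge, leaving the nodes in place.
    return nodes, edges - {e}

def contract_edge(nodes, edges, e):
    # Shrink away edge e, merging its endpoint v into u.
    u, v = sorted(e)
    new_edges = set()
    for a, b in (tuple(sorted(x)) for x in edges):
        if {a, b} == {u, v}:
            continue                       # the contracted edge vanishes
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:                         # drop any self-loop created
            new_edges.add(frozenset({a, b}))
    return nodes - {v}, new_edges

# Contracting one side of a triangle leaves a single edge.
triangle = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
nodes, edges = contract_edge({1, 2, 3}, triangle, frozenset({2, 3}))
```

Any graph reachable from a larger one by repeatedly applying these moves is one of its minors.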

Now, here’s the question. How long a list of graphs can we come up with under the following rules: the first can have at most one node, the second at most two, the third at most three, and so on, and no graph is hiding as a minor inside any later one? Puzzle a while, and you will find the best you can do is six. The final graph in this sequence is the “empty graph”, with no nodes or edges. We refer to the length of this sequence as SCG(0), and we can therefore say SCG(0) = 6.


But now, we adjust the question by granting ourselves a head start. This time, the first graph can contain at most two nodes, the second at most three, the third at most four, and so on. All the other rules remain the same. What is the longest list we can build now? We might get started like this:

[Image: the first few graphs of such a list]

So, how long is this list, which we refer to as SCG(1)? What about SCG(2)? This progression is the subcubic graph sequence. We know from previous mathematical work that all these puzzles have finite solutions, but the values are profoundly huge, defeating almost any attempt to describe them. Even SCG(1) outweighs the colossal 19th entry of the Goodstein metasequence, which grows so fast that it outpaces the normal speed limit of arithmetic (see main feature).
