It is often said that primitive societies do not have any words for large numbers; their counting systems go something like this: ‘one, two, three, many’. Such a list clearly implies that such a society would neither have encountered nor devised a system of money, because in a society with money few people would have been happy with knowing only that they possessed ‘a lot’ of it; they would have wanted to know rather more precisely how much they had, and this requires that words for larger numbers be devised.
By the time of the Roman Empire, the concepts of ‘hundreds’ and of ‘thousands’ would have been understood by merchants, bankers and bureaucrats, because there was a practical need for numbers this big in both commercial and military affairs. The concept of ‘a million’, on the other hand, may have been imagined by a few philosophers and mathematicians, but it would have remained unnamed for centuries because without any practical applications it was in effect the new ‘…many’.
However, in the fourteenth century, Middle English acquired the word ‘million’ from French, although it probably took until the advent of modern science in the seventeenth century for the word to acquire the precise meaning it has today in both French and English. And, nowadays, most people will have an instinctive grasp of how many items make ‘a million’, of how big it is: a very large but not an unimaginable number.
If you can imagine ‘a million’, then you should also be able to imagine ‘a million million’. Your only difficulty would be in deciding what to call the new number. I always called it ‘a billion’, but if you read my blog regularly you will have noticed my occasional use of ‘a billion’ to mean one thousand million. I bow to necessity, because the American convention, in which one billion equals one thousand million, is now universally accepted. However, I deplore this kind of inflation, which forces you to run out of words for exceptionally large numbers twice as fast as if you’d stuck to the British way. On the other hand, I can see some merit in the system: regarding ‘a trillion’ as ‘a thousand thousand million’ goes some way towards explaining the apparent equanimity with which politicians in Washington view their country’s national debt of more than $15 trillion. But I do wonder whether any of them now ask themselves if it would have been wiser to stick with the British system, in which they could have claimed that the debt was ‘only’ $15 billion.
Here is a question to which few outside the specialized world of mathematics will know the answer: what is the largest named number? It certainly isn’t measured in the trillions, as you have probably already guessed. Quadrillions and subsequent words with similar prefixes are also very small in the context of this question, while ‘a zillion’, which I take to mean the largest number I can possibly imagine, isn’t a genuine number anyway.
So what of numbers with many more digits than those that have been discussed so far? In 1920, a nine-year-old New York boy wrote a ‘1’ on the blackboard, followed by one hundred zeroes, and called this ‘a googol’. Because the boy’s uncle, Edward Kasner, was a leading mathematician, the name stuck. However, we are still not in the numbers big league with the googol.
‘A googolplex’, a name devised by the aforementioned uncle, is 10 multiplied by itself a googol times: a ‘1’ followed by a googol zeroes. Now we really are up in the stratosphere. A googolplex is almost unimaginably large: it’s larger than the number of subatomic particles in the entire observable universe, yet it too pales into insignificance compared with the current record holder, which goes by the prosaic title ‘Graham’s number’.
Graham’s number, usually shortened to G, is so large that it is impossible to write using our conventional notation. Indeed, were the entire universe to be converted into paper and ink, there would not be enough of either to write it down. So how can such a gargantuan number be described? There follows an attempt at a non-technical summary.
We start with the convention that 3↑3 = 3 × 3 × 3. In other words, the up-arrow tells you that the number to its left is to be multiplied by itself the number of times indicated by the number to the right of the arrow. Adding a second arrow iterates this rule: 3↑↑3 = 3↑(3↑3) = 3↑27, which is 3 multiplied by itself 27 times, or 7,625,597,484,987. So what about 3↑↑↑3? This expression can be expanded to 3↑↑(3↑↑3), which is 3↑↑7,625,597,484,987: a tower of exponents, 3 to the 3 to the 3 and so on, stacked 7,625,597,484,987 levels high. No computer could ever evaluate it, because the result has vastly more digits than there are atoms in the observable universe.
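For readers who like to see such rules made concrete, the arrow convention above can be sketched in a few lines of Python. This is only an illustration under the definitions just given; the function name `up_arrow` is my own, and anything beyond the smallest inputs will, of course, never finish computing.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: a followed by n arrows followed by b.

    One arrow is ordinary exponentiation; each additional arrow
    iterates the operation one level below it.
    """
    if n == 1:
        return a ** b          # a↑b is a multiplied by itself b times
    if b == 0:
        return 1               # base case for the iteration
    # a↑↑…↑b  =  a↑↑…↑(a↑↑…↑(b−1)) with one fewer arrow outside
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3↑3  = 27
print(up_arrow(3, 2, 3))   # 3↑↑3 = 7,625,597,484,987
```

Asking it for 3↑↑↑3, let alone 3↑↑↑↑3, is hopeless: the recursion is correct, but the numbers outgrow the universe long before it returns.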
With the next expression in the series, 3↑↑↑↑3, we are contemplating a number of such mind-boggling size that it would not register in any meaningful way with the non-mathematical mind. Yet Graham’s number only starts here. Imagine the expression 3↑…↑3, in which the number of up-arrows is 3↑↑↑↑3. Once this value has been computed, it becomes the number of arrows in the next iteration of the calculation. This process is continued through 63 iterations before we arrive at Graham’s number.
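Curiously, although Graham’s number itself can never be written down, its final digits can be: it is ultimately a tower of 3s of colossal height, and the last few digits of such a tower stop changing once the tower is even modestly tall. The sketch below exploits that via Euler’s theorem; the function names are my own, and the approach assumes (correctly, for powers of 10) that every modulus in the chain is coprime to 3.

```python
def euler_phi(n):
    """Euler's totient of n by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_of_threes_mod(height, modulus):
    """Last digits of 3^3^…^3 (a tower of `height` threes), mod `modulus`.

    Since gcd(3, modulus) = 1 all the way down the totient chain for
    powers of 10, the exponent can be reduced mod phi(modulus) exactly.
    The chain reaches 1 quickly, so the recursion is shallow.
    """
    if modulus == 1:
        return 0
    if height == 1:
        return 3 % modulus
    exponent = tower_of_threes_mod(height - 1, euler_phi(modulus))
    return pow(3, exponent, modulus)

# Any sufficiently tall tower gives the same trailing digits as
# Graham's number itself, which famously ends in 7:
print(tower_of_threes_mod(100, 10))    # 7
print(tower_of_threes_mod(100, 100))   # 87
```

So we can say with certainty that a number too large for the universe ends in …87, which is a pleasingly concrete fact about a thoroughly unimaginable object.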
The first question to present itself is why there should be precisely 63 iterations. It does seem to be an arbitrary number. However, an even more obvious question is this: what is the use of such a number? It arose as an upper bound on the answer to a problem in Ramsey theory, a branch of combinatorics concerned with counting the possible configurations of certain complex structures. Well, yes, it probably is an upper limit to anything you can think of, but does that really make it useful? It’s like saying you have between none and ‘hundreds’ of parents, which is self-evidently silly. I’m just relieved that I’ve managed to keep infinity out of the discussion. Transfinite numbers, which record the number of items in infinite sets, must be larger than Graham’s number, but their values cannot be calculated. Thankfully. And the important numbers in everyday life can still be reckoned on the fingers of one hand. Thankfully.