If you haven’t heard of Collatz sequences or Goodstein sequences, I’ll introduce them shortly. The point of this post is to convey the remarkable properties of both kinds of sequences while comparing the two: Collatz sequences can be redefined into a ‘partial’ base-bumping part and iterated division by 2, while Goodstein sequences, if you haven’t already seen them, combine a ‘complete’ base bump with a subtraction of 1.

Let’s start with Collatz sequences, since they are easier to describe [but harder to prove things about, the opposite of Goodstein sequences :-)].

Suppose you had a good calculator that could handle really large numbers, and I gave you any positive integer (> 0). If the number is odd, you multiply it by 3 and add 1 to get a new number; if it is even, you divide it by 2. Then you do the same to the new number: if it is odd, multiply by 3 and add 1; if it is even, divide by 2. You keep doing this to each new number forever. This is a Collatz sequence, or a sequence of hailstone numbers.

Mathematically, the Collatz function C: \mathbb{Z} \rightarrow \mathbb{Z} is a piecewise function:

    \[C(x) = \begin{cases} 3x+1 & \text{ if \emph{x} is odd} \\ \frac{x}{2} & \text{ if \emph{x} is even} \end{cases}\]



and C is used to define the sequence of hailstone numbers recursively:

\begin{cases} h_0 \in \mathbb{N}\setminus\{0\} \\ h_{n+1} = C(h_n) \text{ for } n \geq 0. \end{cases}
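If you want to play along, here is a minimal Python sketch of C and the hailstone sequence (the function names C and hailstone are just my own choices for this post, nothing standard):

    def C(x):
        """One Collatz step: 3x+1 if x is odd, x/2 if x is even."""
        return 3 * x + 1 if x % 2 == 1 else x // 2

    def hailstone(h0):
        """Hailstone sequence from h0 down to its first 1, inclusive.
        Note: the loop only terminates if the sequence reaches 1,
        which is exactly what the conjecture below asserts."""
        seq = [h0]
        while seq[-1] != 1:
            seq.append(C(seq[-1]))
        return seq

    print(len(hailstone(27)) - 1)   # 111 steps before first arriving at 1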

If you play enough with different initial values for h_0, you’ll notice that hailstone numbers can’t be tamed to your whims; they are volatile, chaotic, intractable, not followable. Despite the chaos, though, it looks like the Collatz sequence for any initial value eventually reaches 1, and if you try initial value 1 you’ll see that you get the cycle 1,4,2,1. So 1 cycles with itself, and once a sequence arrives at 1 its future behavior is known. It has been verified computationally that all Collatz sequences with initial value \leq 2^{68} eventually reach 1, this record being held as of late by David Bařina (thank you David). What about past 2^{68}? It is still an open question whether every positive integer, taken as an initial value, eventually gets to 1. This is the notorious Collatz Conjecture, which I’m probably supposed to warn you about: don’t work on it naively thinking it has a simple solution. It’s fun to play in the sandbox and try to gain a unique intuition about the problem, but as far as tools go, there aren’t any great ones at the moment to tackle it. Most likely some new tools will have to be invented, or maybe a computer scientist will solve it without even knowing they did.

\textbf{The Collatz Conjecture:}

Does every Collatz sequence eventually reach 1, for any positive integer as initial value?

The usual example you’ll most likely see to demonstrate the complexity of Collatz sequences is initial value 27; in fact, this is the example on Wikipedia:

27, 82, 41, 124, 62, 31, 94, 47, 142, 71, 214, 107, 322, 161, 484, 242, 121, 364, 182, 91, 274, 137, 412, 206, 103, 310, 155, 466,
233, 700, 350, 175, 526, 263, 790, 395, 1186, 593, 1780, 890, 445, 1336, 668, 334, 167, 502, 251, 754, 377, 1132, 566, 283, 850,
425, 1276, 638, 319, 958, 479, 1438, 719, 2158, 1079, 3238, 1619, 4858, 2429, 7288, 3644, 1822, 911, 2734, 1367, 4102, 2051, 6154,
3077, 9232, 4616, 2308, 1154, 577, 1732, 866, 433, 1300, 650, 325, 976, 488, 244, 122, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160,
80, 40, 20, 10, 5, 16, 8, 4, 2, 1

which requires 111 steps before first arriving at 1. Starting with 27, which is odd, we multiply by 3 and add 1 to get 3(27)+1 = 82, and then, because 82 is even, we divide by 2 to get 41, and the sequence continues as such. 27 is an interesting example because it takes a ‘really’ long time to get to 1 relative to the positive integers less than 27:

1 or 1,4,2,1 (1 is already at 1, so zero steps are needed to get to 1)
2,1
3,10,5,16,8,4,2,1
4,2,1
5,16,8,4,2,1
6,3,10,5,16,8,4,2,1
7,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
8,4,2,1
9,28,14,7,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
10,5,16,8,4,2,1
11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
12,6,3,10,5,16,8,4,2,1
13,40,20,10,5,16,8,4,2,1
14,7,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
15,46,23,70,35,106,53,160,80,40,20,10,5,16,8,4,2,1
16,8,4,2,1
17,52,26,13,40,20,10,5,16,8,4,2,1
18,9,28,14,7,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
19,58,29,88,44,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
20,10,5,16,8,4,2,1
21,64,32,16,8,4,2,1
22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
23,70,35,106,53,160,80,40,20,10,5,16,8,4,2,1
24,12,6,3,10,5,16,8,4,2,1
25,76,38,19,58,29,88,44,22,11,34,17,52,26,13,40,20,10,5,16,8,4,2,1
26,13,40,20,10,5,16,8,4,2,1

As you can see, the positive integers less than 27 require at most one line of text to get to 1, not 4 or 5.

Is it necessary to note every number in these sequences? For example, was it necessary to note 82 in between 27 and 41? For any odd input, because 3 and 1 are odd, 3(odd)+1 = (odd)\cdot(odd)+(odd) = (odd)+(odd) = (even), so we can always divide by 2 right after multiplying an odd input by 3 and adding 1. This describes Collatz sequences using the function T: \mathbb{N}\setminus \{0\} \rightarrow \mathbb{N}\setminus \{0\}, well known in the Collatz literature; in fact, some define the Collatz Conjecture using T:

    \[T(x) = \begin{cases} \frac{3x+1}{2} & \text{ if \emph{x} is odd} \\ \frac{x}{2} & \text{ if \emph{x} is even} \end{cases}\]



The sequence of hailstone numbers defined with T

\begin{cases} h_0 \in \mathbb{N}\setminus\{0\} \\ h_{n+1} = T(h_n) \text{ for } n \geq 0. \end{cases}

with initial value h_0=27 requires 70 steps before getting to 1:

27,41,62,31,47,71,107,161,242,121,182,91,137,206,103,155,233,350,175,263,395,593,890,445,668,334,167,251,377,
566,283,425,638,319,479,719,1079,1619,2429,3644,1822,911,1367,2051,3077,4616,2308,1154,577,866,433,650,
325,488,244,122,61,92,46,23,35,53,80,40,20,10,5,8,4,2,1.
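Here is the same kind of sketch for T, if you want to double-check the 70-step count (again, the names are mine):

    def T(x):
        """Accelerated Collatz step: (3x+1)/2 if x is odd, x/2 if x is even."""
        return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

    x, steps = 27, 0
    while x != 1:       # terminates only if the sequence reaches 1
        x = T(x)
        steps += 1
    print(steps)        # 70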

70 steps is less than 111, but still a lot, and T is not the only acceleration out there. Another acceleration commonly used in the literature is the Syracuse iterates, which consider only the odd elements of the sequence: once an even number is reached, all of its factors of 2 are removed to get back to an odd number. The Syracuse function is defined only on the odd positive integers, \text{Syr}: 2\mathbb{N}+1 \rightarrow 2\mathbb{N}+1. Its definition trades the ‘ugliness’ of a piecewise function for the ‘ugliness’ of a variable number of divisions by 2 at each step:

\text{Syr}(x) = \frac{3x+1}{2^{v_2(3x+1)}},

where v_2 is the 2-adic valuation of the numerator 3x+1, and 27 needs 41 steps to get to 1:

27,41,31,47,71,107,161,121,91,137,103,155,233,175,263,395,593,445,167,251,377,283,425,319,479,719,1079,1619,
2429,911,1367,2051,3077,577,433,325,61,23,35,53,5,1.
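A small sketch of the Syracuse function, with the 2-adic valuation computed by simply stripping factors of 2 (v2 and syr are my own names here):

    def v2(n):
        """2-adic valuation: how many times 2 divides n."""
        k = 0
        while n % 2 == 0:
            n //= 2
            k += 1
        return k

    def syr(x):
        """Syracuse step on an odd x: divide 3x+1 by its full power of 2."""
        y = 3 * x + 1
        return y // 2 ** v2(y)

    x, steps = 27, 0
    while x != 1:
        x = syr(x)
        steps += 1
    print(steps)        # 41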

I really liked how cleanly the Syracuse iterates work: if you somehow knew how many powers of 2 were in the even number you arrived at, you could just cancel them all out at once. I wondered if something similar held on the odd numbers, and I thought it would be an impressive feature of Collatz sequences if such a clean acceleration existed there too. At the time I was surprised to find, by induction with T, that 2^{\pi}\cdot \theta -1 \mapsto 3^{\pi}\cdot \theta -1 for \pi,\theta \in \mathbb{N} and \theta odd, which ends up being a generalization of the fact that T fixes -1, i.e. T(-1) = -1; but if you really give it some thought it’s not so surprising at all. I think this fact, or an even more general version of it, has been known since the year 2000 (Some Results on the Collatz Problem by Andrei, Kudlek & Niculescu). More intuitively, if you examine graphs of hailstone numbers for an ‘interesting’ initial value like 27, you see many ‘spikes’; since I used T, this acceleration on the odd numbers corresponds to the lime/green, increasing portions of the picture, while the red parts are the iterated divisions by 2, which correspond to the Syracuse acceleration:

yes, I did colour the ‘increasing’ parts green and the ‘decreasing’ parts red to resemble bitcoin prices or stock market prices. ©

In some sense, the ‘increasing’ and ‘decreasing’ parts of Collatz sequences defined with T are ‘predictable’ if you know the ‘factorization’ of the form 2^{\pi}\cdot\theta -1 or 2^{\pi}\cdot\theta for \pi,\theta \in \mathbb{N} and \theta odd. I agree that the time saved by knowing the factorization, since the mapping is then directly 2^{\pi}\cdot\theta -1 \mapsto 3^{\pi}\cdot\theta -1 or 2^{\pi}\cdot\theta \mapsto \theta, is paid for by the time taken to find the factorization; but nevertheless, if you knew it, you’d know which even number you’d arrive at (when increasing) or which odd number (when decreasing).
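If you’d rather not take the earlier induction on faith, here is a quick brute-force check, over small \pi and odd \theta, that \pi applications of T really do send 2^{\pi}\cdot\theta - 1 to 3^{\pi}\cdot\theta - 1 (just a sketch, nothing more):

    def T(x):
        return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

    # pi applications of T should send 2**pi * theta - 1 to 3**pi * theta - 1
    for pi in range(1, 12):
        for theta in range(1, 200, 2):      # odd theta only
            x = 2**pi * theta - 1
            for _ in range(pi):
                x = T(x)
            assert x == 3**pi * theta - 1
    print("checked for pi < 12 and odd theta < 200")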
As far as I know, there is no named acceleration in the literature that combines these odd and even accelerations together. Such a combination corresponds to the yellow lines connecting consecutive minima on the graph, or ‘cutting the spikes.’

The Montreal iterates cut the spikes on the graph
Brendon James Thomson ©

At first I was inspired by Economics in naming this combination of accelerations, I thought maybe ‘net-profits’ or the ‘sequence of net-profits,’ or maybe ‘archimedeans’ since we’re connecting consecutive minima, i.e. taking the shortest distance between these two points, but I settled on ‘Montreal iterates’ since after all they are a twist/add-on of the Syracuse iterates and if you look on a map, Syracuse and Montreal are a short road-trip and border crossing apart. 

In terms of Montreal iterates, 27 only needs 17, let’s say, ‘super-steps’ to get to 1:

27,31,121,91,103,175,445,167,283,319,911,577,433,325,61,23,5,1

Mathematically, the Montreal function \text{Mtl} : 2\mathbb{N}+1 \rightarrow 2\mathbb{N}+1 is also defined only on odd positive integer inputs, and

\text{Mtl} : 2^{\pi}\cdot\theta -1 \mapsto \frac{3^{\pi}\cdot\theta -1}{2^{v_2(3^{\pi}\cdot\theta -1)}} 

where \pi,\theta \in \mathbb{N} and \theta is odd and v_2(3^{\pi}\cdot\theta -1) is the 2-adic valuation of 3^{\pi}\cdot\theta -1.
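A Python sketch of Mtl (the decomposition x + 1 = 2^{\pi}\cdot\theta is found by stripping factors of 2 from x+1; mtl is just my name for it here):

    def mtl(x):
        """Montreal super-step on an odd x: write x = 2**p * t - 1 with t odd,
        jump to 3**p * t - 1, then divide out every remaining factor of 2."""
        y, p = x + 1, 0
        while y % 2 == 0:       # p = v2(x + 1), y becomes the odd part theta
            y //= 2
            p += 1
        z = 3**p * y - 1
        while z % 2 == 0:       # divide by 2**v2(z)
            z //= 2
        return z

    x, seq = 27, [27]
    while x != 1:
        x = mtl(x)
        seq.append(x)
    print(seq)      # prints the 18 terms 27, 31, 121, 91, ..., 23, 5, 1 (17 super-steps)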

This concludes the introduction of the Collatz sequences, the Collatz Conjecture, and the Montreal iterates. Now to introduce the Goodstein sequences.

I heard in a podcast once that Elon Musk was obsessed with exponential functions, and studying them, I wonder if Elon Musk knows about Goodstein sequences? Okay, maybe I’m throwing a little bit of shade at Elon, but it is a genuine curiosity. It would be pretty cool if he did know about them :-). 

To describe Goodstein sequences, the concept of ‘complete base-b notation’ or ‘hereditary base-b notation’ first needs to be introduced. Just as you can write a positive integer in base-b notation, for an integer base b \geq 2,

a_n\cdot b^n + a_{n-1}\cdot b^{n-1} + \cdots + a_1\cdot b + a_0 where a_n \not = 0 and 0 \leq a_i < b for 0 \leq i \leq n

‘complete base-b notation’ or ‘hereditary base-b notation’ does the same to the exponents, then to the exponents of the exponents, then to the exponents of the exponents of the exponents, and so on until the process can’t go any further. What does that really mean? Take a particular example; the typical one I’ve seen is 266. Let’s start in base 2:

266 = 256 + 10 = 256 + 8 + 2 = 2^8 + 2^3 + 2^1

now we notice that the exponents are 8, 3, 1, and these exponents in base-2 are 8 = 2^3, 3 = 2^1 + 1, 1=1, thus:

266 = 2^8 + 2^3 + 2^1 = 2^{2^3} + 2^{2+1}+ 2^1

and the exponent of the exponent 8=2^3 is 3=2+1, which terminates the process because we can’t reduce elements in the ‘power-towers’ further in base-2:

266 = 2^{2^3} + 2^{2+1}+ 2^1 = 2^{2^{2+1}} + 2^{2+1}+ 2^1.
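Here is a small recursive sketch that writes a number in complete base-b notation (the output formatting is my own choice; it prints the last term as 2 rather than 2^1, but it is the same expression):

    def hereditary(n, b):
        """Render n in complete (hereditary) base-b notation as a string."""
        if n < b:
            return str(n)
        terms, power = [], 0
        while n > 0:
            n, digit = divmod(n, b)
            if digit:
                if power == 0:
                    terms.append(str(digit))
                else:
                    exp = hereditary(power, b)          # recurse into the exponent
                    base_part = str(b) if power == 1 else f"{b}^({exp})"
                    terms.append(base_part if digit == 1 else f"{digit}*{base_part}")
            power += 1
        return " + ".join(reversed(terms))

    print(hereditary(266, 2))   # 2^(2^(2 + 1)) + 2^(2 + 1) + 2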

Goodstein sequences are defined with any natural number as initial value, written in complete base-2 notation (I know the name ‘hereditary base-b notation’ is more popular, but I’m more familiar with the former and have used it more, so I’m sticking with it). Then the base b is bumped up by 1, and 1 is subtracted from the resulting number. The new number is rewritten in complete base-(b+1) notation, and the process restarts: bump the base, subtract 1. If a Goodstein sequence reaches 0, it is defined to be 0 forever afterwards. Keeping with 266:

266 = 2^{2^{2+1}} + 2^{2+1}+ 2^1 \mapsto

3^{3^{3+1}} + 3^{3+1}+ 3^1\mathbf{-1} = 3^{3^{3+1}} + 3^{3+1}+ 2

The resulting number is then written in complete base-3 notation (in the above case it is already written in complete base-3 notation), and we do this again: bump the base by 1 and subtract 1 from the resulting number:

4^{4^{4+1}} + 4^{4+1}+ 2\mathbf{-1} = 4^{4^{4+1}} + 4^{4+1}+ 1

and so on, and so on:

5^{5^{5+1}} + 5^{5+1}+ 1\mathbf{-1} = 5^{5^{5+1}} + 5^{5+1}

6^{6^{6+1}} + 6^{6+1}\mathbf{-1} = 6^{6^{6+1}} + 5\cdot 6^6 + 5\cdot 6^5 + 5\cdot 6^4 + 5\cdot 6^3 + 5\cdot 6^2+ 5\cdot 6^1 + 5

7^{7^{7+1}} + 5\cdot 7^7 + 5\cdot 7^5 + 5\cdot 7^4 + 5\cdot 7^3 + 5\cdot 7^2+ 5\cdot 7^1 + 5 \mathbf{-1} =
7^{7^{7+1}} + 5\cdot 7^7 + 5\cdot 7^5 + 5\cdot 7^4 + 5\cdot 7^3 + 5\cdot 7^2+ 5\cdot 7^1 + 4
\vdots
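The base bump itself is easy to compute without ever writing the notation out: recursively replace every occurrence of the base b by b+1, exponents included. A sketch, with a deliberately tiny initial value since the terms explode otherwise:

    def bump(n, b, c):
        """Value of n after rewriting its complete base-b notation with base c."""
        if n < b:
            return n
        total, power = 0, 0
        while n > 0:
            n, digit = divmod(n, b)
            if digit:
                total += digit * c ** bump(power, b, c)   # bump the exponent too
            power += 1
        return total

    def goodstein(m):
        """Goodstein sequence of m, listed until it reaches 0 (small m only!)."""
        terms, b = [m], 2
        while terms[-1] != 0:
            terms.append(bump(terms[-1], b, b + 1) - 1)
            b += 1
        return terms

    print(goodstein(3))     # [3, 3, 3, 2, 1, 0], which has length 6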

Intuitively, one would think that the base bumping always outweighs the subtraction of 1 and that these sequences diverge. Counterintuitively, Goodstein’s Theorem proves that all Goodstein sequences converge to 0. The tiny subtraction of 1 does eventually overtake the base bumping! Beautiful! The proof has to do with ordinals, and it is just as beautiful as the result itself in its cleverness and simplicity. I think I have a rough sketch of the proof in my mind, but I would like to refine it some more before talking about it further. There are many resources available online; Stephen G. Simpson’s Unprovable Theorems and Fast-Growing Functions was the easiest for me to understand.

That defines Goodstein sequences (I’ve avoided the cumbersome notation usually used to describe them mathematically), together with Goodstein’s Theorem, the remarkable result that all of these sequences converge to 0.

Abstractly, Collatz sequences using Montreal iterates have an increasing part, 2^{\pi}\cdot \theta - 1 \mapsto 3^{\pi}\cdot \theta - 1, which is a ‘partial’ base-bump, and a decreasing part, iterated division by 2. Before I discovered Goodstein sequences for myself, I was wondering whether there were other examples of sequences that abstractly had ‘up-forces’ fighting with ‘down-forces’, with the down-forces eventually winning and the sequences eventually decreasing, because that is what we seem to observe with Collatz. Goodstein sequences can be an extreme example of this: if one has a fairly large power-tower x^{x^{x^{x}}}, then base-bumping is an extreme leap up, an ‘up-force’, while the ‘down-force’ for Goodstein sequences is humble and unassuming, simply take away 1. The reason I say can is that if a number written in base b is just b, then a base bump and a subtraction of 1 leave it unchanged, (b+1)-1 = b; but now we are in base b+1, so the decrease to 0 begins if we keep base-bumping (the base bump now does nothing, because we just have the constant b < b+1) and subtracting 1: b-1, b-2, ..., 0. I wondered specifically if there was a sequence with an increasing part in the form of a base-bump and a decreasing part similar to division by 2, and whether it was known to be eventually decreasing. Goodstein sequences are exactly that in terms of the increasing, base-bumping part; as for the decreasing part, I had to weaken my criterion, because division by 2 is stronger than subtraction of 1, but after all, division is just repeated subtraction, so we didn’t completely fall off track, and Goodstein’s Theorem proves that they are all eventually decreasing and converge to 0. Great! There was a catch though: Goodstein’s Theorem isn’t provable from the Peano axioms of arithmetic. At the time I was pondering the unprovability of Collatz, so I thought, even better! Maybe that might lend itself to why Collatz is so hard to solve…

Comparing Goodstein sequences and Collatz sequences heuristically: with Montreal iterates we only do a ‘partial’ base-bump, always from 2 to 3, 2^{\pi}\cdot \theta - 1 \mapsto 3^{\pi}\cdot \theta - 1 (\theta is unchanged), and then divide by 2 at least once. So the ‘up-forces’ of Collatz are weaker than the ‘up-forces’ of Goodstein, which has a complete, increasing base-bump, 2 to 3, 3 to 4, 4 to 5, and so on; and the ‘down-forces’ of Collatz, which can vary between super-steps since they are iterated divisions by 2, are stronger than the ‘down-force’ of Goodstein, the humble subtraction of 1. Using these heuristics, Collatz sequences ‘should’ converge to 1 before Goodstein sequences converge to 0. I think it is interesting to note that Goodstein sequences have a built-in stabilizer: once they reach 0 they are defined to be the constant function at 0 forever. Collatz sequences, on the other hand, have a built-in stabilizer that prevents them from decreasing further, namely that 1 cycles with itself. Interestingly enough, using Montreal iterates forces a Collatz sequence to be the constant function at 1 once it reaches 1, since 1\mapsto 1 under the Montreal iterates.

In a way, Goodstein sequences have everything we wish we had for Collatz sequences. I recently gave a talk at the CUMC, where I defined a stopping-time function in terms of the Montreal iterates called helper. I wrote the function in Python, and it ‘helps’ you find the stopping time of a number by having the computer do the heavy lifting, hence the name. If you graph helper, it looks like chaos contained under some undefinable logarithmic function; here is helper plotted for all odd positive integers less than 2^{10}:

Brendon James Thomson ©
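The code from the talk isn’t reproduced here, but a minimal reimplementation of the idea behind helper might look like this (reusing the mtl sketch from earlier, repeated so the snippet stands on its own):

    def mtl(x):
        y, p = x + 1, 0
        while y % 2 == 0:
            y //= 2
            p += 1
        z = 3**p * y - 1
        while z % 2 == 0:
            z //= 2
        return z

    def helper(x):
        """Montreal super-steps for odd x to first reach 1
        (would loop forever on a counterexample to the conjecture)."""
        steps = 0
        while x != 1:
            x = mtl(x)
            steps += 1
        return steps

    data = [(n, helper(n)) for n in range(1, 2**10, 2)]
    print(max(data, key=lambda pair: pair[1]))   # the odd n < 2**10 with the largest stopping time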

Goodstein sequences have a corresponding stopping-time function, the Goodstein function \mathcal{G}: \mathbb{N}\rightarrow \mathbb{N}, and unlike the helper function there is a formula for the stopping time (or length) of every Goodstein sequence. The Goodstein function \mathcal{G} is known to grow extremely rapidly, i.e. it takes Goodstein sequences a really long time to get to 0; for example \mathcal{G}(3)=6 while \mathcal{G}(4) = 3\cdot 2^{402653211}-2. It is also known how Goodstein sequences behave on their way to 0: they grow exponentially, then become flat/constant for a stretch, and then make their descent to 0 linearly (they reach some number of the form B+n, with n<B, written in complete base-B notation). Collatz sequences, on the other hand, are also called sequences of hailstone numbers, because the numbers in the sequences look like hailstones rising and falling in the clouds before eventually falling to the ground, and then bouncing on the ground forever. We don’t know what is going on with Collatz sequences while they appear to be on their way to 1.

The most interesting thing, I think, that Goodstein sequences have and Collatz sequences with Montreal iterates don’t is the lack of concern about finding the right ‘factorization’ at each step (Goodstein sequences require making sure you are in complete base-b notation before performing the base bump and the subtraction of 1; Collatz sequences with Montreal iterates require writing a number in the form 2^{\pi}\cdot \theta -1 for \pi,\theta \in \mathbb{N} and \theta odd, and then knowing the 2-adic valuation of 3^{\pi}\cdot \theta -1, which combined can be computationally expensive). Considering the example above (266): past base 7, do we really know what the complete base-8 notation of the next number of the sequence is? Sure, we could always compute it, but as the sequence advances, what about the complete base-9 notation at the next step, or the complete base-10 notation at the step after that, or base-11 after that, or base-12, where the situation is a little hairier because there is no constant term left to cushion the calculation? The proof of Goodstein’s Theorem bypasses all these local difficulties by creating a corresponding decreasing sequence of ordinals, where the two sequences reach 0 together. I won’t get into the details here. If you were to take note of every detail when computing Collatz sequences with Montreal iterates, it would quickly become overwhelming, especially if you were trying to find a chink in the armor to see why they are eventually decreasing; it is difficult enough to find out how many times you have to divide 3^{\pi}\cdot \theta -1 by 2, which makes it very difficult to guess what the next number is. Say I start with 27 = 2^2 \cdot 7 -1; then I know that 2^2 \cdot 7 -1\mapsto 3^2 \cdot 7 -1, and I know that 3^2 \cdot 7 -1 is even, but only once I perform the calculation 9\cdot 7 -1 = 62 = 2\cdot 31 do I know that it is divisible by 2 exactly once. (Sure, you could jot down 3^{\pi}\cdot \theta -1 for various values of \pi and odd positive integers \theta and try to find a pattern, but it is not so easy.)

That being said,

\textbf{At what point do Collatz sequences and Goodstein sequences converge?}

Especially given that Goodstein sequences take such a long time to get to 0, they should give Collatz sequences more than enough time to converge to 1, if they do; and if they don’t, then maybe insight into this question will illuminate the question of whether there are divergent Collatz sequences, or cycles other than 1,4,2,1 (or 1,2,1 using T).

Goodstein sequences could be redefined to become the constant function at 1 forever once they arrive at 1 (instead of continuing down to 0). Then, if all Collatz sequences converge to 1, a Goodstein sequence redefined this way and a Collatz sequence with Montreal iterates, started from the same initial value, eventually describe the same sequence: the constant function at 1. Goodstein sequences would act like a predictable cap over Collatz sequences (the Collatz sequence always under the Goodstein sequence, the two converging only when they are both constant, hopefully; but Goodstein sequences grow so wildly in the beginning that they should rapidly reach heights unattainable by Collatz and then linger there for a really long time, giving Collatz enough time to settle), and allow us to bypass the randomness of Collatz sequences, much as the proof of Goodstein’s Theorem uses ordinals and their properties to bypass the difficulties of Goodstein sequences themselves.
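To get a feel for the ‘cap’, here is a toy comparison from the same initial value 27, printing the Montreal iterate next to the size (in bits) of the corresponding Goodstein term; only a few steps are shown because the Goodstein terms become enormous almost immediately (this is an illustration of the picture above, not a proof of anything):

    def mtl(x):
        y, p = x + 1, 0
        while y % 2 == 0:
            y //= 2
            p += 1
        z = 3**p * y - 1
        while z % 2 == 0:
            z //= 2
        return z

    def bump(n, b, c):
        if n < b:
            return n
        total, power = 0, 0
        while n > 0:
            n, digit = divmod(n, b)
            if digit:
                total += digit * c ** bump(power, b, c)
            power += 1
        return total

    c_term, g_term, base = 27, 27, 2
    for step in range(1, 6):             # only a few steps; later terms are astronomically large
        c_term = mtl(c_term) if c_term != 1 else 1
        g_term, base = bump(g_term, base, base + 1) - 1, base + 1
        print(step, c_term, g_term.bit_length())   # step, Collatz term, bits in the Goodstein term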

Brendon James Thomson ©



