Title: Conjecture on repeated multiplication and eventual convergence to integers
I've been investigating a pattern involving repeated multiplication and decimal convergence. After testing over 1,000 cases I have not found a counterexample, and I'm looking for either a mathematical explanation or a case where this fails.
The Process:
Take two decimal numbers, a and b. Multiply a by b, then multiply the result by b again, and keep going in a feedback loop (a_{n+1} = a_n \times b, so the n-th term is a \times b^{n-1}).
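For concreteness, here is a minimal sketch of the loop in Python (using Decimal with the precision set high enough that the few products shown stay exact; the values are the same pair I mention below, purely as an illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # plenty of significant digits, so these few products are exact

def feedback_loop(a, b, steps):
    """Yield a, a*b, a*b^2, ... by repeatedly multiplying the running value by b."""
    for _ in range(steps):
        yield a
        a = a * b

# the same pair used as an example further down
for term in feedback_loop(Decimal("0.455433015"), Decimal("5.314541154"), 6):
    print(term)
```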
The Rules:
To avoid trivial cases or infinite loops, I’ve set these constraints:
The first and last digits of both numbers cannot be 1.
The first and last digits of a must be different from the first and last digits of b to avoid symmetry.
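In code, my reading of these two rules looks roughly like this (a sketch only; passing the numbers as decimal strings, ignoring a leading 0 before the point, and comparing first digit with first digit and last with last are choices I made):

```python
def digits_ok(x, y):
    """Rough encoding of the rules above, with x and y given as decimal strings:
    neither number may start or end with the digit 1, and x's first/last digits
    must differ from y's first/last digits respectively."""
    def first_last(s):
        digits = s.replace(".", "").lstrip("0") or "0"  # ignore a leading 0 before the point
        return digits[0], digits[-1]
    xf, xl = first_last(x)
    yf, yl = first_last(y)
    if "1" in (xf, xl, yf, yl):
        return False
    return xf != yf and xl != yl

print(digits_ok("0.455433015", "5.314541154"))  # True under this reading
```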
The Observation:
No matter how many digits there are after the decimal point, the results "explode" in scale and, in every test I've run, eventually become a perfect integer (whole number). Even with messy decimals like a = 0.455433015 and b = 5.314541154, the decimal tail clears after enough iterations.
The Question:
Is there a known theorem that explains why sequences of this kind would consistently hit an integer? Since the terms grow exponentially, they quickly become hard to track with standard floating-point precision on a home setup. I'd like to know whether this behaviour has a specific name in number theory, or whether someone can exhibit a specific pair (a, b) that never reaches a whole number.
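Since floating point is exactly where I lose track, here is a minimal sketch of how the test can be run with exact rational arithmetic instead (Python's fractions.Fraction; the function name and the step limit are placeholders I made up for illustration):

```python
from fractions import Fraction

def first_integer_term(a_str, b_str, max_steps=200):
    """Run a_{n+1} = a_n * b with exact rational arithmetic and return the
    first n (and the value) at which the term is a whole number, else None."""
    a = Fraction(a_str)          # Fraction parses decimal strings exactly
    b = Fraction(b_str)
    for n in range(1, max_steps + 1):
        a *= b
        if a.denominator == 1:   # exact integrality test, no rounding involved
            return n, a
    return None

# the pair quoted above, purely as an illustration of the check
print(first_integer_term("0.455433015", "5.314541154"))
```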