
Hello! I've come up with an identity I'm quite pleased with.
I don't know if it's an existing result, but I developed it independently.

Came up with this one for fun; no idea if it's been posted before somewhere. Fair warning: I'm not amazing at math, I just got curious about this one and worked through it slowly. Mostly I wanted to share because I liked how a silly real-world setup ended up landing right on top of φ(n).
The setup
In an online poll, viewers vote either "Yes" or "No," and the result is displayed only as a percentage rounded to exactly two decimal places (e.g., 41.27%). The total number of votes is not shown. Assume that for any percentage displayed, the actual vote tally is the minimum possible whole number of votes that could have produced that exact percentage.
The question
Out of all possible displayed percentages (from 00.01% to 99.99% in steps of 0.01%), how many of them require the full 10,000 voters as a minimum? And which displayed percentages are those, intuitively?
>!Where coprimality comes in!<
>!A displayed percentage X.XX% corresponds to the fraction XXXX/10000. The minimum number of voters needed to produce that exact ratio is 10000 / gcd(XXXX, 10000). So the minimum hits its maximum (10,000) exactly when gcd(XXXX, 10000) = 1, i.e., when the numerator is coprime to 10,000.!<
>!Two numbers are coprime when they share no prime factors. Since 10,000 = 2⁴ × 5⁴, its only prime factors are 2 and 5. So XXXX is coprime to 10,000 if and only if XXXX is odd AND not divisible by 5. That's a clean shortcut: you don't have to actually factor the numerator at all, you just check that the last digit is 1, 3, 7, or 9.!<
>!Where Euler's totient comes in!<
>!The count of integers from 1 to n that are coprime to n is exactly Euler's totient function φ(n). For n = 10,000:!<
>!φ(10000) = 10000 × (1 − 1/2) × (1 − 1/5) = 10000 × 0.5 × 0.8 = 4,000!<
>!So exactly 4,000 displayed percentages require the full 10,000 voters as a minimum. That's 40% of all possible X.XX displays.!<
>!The pattern generalizes nicely. If you display to d decimal places, the max minimum is 10^(d+2), and the number of splits tied at that max is φ(10^(d+2)) = 0.4 × 10^(d+2). Always exactly 40%, because the prime factorization of any power of 10 only involves 2 and 5, and (1 − 1/2)(1 − 1/5) = 0.4.!<
>!The part I thought was nice!<
>!The reason the answer is always 40% (regardless of how many decimal places you display) is that 10 only has two prime factors. If we counted in some weird base where the denominator had more prime factors, the proportion of "hardest" splits would drop. The fact that our base-10 display gives such a clean answer is a small accident of the base we count in.!<
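In case anyone wants to double-check the counting, here's a quick brute-force script (my addition, not part of the argument above):

```python
from math import gcd

# For each displayed value XXXX/10000 (00.01% .. 99.99%), the minimum
# vote total that can produce it exactly is 10000 // gcd(XXXX, 10000),
# which hits 10000 exactly when gcd(XXXX, 10000) == 1.
hardest = [x for x in range(1, 10000) if gcd(x, 10000) == 1]

print(len(hardest))  # 4000, matching phi(10000)
# The last-digit shortcut: coprime to 10000 iff the last digit is 1, 3, 7 or 9.
print(all(x % 10 in (1, 3, 7, 9) for x in hardest))  # True
```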
Curious if anyone sees a slicker way to frame the general result, or if there's a related problem I should look at. Also happy to be told this is a well-known exercise and I just reinvented it.
I've rewritten this StackExchange posting from a few years ago, making the results more rigorous (although it's certainly not 100% rigorous yet). As explained there, the starting point is the idea that the sum of a series, whether convergent or divergent, should be taken to be the sum of the partial sum and the remainder term.
In the case of a convergent series, the remainder term tends to zero as the truncation point goes to infinity, which allows us to compute the sum of such a series without having to consider the remainder term. In the case of a divergent series, we do need to consider the remainder term.
While the remainder term looks like something completely arbitrary, I show in section 3 of the StackExchange posting that the remainder term for the rescaled summand is related to that of the original summand; see eq. (3.11). I derived this for the convergent case, but by invoking analytic continuation I argue that it should be generally valid.
If we're summing f(k) from k = p to infinity, we can also consider summing f(k/N). The remainder term for truncating where the argument of the summand equals x is denoted R(x, N); the index value at which we're truncating is then N x. We have invoked analytic continuation to extend this to any real or complex value of x.
Eq (3.11) then says that:
R(x, 1/N) = sum from k = 1 to N of R(x + k/N − 1)
where a remainder term written without the second argument is the original remainder term, i.e. the one with N = 1.
I then show in section 4 that this relation directly implies the value of the sum over all positive integers.
More powerful summation methods are derived in section 5 from (3.11) by considering the limit of N to infinity. One result is eq. (5.5) which gives the sum X of a divergent series in terms of an integral over the partial sum S(t):
X = constant term in the large-x expansion of the integral of S(t) dt from x − 1 to x
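As a sanity check on (5.5), here is the classic case f(k) = k, where the partial sum is S(m) = m(m+1)/2. This is my own sketch, assuming I've read the prescription correctly; it uses exact rationals so nothing is lost to rounding:

```python
from fractions import Fraction as F

# Partial sum of 1 + 2 + ... + m is S(m) = m(m+1)/2, with antiderivative
# A(t) = t**3/6 + t**2/4, so the window integral in (5.5) is A(x) - A(x-1).
def A(t):
    return t**3 / F(6) + t**2 / F(4)

x = F(10)  # any x works: the integral is exactly x**2/2 - 1/12
window = A(x) - A(x - 1)
# Subtract the growing part x**2/2; what remains is the constant term.
print(window - x**2 / F(2))  # -1/12, the familiar value assigned to 1 + 2 + 3 + ...
```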
And another result is eq. (5.6), which gives the prescription for how to correctly use regularization to compute the value of a divergent series. There we sum a summand f(k) that leads to a convergent summation with value X, where both depend on another parameter. By doing some manipulations involving that parameter, be it analytic continuation, series expansions, or something else, one formally gets to the desired divergent sum.
However, eq. (5.6) tells us that to get the correct value of the divergent sum, one must also take the integral of f(t) from x to infinity, apply to this integral whatever was done to the regularized series, extract the constant term of its large-x expansion, and subtract that from the result of the manipulations on the regularized sum.
In section 6 I give some examples of computations involving (5.5) and (5.6). I've also given more examples of how doing the regularization correctly resolves ambiguities in other postings: see e.g. this MathOverflow posting, and in this posting I show how it eliminates an ambiguity in choosing the branch of a logarithm.
This is a repost from r/math; since I don't use Reddit, I can't post there myself. I think this is the most appropriate sister thread.
So a few years ago I noticed a pattern in the differences of squared numbers. However, I failed to find anything written about it. It just popped into my head again, and I'm not conceited enough to think I invented 'new math' or whatever. So someone tell me this is a thing and I'm just ignorant.
The concept goes as follows: the difference between consecutive squares is always two more than the previous difference, at least when moving up through the integers. When moving down it decreases by 2. This excludes 0^2.
Exemplified as follows:
1^2   2^2   3^2   4^2   5^2
 1     4     9    16    25
    +3    +5    +7    +9
       +2    +2    +2
If what I put above is readable: the difference between 3^2 (9) and 4^2 (16) is 7, and the difference between 4^2 (16) and 5^2 (25) is 9. Then notice that the difference between 7 and 9 is 2, and that it is always 2 between adjacent pairs of squared results. This pattern goes on for as far as I checked.
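Here's the same table generated in a few lines of Python (my addition), which also hints at the why: (n+1)^2 − n^2 = 2n + 1, so consecutive gaps are the consecutive odd numbers, and odd numbers step by 2:

```python
squares = [n**2 for n in range(1, 8)]                  # 1, 4, 9, 16, 25, 36, 49
first = [b - a for a, b in zip(squares, squares[1:])]  # gaps between squares
second = [b - a for a, b in zip(first, first[1:])]     # gaps between the gaps
print(first)        # the odd numbers 3, 5, 7, ...
print(set(second))  # {2}: the second difference is always 2
```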
Title: Conjecture on repeated multiplication and eventual convergence to integers
I've been investigating a pattern regarding repeated multiplication and decimal convergence. After testing over 1,000 cases, I haven't found a counter-example yet and I’m looking for a mathematical explanation or a case where this fails.
The Process:
Take two decimal numbers, a and b. Multiply a by b, then take the result and multiply it by b again. This continues in a feedback loop (a_{n+1} = a_n × b).
The Rules:
To avoid trivial cases or infinite loops, I’ve set these constraints:
The first and last digits of both numbers cannot be 1.
The first and last digits of a must be different from the first and last digits of b to avoid symmetry.
The Observation:
No matter how many digits there are after the decimal point, the results eventually "explode" in scale and, in every test I've run, eventually become a perfect integer (whole number). Even with complex decimals like a = 0.455433015 and b = 5.314541154, the decimal tail eventually clears.
The Question:
Is there a known theorem that explains why these types of sequences would consistently hit an integer? Since the numbers grow exponentially, it becomes difficult to track with standard floating-point precision on a home setup. I’m interested to know if this has a specific name in number theory or if someone can identify a specific pair that would refuse to converge to a whole number.
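One thing worth trying (my suggestion, not part of the original observation): redo the loop in exact rational arithmetic with Python's fractions module instead of floats. For the example pair above, the reduced denominator never clears, which suggests the "perfect integers" come from floating-point precision running out rather than from a number-theoretic fact:

```python
from fractions import Fraction

# The example pair from the post, read as exact decimal fractions.
a = Fraction("0.455433015")
b = Fraction("5.314541154")

x = a
for n in range(60):
    x *= b
    # x = a * b**(n+1). An integer would need denominator 1, but the
    # reduced denominators of a and b contain only the primes 2 and 5,
    # while both reduced numerators are odd and not divisible by 5,
    # so nothing ever cancels and the denominator only grows.
    assert x.denominator > 1
print(x.denominator > 1)  # True even after 60 multiplications
```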
The supersignum unit g is defined as a bridge between hyperbolas and circles; it's a chaotic set, or unit, that I made. It starts with i, a concept everyone knows, where i²=-1. Then we suddenly get j, a hyperbolic number where j²=1. But g²=±1. Let's see their powers.
i²=-1
So
i³=-i
This may look weird, but it's part of the plan
i⁴=1
It's a full rotation!
Now j
j²=1
j³=j²×j=1×j=j
That was a fast loop
Now g
g=g
g²=±1
g³=±g (logically)
But whats g⁴?
±1×±1=1 (the signs match), so g⁴=1
And ±g×g=1 may look weird, but it's normal: ±g×g=±1×g×g=±1×±1, see! We get the same result
So lets find what set is g
g²={-1,1}
we take the square root and assume √1=j, since j²=1 is a solution
√g² take root
√g²={√-1,√1}
g={i,j}
Wow!
Extra: if we encounter an i along the i path and j says the same thing, for example (iπ)/2 and (jπ)/2, we can say (gπ)/2 in ln(g), because it happens in both.
Dicoilic numbers: this is where the fun begins. These are not supersignum numbers, but they have 3 dimensions.
A dicoilic number is a number a+bw+cs
|a+bw+cs|=√|a²+b²c|
You can do stuff with dicoilic numbers.
w and s are not regular units, and 0s≠0, to prevent epsilon=w
Let's start with a few things
i=w+s
j=w-s
epsilon=w±0s (+ and - are interchangeable)
Lets find the hypercomplex unit k
We know k=ij
That means (w+s)(w-s)
That means k=w²-s²
We can't exactly find w² and s² on their own, but there is some algebra we can do:
i²=(w+s)²=w²+s²+2sw
This system is commutative. That's what let us write the hypercomplex unit k above as
k=ij=(w+s)(w-s)=w²-s²
j²=w²+s²-2sw
This means j²+i²=0
And w²+s²=0
Whaaat
w²=-(s²)
And i²-j²=-2, but also i²-j²=4sw
That means 4sw=-2; divide by 2:
2sw=-1
Let's check if this is consistent
i²=s²+w²+2sw
s²+w² is 0
i²=2sw
2sw=-1, so i²=-1
CONSISTENT!
And check for j
j²=1
j²=w²+s²-2sw
j²=0-2sw
j²=0-(-1)
j²=1
LOL
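The consistency checks above can be automated. Here's a small Python sketch (my construction, not from the original): since i=w+s and j=w-s, we can solve w=(i+j)/2 and s=(i-j)/2 and compute in the commutative algebra spanned by 1, i, j, k with i²=-1, j²=1, k=ij:

```python
from fractions import Fraction as F

# Commutative algebra on basis {1, i, j, k} with i*i = -1, j*j = 1, k = i*j.
# Then i*k = i*i*j = -j, j*k = i*j*j = i, k*k = i*i*j*j = -1.
MUL = {
    ('1', '1'): ('1', 1), ('1', 'i'): ('i', 1), ('1', 'j'): ('j', 1),
    ('1', 'k'): ('k', 1), ('i', 'i'): ('1', -1), ('i', 'j'): ('k', 1),
    ('i', 'k'): ('j', -1), ('j', 'j'): ('1', 1), ('j', 'k'): ('i', 1),
    ('k', 'k'): ('1', -1),
}

def mul(p, q):
    # Multiply two elements given as {basis: coefficient} dicts.
    out = {}
    for bp, cp in p.items():
        for bq, cq in q.items():
            basis, sign = MUL[tuple(sorted((bp, bq)))]
            out[basis] = out.get(basis, F(0)) + cp * cq * sign
    return {b: c for b, c in out.items() if c}

w = {'i': F(1, 2), 'j': F(1, 2)}    # w = (i + j)/2, so i = w + s
s = {'i': F(1, 2), 'j': F(-1, 2)}   # s = (i - j)/2, so j = w - s

two_sw = mul({'1': F(2)}, mul(s, w))
print(two_sw)  # the coefficient of 1 is -1: 2sw = -1
w2_plus_s2 = {b: mul(w, w).get(b, F(0)) + mul(s, s).get(b, F(0)) for b in '1ijk'}
print({b: c for b, c in w2_plus_s2.items() if c})  # empty: w**2 + s**2 = 0
```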
In dicoilic numbers there is a concept called the dicoilic form: every hypercomplex and imaginary number can be expressed in a dicoilic form
i=w+s
j=w-s
k=w²-s²
Epsilon=w+0s
If we want to take the dicoilic form of, say, 1+i, we put the real part down first
1
Then we take the number
i=w+s
Then we get 1+w+s
For example, you can turn the perimeters of the inscribed triangle, square, pentagon and hexagon into an approximation of pi better than the perimeter of a 360-gon!
A monogon, a bigon, and non-integer values of n can also be used. p(n) = n sin(π/n).
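One way to make that concrete is Richardson-style polynomial extrapolation (my sketch; the original doesn't name a specific method). Since p(n) = n sin(π/n) = π − π³/(6n²) + O(1/n⁴) is nearly a polynomial in 1/n², extrapolating p(3), p(4), p(5), p(6) to n → ∞ cancels the leading error terms and lands closer to π than p(360):

```python
import math

def p(n):
    # Perimeter of a regular n-gon inscribed in a circle of diameter 1.
    return n * math.sin(math.pi / n)

ns = [3, 4, 5, 6]
xs = [1 / n**2 for n in ns]   # p(n) is (nearly) a polynomial in x = 1/n**2
ys = [p(n) for n in ns]

# Neville's algorithm: evaluate the interpolating polynomial at x = 0,
# i.e. extrapolate to n = infinity.
vals = ys[:]
for level in range(1, len(xs)):
    for i in range(len(xs) - level):
        vals[i] = (xs[i + level] * vals[i] - xs[i] * vals[i + 1]) \
                  / (xs[i + level] - xs[i])

print(abs(vals[0] - math.pi))  # error of the 4-point extrapolation
print(abs(p(360) - math.pi))   # error of the 360-gon, noticeably larger
```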