Interpolation of Sines by Successive Approximation: Ancient Indian Mathematics You Probably Never Heard Of
Most people associate trigonometry with Greece — Hipparchus, Ptolemy, the chord tables. But there's a parallel and equally sophisticated tradition of trigonometric computation that developed in India, largely unknown outside specialist circles. What makes this tradition particularly fascinating is not just that Indian astronomers built sine tables, but that they developed genuinely clever iterative numerical methods for interpolating between tabular values — methods whose convergence can be rigorously proven using modern analysis. We're talking about work done in the 12th and 17th centuries that anticipates ideas central to numerical analysis today.
Let me walk you through the whole thing from scratch.
Background: What Is a "Rsine" and Why Did Indians Care?
Indian astronomers worked with a quantity called jyā, which is what we'd call R·sin(θ) — the sine scaled by a radius R rather than normalized to 1. The word jyā literally means the string of a bow, and you can see why: draw a circular arc like a bow and the chord across it is the string; the half-chord (ardha-jyā, later shortened to just jyā) is the sine. The cosine counterpart was called koṭijyā, given by R·cos(θ). There was also a versed sine called utkramajyā, defined as R − R·cos(θ).
Working with R·sin(θ) instead of sin(θ) kept everything in integers or near-integers when R was chosen appropriately (like R = 120 or R = 3438), which mattered enormously before decimal notation was standard. These weren't just abstract mathematical curiosities — they were essential for astronomical calculations: predicting planetary positions, computing eclipses, determining calendar dates. Accuracy in these computations had religious and administrative significance.
So Indian mathematicians constructed tables of jyā values at regular angular intervals — typically every 10 degrees or every 3.75 degrees — and then faced a practical problem: what do you do when the angle you actually need falls between two tabular entries? This is the interpolation problem, and it's where things get interesting.
The Basic Setup and Notation
Let's establish the framework carefully, because the notation matters for understanding the methods.
Let h be the tabular arc bit in degrees. Bhāskarācārya (12th century) used h = 10°. The tabular arcs are then kh for k = 0, 1, 2, 3, ..., l, where l = 90°/h.
The tabular Rsines are denoted J_k = R·sin(kh) for k = 0, 1, 2, ..., l.
The tabular Rsine differences are:
ΔJ_k = J_{k+1} − J_k = R·sin((k+1)h) − R·sin(kh) for k = 0, 1, 2, ..., l−1
Now suppose you want the Rsine at some angle α that falls between two tabular points. Specifically, α lies in the interval [qh, (q+1)h] for some integer q. Write:
α = qh + ρ, where 0 < ρ < h
Here q is the quotient and ρ is the remainder when α is divided by h. The fractional penetration into the interval is θ = ρ/h, so 0 < θ < 1.
The tabular Rsine just before α is J_q = R·sin(qh), and the one just after is J_{q+1} = R·sin((q+1)h).
The tabular difference just before the interval (the "foregoing" or gata difference) is:
d_b = ΔJ_{q−1} = J_q − J_{q−1}
The tabular difference just after the interval (the "ensuing" or bhogya difference) is:
d_a = ΔJ_q = J_{q+1} − J_q
The naïve linear interpolation would say: R·sin(α) ≈ J_q + (ρ/h)·d_a. But this is crude. The Rsine differences are not constant — they're decreasing (because the second derivative of sine is negative). So using d_a directly underestimates the true difference applicable in the interior, which lies somewhere between d_a and d_b. Both Bhāskara and, five centuries later, Munīśvara recognized this problem and attacked it in their own ways.
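To make the setup concrete, here is a minimal Python sketch of the naive scheme (the function name and the use of exact rather than rounded tabular values are my own choices, for illustration):

```python
import math

R, h = 120, 10  # radius and tabular arc bit in degrees, as in Bhaskara's table

# Tabular Rsines J_k = R*sin(kh), computed exactly here for illustration
J = [R * math.sin(math.radians(k * h)) for k in range(10)]  # k = 0..9

def rsine_linear(alpha):
    """Naive linear interpolation: R*sin(alpha) ~ J_q + (rho/h) * d_a."""
    q, rho = divmod(alpha, h)
    q = int(q)
    d_a = J[q + 1] - J[q]   # ensuing tabular difference
    return J[q] + (rho / h) * d_a

print(rsine_linear(24))   # ~48.63, noticeably below the true 120*sin(24 deg) ~ 48.81
```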
Bhāskara's Rule: The First Correction
Bhāskarācārya's rule for refining the functional difference is stated in a Sanskrit verse (śloka) that translates roughly as:
"Divide the product of the residual arc's degrees (śeṣāṃśa) and the difference between the foregoing and ensuing tabular differences by twenty (nakha). This subtracted from or added to half the sum of foregoing and ensuing tabular differences will be the refined (sphuṭam) ensuing difference for finding Rsine or versed Rsine (kramajyā or utkramajyā) respectively."
In modern notation, the refined difference d applicable in the interior of the tabular interval is:
d = (1/2)(d_b + d_a) − (d_b − d_a)·(ρ/20) for kramajyā (Rsine)
d = (1/2)(d_b + d_a) + (d_b − d_a)·(ρ/20) for utkramajyā (versed Rsine)
The "20" appearing here comes from the arc bit h = 10° used by Bhāskara: the factor is ρ/h (with ρ measured in degrees when h = 10°), and the 20 is actually 2h = 2×10 = 20. More precisely, the formula is:
d = (1/2)(d_b + d_a) − (d_b − d_a)·(ρ/2h)
Once this refined difference d is found, the desired Rsine is:
R·sin(α) = J_q + (ρ/h)·d
This is genuinely a second-order interpolation formula. Bhāskara is not just taking the average of the two neighboring differences (which would give the midpoint difference); he's also correcting that average by an amount proportional to how far into the interval the argument lies, scaled by how rapidly the differences are changing. This is essentially the Newton–Stirling central-difference interpolation formula carried to the second order, arrived at by geometric reasoning rather than calculus.
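In code, Bhāskara's rule is a one-line change from the naive scheme. A sketch under the same assumptions as above (exact tabular values, my own names; it needs q ≥ 1 so a foregoing difference exists):

```python
import math

R, h = 120, 10
J = [R * math.sin(math.radians(k * h)) for k in range(10)]

def rsine_bhaskara(alpha):
    """Second-order interpolation via Bhaskara's refined difference (needs q >= 1)."""
    q, rho = divmod(alpha, h)
    q = int(q)
    d_b = J[q] - J[q - 1]        # foregoing (gata) difference
    d_a = J[q + 1] - J[q]        # ensuing (bhogya) difference
    # Mean of the flanking differences, corrected in proportion to rho/(2h)
    d = 0.5 * (d_b + d_a) - (d_b - d_a) * rho / (2 * h)
    return J[q] + (rho / h) * d

print(rsine_bhaskara(24))        # ~48.775, vs true 120*sin(24 deg) ~ 48.808
```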
The roots of Bhāskara's approach go back to 7th-century astronomer Brahmagupta, who had given a similar correction rule in his Khaṇḍakhādyaka.
Verifying Bhāskara's Formula in Modern Terms
Let's actually verify this using modern Taylor expansions to see why the formula works and what approximation it corresponds to.
Let H = πh/180 (the arc bit in radians). Then:
cos(h degrees) = cos(H radians) = 1 − H²/2! + H⁴/4! − ...
Therefore: 1 − cos(h degrees) = H²/2! − H⁴/4! + ... ≈ H²/2, up to order H²
Similarly: sin(h degrees) = H − H³/3! + H⁵/5! − ... ≈ H, up to order H²
Now compute ΔJ_{q−1} = J_q − J_{q−1} = R·sin(qh) − R·sin((q−1)h)
Expanding sin((q−1)h) = sin(qh)·cos(h) − cos(qh)·sin(h):
= R[sin(qh) − sin(qh)·cos(h) + cos(qh)·sin(h)]
= R(1 − cos(h))·sin(qh) + R·sin(h)·cos(qh)
Substituting the approximations:
ΔJ_{q−1} ≈ R·(H²/2)·sin(qh) + R·H·cos(qh)
In the same manner:
ΔJ_q ≈ −R·(H²/2)·sin(qh) + R·H·cos(qh)
Therefore:
(ΔJ_{q−1} + ΔJ_q)/2 ≈ R·H·cos(qh)
and
ΔJ_{q−1} − ΔJ_q ≈ R·H²·sin(qh)
Now let P = πρ/180 (the residual arc in radians). Then:
sin(ρ degrees) ≈ P (up to order P²)
cos(ρ degrees) ≈ 1 − P²/2 (up to order P²)
The desired Rsine:
R·sin(α) = R·sin(qh + ρ)
= R·sin(qh)·cos(ρ) + R·cos(qh)·sin(ρ)
≈ (1 − P²/2)·R·sin(qh) + P·(R·cos(qh))
= R·sin(qh) + P·(R·cos(qh)) − (P²/2)·R·sin(qh)
= R·sin(qh) + (P/H)·(R·H·cos(qh)) − (P²/H²)·(R·H²·sin(qh)/2)
Now substituting the expressions we derived:
= J_q + (ρ/h)·[(ΔJ_{q−1} + ΔJ_q)/2] − (ρ²/h²)·[(ΔJ_{q−1} − ΔJ_q)/2]
= J_q + (ρ/h)·{(ΔJ_{q−1} + ΔJ_q)/2 − (ρ/h)·(ΔJ_{q−1} − ΔJ_q)/2}
= J_q + (ρ/h)·d
where d = (1/2)(d_b + d_a) − (ρ/h)·(d_b − d_a)/2 = (1/2)(d_b + d_a) − (d_b − d_a)·ρ/(2h)
This is exactly Bhāskara's formula. So Bhāskara's rule corresponds precisely to second-order (quadratic) interpolation, capturing the curvature of the sine function through the difference of neighboring tabular differences. Remarkable for the 12th century.
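A quick numerical check of this identification (a sketch with my own variable names): compare Bhāskara's formula, computed from exact tabular differences, against the second-order Taylor expansion about qh. With h = 10° they differ only by the O(H³) terms the derivation discarded:

```python
import math

R, h, q, rho = 120, 10, 2, 4          # the interval [20, 30] deg, alpha = 24 deg
H, P = math.radians(h), math.radians(rho)
x = math.radians(q * h)

J = lambda k: R * math.sin(k * H)      # exact tabular Rsines
d_b, d_a = J(q) - J(q - 1), J(q + 1) - J(q)

# Bhaskara's formula with exact tabular differences
bhaskara = J(q) + (rho / h) * (0.5 * (d_b + d_a) - (d_b - d_a) * rho / (2 * h))

# Second-order Taylor expansion of R*sin about qh
taylor = R * math.sin(x) + P * R * math.cos(x) - (P**2 / 2) * R * math.sin(x)

print(bhaskara, taylor)   # ~48.775 vs ~48.815: equal up to the neglected O(H^3) terms
```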
The Tabular Values Bhāskara Used
With h = 10° and R = 120, Bhāskara's table of Rsines and Rsine differences looks like this:
k | Angular arc kh | Rsine difference ΔJ_{k−1} | Rsine J_k
0 | 0° | — | 0
1 | 10° | 21 | 21
2 | 20° | 20 | 41
3 | 30° | 19 | 60
4 | 40° | 17 | 77
5 | 50° | 15 | 92
6 | 60° | 12 | 104
7 | 70° | 9 | 113
8 | 80° | 5 | 118
9 | 90° | 2 | 120
You can see immediately that the differences are decreasing — from 21 down to 2. This non-uniformity is exactly what makes naive linear interpolation inadequate and what motivates the correction term in Bhāskara's formula.
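The table can be reproduced in a few lines (a sketch; that Bhāskara's entries come from rounding R·sin(kh) to the nearest integer is my assumption, but it matches all ten rows):

```python
import math

R, h = 120, 10

prev = 0
for k in range(1, 10):
    Jk = round(R * math.sin(math.radians(k * h)))   # rounded tabular Rsine
    print(f"k={k}  arc={k*h:>2} deg  dJ={Jk - prev:>2}  J={Jk:>3}")
    prev = Jk
# Prints the differences 21, 20, 19, 17, 15, 12, 9, 5, 2 as in the table above
```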
Munīśvara's Iterative Method: Going Further
Here's where it gets really clever. Munīśvara (also known as Viśvarūpa), writing in 1653 AD in his Sanskrit commentary Marīci on the Siddhānta-Śiromaṇi, was not satisfied with Bhāskara's single-step correction. He recognized that Bhāskara's formula, while an improvement over naive interpolation, was itself only an approximation — the first step in a process that could be iterated to achieve arbitrarily high accuracy.
His key insight was this: Bhāskara's formula computes d using the crude initial estimate d^(0) = ΔJ_q (the forthcoming tabular difference). But if we've already got a better estimate of d from Bhāskara's formula, why not plug that back in and compute an even better estimate? And then repeat?
Starting from d^(0) = ΔJ_q as the initial approximation (the zeroth approximation), Munīśvara's iterative formula generates successive approximations:
d^(r+1) = (1/2)(d_b + d_a) − (ρ/2h)(d_b − d^(r)) for r = 0, 1, 2, 3, ...
Or equivalently, using d_b = ΔJ_{q−1} and d_a = ΔJ_q:
d^(r+1) = m − (ρ/h)·(ΔJ_{q−1} − d^(r))/2
where m = (ΔJ_{q−1} + ΔJ_q)/2 is the mean of the two neighboring tabular differences.
This process continues until two successive approximations d^(M) and d^(M+1) agree up to the desired level of precision (in Munīśvara's case, up to the desired subdivision of degrees, where subdivisions go: 1° → 1' = 1/60 of a degree, 1'' = 1/60 of 1', 1''' = 1/60 of 1'', and so on).
The stabilized value is then taken as the true difference d, and the desired Rsine is computed as:
R·sin(α) = J_q + θ·d, where θ = ρ/h
Let's understand intuitively why this iteration makes sense. As the arc sweeps through the interval [qh, (q+1)h], the applicable Rsine difference falls from ΔJ_{q−1}, which governed the arc just completed, toward ΔJ_q, which will be fully used up by the end. Neither of these is the right difference to use for interior points — the true applicable difference d lies somewhere between them. Bhāskara estimates d by starting from the mean m and correcting for the decrease across ρ degrees. But the amount of decrease itself depends on d (which is what we're trying to find), creating a natural iterative structure.
Munīśvara explicitly provides the rationale: the amount of decrease in the Rsine difference across h degrees in the interval is δ = (ΔJ_{q−1} − d)/2, which for the initial estimate d = ΔJ_q reduces to m − ΔJ_q. By the rule of three, the decrease across ρ degrees is (ρ/h)·δ. So the rectified difference is d = m − (ρ/h)·δ. When δ is recomputed using the current estimate of d, and the result is fed back as the new estimate, you get the iteration above.
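Here is the iteration as a short Python sketch (function name mine), written to mirror that rule-of-three rationale:

```python
def munisvara_iterates(d_b, d_a, theta, n):
    """First n+1 successive approximations d^(0), ..., d^(n) to the rectified difference."""
    m = 0.5 * (d_b + d_a)       # mean of the flanking tabular differences
    d = d_a                     # zeroth approximation: the ensuing difference
    out = [d]
    for _ in range(n):
        delta = 0.5 * (d_b - d)   # decrease across h degrees, from the current estimate
        d = m - theta * delta     # rule of three: decrease across rho = theta*h degrees
        out.append(d)
    return out

# Bhaskara's table values for alpha = 24 deg: d_b = 20, d_a = 19, theta = 0.4
print(munisvara_iterates(20, 19, 0.4, 5))
# ~[19, 19.3, 19.36, 19.372, 19.3744, 19.37488], modulo float rounding
```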
Convergence: Proving It Works
This is where the story becomes truly modern in spirit. The convergence of Munīśvara's iteration can be established rigorously.
Write Munīśvara's iterative formula as:
d^(r+1) = m − (θ/2)·ΔJ_{q−1} + (θ/2)·d^(r), where θ = ρ/h, 0 < θ < 1
This is a linear recurrence of the form d^(r+1) = A + B·d^(r), with B = θ/2.
Replace r by r−1:
d^(r) = m − (θ/2)·ΔJ_{q−1} + (θ/2)·d^(r−1)
Subtract to get:
d^(r+1) − d^(r) = (θ/2)·{d^(r) − d^(r−1)}
Apply this relation successively for r = n, n−1, n−2, ..., 2, 1 and multiply:
d^(n+1) − d^(n) = (θ/2)^n · {d^(1) − d^(0)}
Now let's evaluate d^(1) − d^(0). We have d^(0) = ΔJ_q and:
d^(1) = m − (θ/2)·ΔJ_{q−1} + (θ/2)·d^(0) = (ΔJ_{q−1} + ΔJ_q)/2 − (θ/2)·ΔJ_{q−1} + (θ/2)·ΔJ_q
Expanding and using ΔJ_k = J_{k+1} − J_k:
d^(1) − d^(0) = (1/2)(1−θ)·(ΔJ_{q−1} − ΔJ_q) = (1/2)(θ−1)·(J_{q+1} − 2J_q + J_{q−1})
so in general:
d^(n+1) − d^(n) = (θ/2)^n · (1/2)(θ−1)·{J_{q+1} − 2J_q + J_{q−1}}
= (θ/2)^n · (1/2)(θ−1)·{R·sin((q+1)h) − 2R·sin(qh) + R·sin((q−1)h)}
Using the sum-to-product identity:
sin((q+1)h) + sin((q−1)h) = 2·sin(qh)·cos(h)
So:
{R·sin((q+1)h) − 2R·sin(qh) + R·sin((q−1)h)}
= 2R·sin(qh)·cos(h) − 2R·sin(qh)
= 2R·sin(qh)·(cos(h) − 1)
= −2R·sin(qh)·(1 − cos(h))
= −2R·sin(qh)·2sin²(h/2)
Therefore:
d^(n+1) − d^(n) = (θ/2)^n · (1−θ) · R·sin(qh) · 2sin²(h/2)
Since 0 < θ < 1, we have 0 < θ/2 < 1/2 < 1, which means:
(θ/2)^n → 0 as n → ∞
Therefore |d^(n+1) − d^(n)| → 0 as n → ∞. For any pre-assigned positive ε however small, there exists a positive integer M such that:
|d^(n+1) − d^(n)| < ε for all n ≥ M
This establishes the convergence. The convergence factor is θ/2 = ρ/(2h), which is always less than 1/2 (since ρ < h), so convergence is geometric with ratio at most 1/2. The gap between successive approximations shrinks by a factor of exactly θ/2 — at least a halving — with each iteration, which is fast convergence by any standard.
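The geometric rate is easy to observe numerically. In the sketch below (illustrative values taken from the worked example later in the post), the ratio of successive gaps comes out as exactly θ/2:

```python
d_b, d_a, theta = 20, 19, 0.4   # values from the R*sin(24 deg) example below
m = 0.5 * (d_b + d_a)

d, gaps = d_a, []
for _ in range(8):
    d_next = m - (theta / 2) * (d_b - d)
    gaps.append(d_next - d)
    d = d_next

print([gaps[i + 1] / gaps[i] for i in range(len(gaps) - 1)])
# Every ratio is theta/2 = 0.2: the linear recurrence contracts at exactly that rate
```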
The stabilized limit satisfies d = m − (θ/2)(ΔJ_{q−1} − d), i.e., d(1 − θ/2) = m − (θ/2)·ΔJ_{q−1}, giving:
d = (2m − θ·ΔJ_{q−1})/(2 − θ)
This fixed point can also be obtained directly by solving the linear recurrence; it is the rectified difference to which the successive approximations converge.
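A two-line check with the same illustrative values confirms the closed form:

```python
d_b, d_a, theta = 20, 19, 0.4
m = 0.5 * (d_b + d_a)

d_star = (2 * m - theta * d_b) / (2 - theta)   # closed-form fixed point
print(d_star)                                  # 19.375, the limit of the iterates
```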
The Full Algorithm
Let's lay out Munīśvara's algorithm step by step as a clean computational procedure:
Step I: Divide the angular arc α by h (where h = 10°). Note the quotient q and remainder ρ, where 0 < ρ < h.
Step II: Note the preceding and forthcoming tabular Rsines J_q and J_{q+1}, as well as the preceding and forthcoming tabular Rsine differences ΔJ_{q−1} and ΔJ_q.
Step III: Compute the mean Rsine difference:
m = (ΔJ_{q−1} + ΔJ_q)/2
Step IV: Take the initial approximation:
d^(0) = ΔJ_q
Step V: Compute successive approximations using:
d^(r+1) = m − (ρ/h)·(ΔJ_{q−1} − d^(r))/2 for r = 0, 1, 2, 3, ...
Step VI: For each r, check whether |d^(r+1) − d^(r)| < ε, where ε is the desired precision. If satisfied, terminate and take d = d^(r+1) as the stabilized difference. Otherwise return to Step V.
(Note: Terminating at r = 0 gives Bhāskara's value.)
Step VII: Compute the desired Rsine:
R·sin(α) = J_q + θ·d, where θ = ρ/h
The beauty of this algorithm is its self-correcting nature. Each iteration uses the current best estimate of d to produce a better estimate. Since the convergence factor θ/2 is always less than 1/2, the method converges geometrically and typically stabilizes within 8–12 iterations for the precision levels Munīśvara was working with.
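Here is the whole procedure as one function — a sketch, using Bhāskara's rounded table; the tolerance ε and the function name are my own choices, and it assumes 1 ≤ q ≤ 8 so both flanking differences exist:

```python
J = [0, 21, 41, 60, 77, 92, 104, 113, 118, 120]   # Bhaskara's rounded Rsines, h = 10 deg
h = 10

def rsine_munisvara(alpha, eps=1e-9):
    """Munisvara's successive-approximation interpolation (needs 1 <= q <= 8)."""
    q, rho = divmod(alpha, h)                      # Step I
    q = int(q)
    d_b, d_a = J[q] - J[q - 1], J[q + 1] - J[q]    # Step II
    m = 0.5 * (d_b + d_a)                          # Step III
    d = d_a                                        # Step IV
    while True:
        d_next = m - (rho / h) * (d_b - d) / 2     # Step V
        if abs(d_next - d) < eps:                  # Step VI
            return J[q] + (rho / h) * d_next       # Step VII
        d = d_next

print(rsine_munisvara(24))    # ~48.75, matching Munisvara's own example below
```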
A Worked Numerical Example: R·sin(24°)
Let's run through the concrete numerical example that Munīśvara himself computed in the Marīci.
We want R·sin(24°) with h = 10° and R = 120.
Step I: α = 24°, h = 10°, so q = 2 and ρ = 4°. Therefore θ = ρ/h = 4/10 = 0.4.
Step II: From the table:
- J_q = J_2 = 41 (i.e., R·sin(20°) = 41)
- J_{q+1} = J_3 = 60 (i.e., R·sin(30°) = 60)
- ΔJ_{q−1} = ΔJ_1 = 20 (foregoing difference)
- ΔJ_q = ΔJ_2 = 19 (ensuing difference)
Step III:
m = (20 + 19)/2 = 19.5
Step IV:
d^(0) = ΔJ_q = 19
Step V onwards (applying the iterative formula):
r = 0: d^(1) = 19.5 − (4/10)·(20 − 19)/2 = 19.5 − 0.4·0.5 = 19.5 − 0.2 = 19.3
r = 1: d^(2) = 19.5 − (4/10)·(20 − 19.3)/2 = 19.5 − 0.4·0.35 = 19.5 − 0.14 = 19.36
r = 2: d^(3) = 19.5 − 0.4·(20 − 19.36)/2 = 19.5 − 0.4·0.32 = 19.5 − 0.128 = 19.372
r = 3: d^(4) = 19.5 − 0.4·(20 − 19.372)/2 = 19.5 − 0.4·0.314 = 19.5 − 0.1256 = 19.3744
Continuing this process through 10 iterations gives stabilized values converging to approximately:
d ≈ 19.375 (the iterates read 19°22'29''… in sexagesimal notation, approaching the limiting value 19°22'30'' from below)
Step VII:
R·sin(24°) = J_2 + θ·d = 41 + 0.4·19.375 = 41 + 7.75 = 48.75
Compare with:
- Value by Bhāskara's method (terminating at r=0): R·sin(24°) ≈ 48.72
- Value by Munīśvara's method (after convergence): R·sin(24°) ≈ 48.7499...
- Modern exact value: 120·sin(24°) = 120 × 0.40674... ≈ 48.808...
More precisely, from the paper's numerical computations (which carried many more decimal places and used subdivisions of degrees down to the 10th subunit level):
Value of sin(24°) by Bhāskara's method = 0.40599999999999997200
Value of sin(24°) by Munīśvara's method = 0.40624999998720000000
Modern value of sin(24°) = 0.40673664307580020775
So:
Munīśvara's sine − Bhāskara's sine = 0.00024999998720002800
Modern sine − Munīśvara's sine = 0.00048664308860020775
Modern sine − Bhāskara's sine = 0.00073664307580023575
Munīśvara's method is meaningfully more accurate than Bhāskara's — the error against the modern value drops from about 0.00074 to about 0.00049, a reduction by roughly a third. Both methods are ultimately limited by the coarseness of the 10° tabular spacing; using finer tables would dramatically improve both results.
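These comparisons can be reproduced from the closed forms above (a sketch with my own names; the paper's values fall just short of the exact fractions, presumably because of its truncated sexagesimal arithmetic):

```python
import math

J2, d_b, d_a, theta = 41, 20, 19, 0.4   # rounded table values for [20, 30] deg
m = 0.5 * (d_b + d_a)

bhaskara  = J2 + theta * (m - theta * (d_b - d_a) / 2)        # single correction step
munisvara = J2 + theta * (2 * m - theta * d_b) / (2 - theta)  # iterated to the limit
modern    = 120 * math.sin(math.radians(24))

print(bhaskara / 120, munisvara / 120, modern / 120)
# 0.406, 0.40625, 0.406736...: the three sine values compared above
```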
Why This Matters
Several things about this work deserve emphasis.
First, the conceptual sophistication is extraordinary. Munīśvara is essentially doing fixed-point iteration — a technique that in Western mathematics is formalized through Banach's fixed-point theorem (1922) — in the 17th century. He understands intuitively that the iteration converges (he says the value "becomes stable"), he has a clear stopping criterion (stability up to desired subdivision), and he can actually execute the computation numerically.
Second, the error analysis. The convergence proof sketched above shows the error decreases geometrically as (θ/2)^n, with θ < 1. This is geometric convergence, the same kind we see in Newton's method or successive approximations for ODEs. The fact that this convergence can be established using only trigonometric identities and elementary inequalities — no calculus required — shows how powerful the tabular approach is.
Third, the connection between the methods. Bhāskara's single-step formula turns out to be exactly the first iteration of Munīśvara's scheme starting from d^(0) = ΔJ_q. Munīśvara's method contains Bhāskara's method as a special case. This is the hallmark of a good generalization.
Fourth, the formula for utkramajyā (versed Rsine). The same framework works for computing R(1 − cos α), just with a sign change in the correction term:
d = (1/2)(d_b + d_a) + (d_b − d_a)·ρ/(2h) for utkramajyā
This sign change makes geometric sense: the versed sine is increasing and convex in the relevant range, so its tabular differences increase rather than decrease, and the correction must push the mean difference the other way. (Note that the verse treats the difference d_b − d_a as an unsigned magnitude; for the versed sine the ensuing difference d_a actually exceeds the foregoing d_b.)
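A sketch of the utkramajyā variant (names mine, exact tabular values). Since d_a > d_b here, adding the unsigned difference of differences amounts to d = m + (d_a − d_b)·ρ/(2h) in signed terms:

```python
import math

R, h = 120, 10

# Tabular versed Rsines V_k = R*(1 - cos(kh)); their differences increase with k
V = [R * (1 - math.cos(math.radians(k * h))) for k in range(10)]

def versed_rsine(alpha):
    """Interpolated utkramajya: the correction is added, not subtracted (needs q >= 1)."""
    q, rho = divmod(alpha, h)
    q = int(q)
    d_b, d_a = V[q] - V[q - 1], V[q + 1] - V[q]
    d = 0.5 * (d_b + d_a) + (d_a - d_b) * rho / (2 * h)   # '+' with magnitude d_a - d_b > 0
    return V[q] + (rho / h) * d

print(versed_rsine(24), 120 * (1 - math.cos(math.radians(24))))   # ~10.36 vs ~10.37
```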
Comparing with Western Interpolation Theory
Newton published his forward difference interpolation formula in 1687, and Stirling developed central difference formulas in the early 18th century (Bessel's variant came later still). Bhāskara's formula from the 12th century is equivalent to second-order Newton–Stirling interpolation. Munīśvara's iterative refinement, while not identical to any standard Western algorithm, is conceptually related to iterative improvement techniques that became standard in numerical analysis much later.
The Indian approach is interesting precisely because it didn't come from a general theory of polynomial interpolation. It came from careful geometric and physical reasoning about the specific behavior of the sine function — its curvature, its decreasing differences. The particular led to a method that happens to have general significance.
Also noteworthy: Indian astronomers were acutely aware of the distinction between the difference applicable at a tabular point and the difference applicable in the interior of a tabular interval. This is a subtle point. The tabular difference ΔJ_k is the difference between adjacent tabular values, but the "true" instantaneous rate of change at any interior point is different from both flanking tabular differences. Munīśvara explicitly articulates this: neither ΔJ_{q−1} nor ΔJ_q is the right difference to use for α = qh + ρ; you need a weighted combination that accounts for the position of α within the interval.
The Precision of the Computation
One detail worth appreciating: Munīśvara worked in a positional notation for angles that went degrees → minutes (1/60 degree) → seconds (1/60 of a minute) → thirds (1/60 of a second) → fourths → and so on through at least 10 levels of sexagesimal subdivision. This is base-60 arithmetic carried to extraordinary depth.
The computational examples in the Marīci give the successive approximations to d corresponding to α = 24° as a sequence stabilizing (in the last two listed values) to 19°22'29'''. The final functional value for R·sin(24°) is given as 48°44'59''59'''48''''03'''''21''''''49'''''''36''''''''37'''''''''
in this sexagesimal notation extended to ten subunits. For context, one "tenth subunit" here is (1/60)^10 of a degree — an almost incomprehensibly small quantity. Whether carrying precision to this level was practically meaningful is a separate question, but the fact that the computational framework supported such precision speaks to the sophistication of the mathematical culture.
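Converting a decimal value into this extended sexagesimal notation takes only a few lines (a sketch, function name mine). It also shows why the long string quoted above sits just under 48°45': the exact limit 48.75 is 48°45', and the finite iterates approach it from below:

```python
def to_sexagesimal(x, levels=10):
    """Expand a positive value in degrees into base-60 subunits: [deg, min, sec, ...]."""
    digits = [int(x)]
    frac = x - digits[0]
    for _ in range(levels):
        frac *= 60
        digits.append(int(frac))
        frac -= digits[-1]
    return digits

print(to_sexagesimal(19.375, 3))   # [19, 22, 30, 0]: the limiting difference, 19 deg 22' 30''
print(to_sexagesimal(48.75, 3))    # [48, 45, 0, 0]: finite iterates land just under 48 deg 45'
```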
A Note on Transmission and Recognition
This body of work — Bhāskara's formula, Munīśvara's iteration, their convergence properties — has been almost entirely absent from standard histories of mathematics. The usual narrative jumps from Greek trigonometry to Islamic astronomers (who did preserve and extend Indian methods, particularly through al-Khwarizmi and Abu Rayhan Biruni) to European developments.
The reasons are partly linguistic (the primary sources are in Sanskrit, with technical astronomical vocabulary), partly institutional (Indian mathematical tradition was embedded in astronomical and astrological practice rather than pure mathematics), and partly historiographical (Western historians of mathematics have traditionally privileged certain traditions and text types).
But the mathematics speaks for itself. A convergent iterative scheme for function interpolation, with an implicit understanding of the geometric convergence rate, developed in 17th-century India and matching in precision the computations of contemporary European astronomers — this is not a curiosity. It's a significant chapter in the history of numerical methods, one that deserves to be far better known.
The next time someone tells you numerical analysis started with Newton or Euler, you can point them to Munīśvara's Marīci and the formula:
d^(r+1) = m − (ρ/h)·(ΔJ_{q−1} − d^(r))/2
Simple, elegant, correct, and nearly four centuries old.