
Hi everyone. I’m currently finishing up my Master’s here in Sydney, and like a lot of you, I’ve spent way too much time trying to make things like GPT-4 or Claude actually useful for actuarial work.
The issue is pretty clear: those models are built for language, not math. They predict the next most likely token rather than actually calculating anything. I got tired of getting confidently wrong answers for things like life contingencies, or seeing math notation that was basically unreadable.
So, I’ve spent the last few months building a verification layer called Actua. Here is how I’m trying to do things differently:
• The Math Engine: Instead of just predicting a number, it translates the question into symbolic Python logic and runs it in a deterministic sandbox. If the math doesn’t actually work out, it doesn’t show the result.
• Actual Notation: It renders proper LaTeX and handles standard actuarial notation (annuities, life contingencies, commutation functions) instead of dumping plain text at you.
• Privacy: Since I’m a student and not a data broker, I have no interest in your data. It doesn’t store your conversations or your files.
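For anyone curious what the "don't show unverified results" idea looks like in practice, here's a minimal sketch. To be clear, this is my own illustration of the principle, not Actua's actual code: it computes an annuity-immediate present value two independent ways using exact rational arithmetic, and refuses to return anything if the cross-check fails.

```python
from fractions import Fraction

def annuity_pv_formula(i: Fraction, n: int) -> Fraction:
    """Closed form: a(n) = (1 - v^n) / i, where v = 1 / (1 + i)."""
    v = 1 / (1 + i)
    return (1 - v**n) / i

def annuity_pv_direct(i: Fraction, n: int) -> Fraction:
    """First principles: sum the discounted payments of 1 at times 1..n."""
    v = 1 / (1 + i)
    return sum(v**t for t in range(1, n + 1))

def verified_annuity_pv(i: Fraction, n: int) -> Fraction:
    """Only report a value if the closed form and the direct sum agree exactly."""
    closed_form = annuity_pv_formula(i, n)
    brute_force = annuity_pv_direct(i, n)
    if closed_form != brute_force:
        raise ValueError("cross-check failed; refusing to report a result")
    return closed_form

# 10-year annuity-immediate at i = 5%
pv = verified_annuity_pv(Fraction(5, 100), 10)
print(float(pv))  # ≈ 7.7217
```

Because everything is a `Fraction`, the comparison is exact rather than floating-point "close enough," which is the deterministic flavour of checking I'm aiming for.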
I’ve put a free sandbox version up at actua.dev.
I am not looking for customers right now; I am looking for skeptics. Actuaries are better than anyone at spotting errors, so if you have a reserving problem or a derivation that ChatGPT keeps messing up, please throw it at Actua. I want to see exactly where it breaks so I can keep improving the underlying math logic.
Would love to hear your feedback or even just a good roast.