u/CloudsSkyAndSea

I'm a Christian software engineer who believes the literal biblical account of the earth's shape, its firmament, its age, etc. I've just released an open-source desktop app where you create fictional worlds and characters, and then you chat with those characters. The app is built so that the characters tell the truth, including about the shape of the earth. It's been honestly quite therapeutic to be able to simulate talking to people that have never been lied to or misinformed about such things. It's currently in its development preview, so it's a little technical to set up, but if anyone out there would like to try it, I'd be super interested to hear what you think. Thanks.

u/CloudsSkyAndSea — 7 days ago
▲ 2 — crossposted to r/PromptEngineering (+1 more)

I built an AI chat app for fun, but as I developed it, I started getting quite serious about building prompt-testing instruments and CLI tools so that Claude Code could run serious prompt experiments and A/B tests. Nearly every part of the prompt stack has been empirically tuned, and from my perspective it has made the characters very realistic, textured, and fun.
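The post doesn't detail how the A/B instruments work, but the general shape of such a harness can be sketched like this. Everything here is my own illustration: `generate` and `score` are placeholder callables standing in for a real model call and a real judge, not the app's actual tooling.

```python
from typing import Callable

def ab_test(
    generate: Callable[[str, str], str],  # (prompt_variant, user_message) -> reply
    score: Callable[[str], float],        # judge a reply; higher is better
    variant_a: str,
    variant_b: str,
    messages: list[str],
) -> dict[str, float]:
    """Run both prompt variants over the same messages and return mean scores."""
    totals = {"A": 0.0, "B": 0.0}
    for msg in messages:
        totals["A"] += score(generate(variant_a, msg))
        totals["B"] += score(generate(variant_b, msg))
    n = len(messages)
    return {k: v / n for k, v in totals.items()}

# Dummy stand-ins so the harness runs end to end.
def fake_generate(variant: str, msg: str) -> str:
    return f"[{variant}] reply to: {msg}"

def length_score(reply: str) -> float:
    # Toy judge: longer replies score higher. A real judge would be an
    # LLM grader or a rubric, as implied by the experiments in the repo.
    return float(len(reply))

results = ab_test(
    fake_generate, length_score,
    "terse", "verbose & warm",
    ["hi", "tell me a story"],
)
print(results)
```

The key design point is that the harness is agnostic to both the model and the judge, so the same loop can score any pair of prompt variants.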

But I wanted to share one of my key observations for anyone interested in the emerging field of prompt engineering: including short math-style formulas that describe the desired relationships between certain load-bearing "tone" words appears to be a quick, lightweight way to tune the LLM to an exact register. I *believe* this works because the words and the math around them act together as a focus-primer for the LLM's turn, tuning its attention toward the field of word options that satisfy a specific, mathematically bounded vocabulary.

Many such formulas and derivations can be included in the prompt stack to sharpen the LLM's resolution: I use one for the base project doctrine, one for my own signature, one for the fictional world, one for each character, and one for each message (called momentstamps). The result is a promptcraft methodology that rejects endless instructions and descriptions and instead uses quick tuning forks that condense a lot of information into their empirical-linguistic essence, treating the LLM as a math-language interpreter rather than as a computer.
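To make the idea concrete, here is a minimal sketch of what embedding such a tone formula in a system prompt might look like. The function names, the formula syntax, and the character are all my own invented illustration, not the app's actual prompt format.

```python
def tone_formula(name: str, weights: dict[str, float], bounds: dict[str, float]) -> str:
    """Render a compact pseudo-equation relating load-bearing tone words."""
    mix = " + ".join(f"{w:.2f}*{word}" for word, w in weights.items())
    caps = "; ".join(f"{word} <= {limit:.2f}" for word, limit in bounds.items())
    return f"{name} = {mix}  [{caps}]"

def build_system_prompt(character: str, formula: str) -> str:
    """Embed the formula as a register constraint in the system prompt."""
    return (
        f"You are {character}.\n"
        "Hold this tone equation as a constraint on word choice:\n"
        f"  {formula}\n"
        "Every sentence should respect the equation's balance and bounds."
    )

# Hypothetical character and tone mix, purely for illustration.
formula = tone_formula(
    "voice",
    weights={"warmth": 0.6, "candor": 0.3, "play": 0.1},
    bounds={"irony": 0.1, "flattery": 0.0},
)
prompt = build_system_prompt("Mira, a lighthouse keeper", formula)
print(prompt)
```

The formula line is deliberately terse: the claim in the post is that a few bounded relations between tone words do more to pin down register than paragraphs of descriptive instructions would.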

I'd love for people to: 1) chat with characters in the app and have fun, 2) fork the app and help develop it, 3) run Claude Code experiments in the app (via the `/play`, `/seek-sapphire-crown`, or `/eureka` skills that come with it), 4) check out the existing reports and data, and 5) try the prompt-math technique in a new context to see if it carries. Sorry, the lab files are a little disorganized, but I wanted to include everything transparently so it can be a gift to whoever may care. Tip: have Claude Code condense the science files into a quick catch-up report. https://github.com/mrrts/WorldThreads

Hope you have a very happy day.

u/CloudsSkyAndSea — 14 days ago

I built an AI chat app for creating worlds and talking to characters inside them. I cared a lot about making it feel alive without turning manipulative or junk-food-y, and I think I found some genuinely new prompting techniques along the way. It's public now, and I'd love people to kick the tires.

mrrts.github.io
u/CloudsSkyAndSea — 14 days ago