u/BrobaFett

Advice: You need to do the Math
r/RPGdesign

I've been thinking through this in my own design journey and felt like sharing with the designers out there.

You need to do the math!

What do I mean by this? You need to have a general idea of how likely you are to succeed or fail at a task and what that might feel like at the table. This requires some basic understanding of probability. Do you need to be able to pass a stats course? No. But even Gygax recognized that understanding the math made a huge impact at the table. You don't even really need to crack a textbook, but have a method to at least derive an answer to "how likely is this thing to happen in my system"; tools like Anydice are very helpful here, and there are dozens of posts on how to use them.

Why should you care?

Probability will reflect what you feel at the table. If your core resolution mechanic succeeds 90% of the time, the game will begin to feel stale and lacking in stakes. Succeed only 20-40% of the time? Your game will feel frustrating or perilous. And because dice are rolled infrequently enough that streaks stand out, players get upset when their supposedly competent characters fail several times in a row. In fact, during playtesting, I've found that even a 50% chance of failure (a coin flip) feels bad to most people.

For me, the sweet spot of "you should probably be able to do this" PLUS "this task is hard and has a risk of failure" is around 60-70%. That feels, to me, like a great spot to shoot for.

A practical example

In my own system, I use a variant of MYZ: D6 dice pools, 6= success. Most tasks that are "HARD" require 1 success. Average tests require 0 successes but can generate complications (I won't get into the "yes and" "no, but" resolution mechanics here).

So, I'm looking for at least one 6. Two 6's if it's an exceptionally hard task (such as doing medicine when you have no training). Three if it's incredibly difficult. Four if you're doing something legendary.

Next, I look at my typical dice pool sizes. An average guy might have an attribute of 2, a career rank of 1-2 (I use careers, a la Barbarians of Lemuria, instead of skill lists), and a gear bonus of 1-3 (gear is pretty helpful for succeeding at tasks). That gets us an average dice pool of around 6d6 for a sort of "journeyman" trained person trying a "hard" task.

Next, I PLOT the success probabilities to see what they look like. And, you know what? I really like the look of that curve. My journeyman is going to succeed around 66% of the time. Around 26% of the time he's going to succeed AND roll an additional success (in my system this can be paid forward as bonus dice or a narrative boon). A WELL-trained and capable person (let's say 10d6)? They'll not only succeed much more often (around 84% of the time), but also have a much better chance at positive complications.
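If you want to check those numbers yourself, they fall straight out of the binomial distribution. Here's a quick Python sketch (illustrative only; the 1-in-6 success chance and pool sizes are the ones from my system above):

```python
from math import comb

def p_at_least_k(n_dice, k, p=1/6):
    """P(at least k successes) rolling n_dice dice, where each die
    succeeds independently with probability p (a 6 on a d6)."""
    return sum(comb(n_dice, i) * p**i * (1 - p)**(n_dice - i)
               for i in range(k, n_dice + 1))

print(f"6d6, 1+ success:   {p_at_least_k(6, 1):.1%}")   # ~66.5%
print(f"6d6, 2+ successes: {p_at_least_k(6, 2):.1%}")   # ~26.3%
print(f"10d6, 1+ success:  {p_at_least_k(10, 1):.1%}")  # ~83.8%
```

Swap in your own pool sizes and thresholds to map out the whole curve.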

Add a "push, but at a cost" mechanic? Now we're thinking with portals.

This has paid dividends at the table.

DO. THE. MATH.

I'm adding u/ksarlathotep's post below because it's so good:
"I agree with everything OP said, but I wouldn't stop at calculating success probabilities for individual dice rolls. You really should also be looking at the distribution of results.

A single die roll (plus / minus some value) always generates a linear distribution, which is historically common, but really quite inelegant if you're looking for realism.
If the same character attempts the same task 10 times, they shouldn't be equally likely to have a few dramatic failures, a few extraordinary successes, and a few average results. When an experienced blacksmith creates 20 longswords, they don't statistically set the smithy on fire once. The average results should be the most common, with extreme outliers much rarer. In other words, you want a bell curve distribution.

D&D has used single d20 resolution (with linear distribution) forever, but gets around this in newer editions with the concepts of Advantage and Disadvantage. As soon as you roll multiple dice and use the highest, lowest, or center value, your distribution moves away from linearity. Dice pool systems achieve this by default. If you roll 10d10 and count 6+ as a success, 10 successes and 0 successes are both extremely unlikely, while 4-6 successes is by far the most common outcome.

However, when you have a distribution like that, you also need to be aware that increasing a target difficulty by +X doesn't always have the same effect on the success probability.
If you have a straight 1d20 vs. target difficulty check, then every adjustment of the difficulty by +1 makes success 5% less likely, and every adjustment by -1 makes success 5% more likely.
If you roll 10d10 and count 6+ as success, raising the required successes from 6 to 7 has a larger impact on the overall probability of success than raising the required successes from 9 to 10.

Other dice systems come with idiosyncrasies of their own. I believe it was Shadowrun 2nd edition (don't quote me on that) where you rolled a pool of d6s against a variable target number, and under certain circumstances 6s would "explode" (you roll the die again and add the second number to the original 6). But this of course means that, in those circumstances, raising the difficulty from 6 to 7 has no effect whatsoever on the probability of success: anytime you roll a 6, you end up with at minimum a 7, so you get a weird jump in the probability distribution.

Other systems generate very awkward distributions - they don't necessarily feel bad at the table, they're just incredibly hard to work with, and success probability is very hard to gauge intuitively for newer players. The "roll and keep" system from Legend of the Five Rings is an example. Almost nobody who hasn't either run the numbers or spent immense time with the system can tell you whether 6k2 or 3k3 is better on average.

This is of course where linear distributions have their upside. As unnatural and weird as I find it to have every possible outcome be equally likely, on a straight 1d20 vs. target number roll, even a complete newbie can immediately tell what chance of success they have.

If you design a dice system without taking the distribution of results into account, you risk building trap options into character generation (where raising a certain attribute won't actually increase your odds of success), building a system in which critical failures and successes are absurdly common, or creating "diminishing returns" or "escalating returns" in character advancement where you don't want them. None of these effects is immediately obvious, but all of them can quietly generate very weird and unsatisfying play experiences.

So yeah, definitely do the math."
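To make the bell curve point concrete, here's the full distribution of 3d6 sums in Python (my own illustration, not from either post) - compare it to a d20, where every result sits at a flat 5%:

```python
from collections import Counter
from itertools import product

# Exact distribution of 3d6 sums: averages dominate, extremes are rare.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))
total = 6 ** 3  # 216 equally likely outcomes

for s in range(3, 19):
    print(f"{s:2d}: {counts[s] / total:6.2%} {'#' * counts[s]}")
```

A 10 or 11 each comes up 12.5% of the time, while a 3 or an 18 shows up less than half a percent of the time - the blacksmith rarely burns down the smithy.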
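The uneven cost of "+1 required success" is also easy to verify. This Python snippet uses the same 10d10, 6+-is-a-success setup from the quote (each die succeeds half the time):

```python
from math import comb

def p_at_least(n, k, p=0.5):
    """P(at least k successes) out of n dice, success chance p per die."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for k in range(6, 11):
    drop = p_at_least(10, k - 1) - p_at_least(10, k)
    print(f"need {k}+ successes: {p_at_least(10, k):7.3%} "
          f"(the extra required success cost {drop:.1%})")
```

Going from 6 to 7 required successes costs about 20 points of probability; going from 9 to 10 costs about 1. Same "+1 difficulty," wildly different impact.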
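The exploding-die jump can be computed exactly, too. A sketch for a single exploding d6 (Python; my own illustration of the mechanic as hedged in the quote):

```python
from fractions import Fraction

def p_exploding_d6_at_least(target):
    """P(one exploding d6 totals >= target); a 6 is rerolled and added."""
    if target <= 1:
        return Fraction(1)
    if target <= 6:
        direct = Fraction(max(0, 6 - target), 6)  # faces target..5 hit outright
        return direct + Fraction(1, 6)            # a rolled 6 explodes to 7+
    # target >= 7: must roll a 6, then make the remainder on the next die
    return Fraction(1, 6) * p_exploding_d6_at_least(target - 6)

print(p_exploding_d6_at_least(6), p_exploding_d6_at_least(7))  # 1/6 and 1/6
```

Targets 6 and 7 really are identical, and the staircase repeats at every multiple of 6.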
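And for roll-and-keep, simulation is the easiest way in. A Monte Carlo sketch (Python; it assumes L5R-style exploding 10s, with the 6k2 and 3k3 pools from the quote):

```python
import random

def exploding_d10(rng):
    """Roll a d10; a 10 is rerolled and added (and can keep exploding)."""
    total = 0
    while True:
        face = rng.randint(1, 10)
        total += face
        if face != 10:
            return total

def roll_and_keep(n_roll, n_keep, rng):
    """L5R-style XkY: roll X exploding d10s, keep the Y highest."""
    dice = sorted((exploding_d10(rng) for _ in range(n_roll)), reverse=True)
    return sum(dice[:n_keep])

rng = random.Random(42)
trials = 200_000
avg_6k2 = sum(roll_and_keep(6, 2, rng) for _ in range(trials)) / trials
avg_3k3 = sum(roll_and_keep(3, 3, rng) for _ in range(trials)) / trials
print(f"6k2 averages {avg_6k2:.1f}, 3k3 averages {avg_3k3:.1f}")
```

Run it and the answer pops out in seconds - exactly the kind of question that's nearly impossible to eyeball but trivial to simulate.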

Lastly, as others have pointed out, this is Step #1. Step #2 is to play the game. You can trap yourself into thinking the math looks perfect and elegant, only to find the game plays differently at the table than you expected. The Law of Large Numbers still applies - playtesting doesn't refute mathematical facts, so don't go to the table expecting your experience to overturn probability (it won't). But do learn what your chosen success/failure thresholds feel like in actual play.
