Longsword vs. Scimitar.
The d20 is flat as hell. 2d10 provides a nice curve for character success.
The d20 is hugely popular, but at a cost in utility. 2d10 allows a level of consistency roll-to-roll.
But what are your experiences with d20 and 2d10? Is the d20 really a keeper or not?
Your responses will say a lot about the impact D&D's d20 has had.
Thanks.
My experience is that 1d20 for attacks does actually create a bell curve because it generally requires multiple attacks to actually drop a foe. This means that while any given attack has a flat probability, in aggregate, combat is a bell curve.
Where the d20 needs assistance is instances where single rolls are used to determine the outcome.
Quote from: Chris24601;1087365My experience is that 1d20 for attacks does actually create a bell curve because it generally requires multiple attacks to actually drop a foe. This means that while any given attack has a flat probability, in aggregate, combat is a bell curve.
Where the d20 needs assistance is instances where single rolls are used to determine the outcome.
Are you taking modifiers into account? Sure, over a long enough run, the d20 can simulate a curve, but what impact do modifiers have?
I find the probability of rolling a 1 or 20 far greater w/ d20 than 2d10. Maybe my dice are emotional.
I'd need to see a diagram of d20 demonstrating a curve --- but I'm jaded. Show me good when I only see evil.
I'm not doubting anywhere near as much as I'm searching for the PROOFS you say exist. Then I can be sated.
D20 = achieves 1-20 result without math = faster than 2D10
Oddly, I find the same people who can add 2D6 quickly get into trouble with 2D10. It's adding into the teens.
Quote from: Spinachcat;1087371D20 = achieves 1-20 result without math = faster than 2D10
Oddly, I find the same people who can add 2D6 quickly get into trouble with 2D10. It's adding into the teens.
Yes.
Is it adding, or the dice?
Quote from: Chris24601;1087365Where the d20 needs assistance is instances where single rolls are used to determine the outcome.
The d20 is good for combat, where the wild swings feel appropriate. And everyone loves rolling a natural 20.
But it's not so great for skill checks (where single rolls determine the outcome.)
I've never tried 2d10, but I have tried 3d20 take middle value. It worked, but some of the excitement in combat faded away.
20-sided dice are all about the 20-sided die. It's a fetish for most players at the table who know nothing about die mechanics.
Quote from: Shawn Driscoll;108737720-sided dice are all about the 20-sided die. It's a fetish for most players at the table who know nothing about die mechanics.
The depth of this comment is the beginning and end of everything I posted this thread about.
So --- yeah.
If you want to introduce a bell curve, use 3d20 take middle. It keeps the range of numbers the same. In stressful situations such as combat, revert to 1d20.
For D&D, the d20 makes a lot of sense. You'd need to change more than the dice, to make 2d10 work well. (It could work somewhat OK with a straight substitution, but there would be rough spots.)
For another game (such as my never-finished homebrew), 2d10 makes more sense, because the game is built from the ground up to expect it. Not least, the appropriate modifiers to rolls for d20 versus 2d10 are different. They should be generally smaller and more rare for 2d10.
I rather like 2d10 (or even 2d12) as a middle ground between d100 or d20 versus the GURPS/Hero 3d6. The curve on 3d6 is a little steep and short for my tastes, when everything is a skill check. The biggest drawback to 2d10 is that you are always rolling 2 dice instead of 1. Strangely enough, the adding of the two dice isn't the problem for my groups. The precise add against a target number only matters when it is close. If you roll two low or two high numbers on the d10s, you probably know at a glance whether you made it or not. No, the issue is that it's twice as many opportunities for a die to go sliding out of control. More, anything over one die makes it difficult to roll multiple checks at once. I'm convinced that's why the d20 is so popular--handling time is inherently at an advantage with it, and that adds up rapidly.
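For what it's worth, the "steep and short" intuition is easy to quantify. Here's a minimal Python sketch (exact enumeration, nothing fancy) comparing the spread of the three common options:

```python
from itertools import product

def distribution(dice, sides):
    """Exact probability of each total when rolling `dice` dice with `sides` sides."""
    counts = {}
    for rolls in product(range(1, sides + 1), repeat=dice):
        total = sum(rolls)
        counts[total] = counts.get(total, 0) + 1
    outcomes = sides ** dice
    return {t: c / outcomes for t, c in sorted(counts.items())}

for label, dice, sides in [("1d20", 1, 20), ("2d10", 2, 10), ("3d6", 3, 6)]:
    dist = distribution(dice, sides)
    print(f"{label}: results {min(dist)}-{max(dist)}, peak probability {max(dist.values()):.1%}")
```

It prints a 5.0% peak for 1d20, 10.0% for 2d10, and 12.5% for 3d6, so 3d6 really is the steepest of the three.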
Quote from: Theory of Games;1087372Yes.
Is it adding, or the dice?
Fewer combinations means that the combinations that do exist can be chunked in long-term memory more easily.
6 sided dice are also a lot more common generally - we're probably already familiar with them from board games (eg Monopoly), war games etc, so we already have those chunks. We likely don't actually add 3 and 4 on two six sided dice. We see the numbers (or pips) and recognise that they mean 7 without actually needing to do addition.* (It's probably the smaller more common numbers too - but likely the context of the physical objects as well- context is very important to memory**.)
We're less familiar with d10s. Likely we'd get there eventually if we played enough games with 2d10.
*This is similar to studies which have shown that chess grandmasters can remember the places of every single piece on a chessboard from a glance (as long as they're looking at an actual in progress game and not a random configuration of pieces.)
** If you have to study for an exam you should study in the same room as the exam if you can. If you can't, you should make sure you study in different locations - because you need to make that knowledge independent of context.
Quote from: Theory of Games;1087366Are you taking modifiers into account? Sure, on a length, d20 can simulate a curve, but what impact do modifiers have?
Modifiers are irrelevant.
If you have to roll the d20 more than once to drop a foe, the overall results are going to start conforming to a bell curve on the second roll.
Let's say an ogre has about 18 hit points, you have two attacks per round, need an 11 or better to hit, and your sword does 1d8+3 damage, with a natural 20 doing double damage.
You could, in theory, drop the ogre with one hit; a natural 20 and a damage roll of 6 or better on the die will drop it. But the odds of this are very, very slim (far less than the 5% chance of simply rolling a natural 20).
But on average, it's going to take you around three hits to drop the ogre. That means rolling to hit probably about six times, but if you roll well you might need only four, and if you roll poorly you might need eight or nine rolls.
If you repeated that battle twenty times, you'd see that the overall results of the battles (whether you win, how many turns it takes to win, how much damage you take in the course of winning) will fall into a bell curve distribution because each battle takes multiple rolls to resolve.
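If anyone wants to check the arithmetic behind those estimates, here's a rough back-of-the-envelope in Python, using the stats assumed above (it deliberately ignores overkill on the final hit, so the true average is a bit higher):

```python
# One-roll kill: natural 20 (1 in 20) AND doubled damage 2*(1d8+3) >= 18,
# which needs a 6+ on the d8 (3 faces out of 8).
p_one_shot = (1 / 20) * (3 / 8)
print(f"one-roll kill: {p_one_shot:.2%}")   # 1.88%, well under 5%

# Hitting on 11+ is a 50% chance, and an average hit deals 7.5 damage,
# so roughly 18 / 7.5 = 2.4 hits are needed; at a 50% hit rate that's
# about five attack rolls, a bit more once overkill is accounted for.
p_hit = 10 / 20
rough_rolls = (18 / 7.5) / p_hit
print(f"roughly {rough_rolls:.1f}+ attack rolls on average")
```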
So many of these dice threads suggest that a bell curve is somehow superior to a linear distribution. It doesn't model reality either way. They are simply rules about how to roll dice. Neither of these methods actually produces a more "realistic" or better game. One simply has an equal chance of generating any number in its set while the other is biased towards the middle. There is no magic, moral value, or advantage without context within a larger rule set.
I prefer linear distribution as it lets players more quickly determine their odds of success and allows them to accurately weigh risk and reward.
Hmm...
To a point. The problem is what do you do with a 40% chance of failure at something you're supposed to be competent at? What can you do with that? Does knowing your percentage chance of failure in advance make failure less frustrating?
In any case, one doesn't usually calculate the exact odds. And I know pretty well that if I need to roll a 15 on 3d6 then odds are it's not happening. It's enough to know the shape of the curve and have a sense of how steep it is. It's only really opaque in something like the Old World of Darkness system, when you have dice pool systems with variable target numbers.
No, I think the main disadvantage of bell curves (aside from potential handling time) is the same as their advantage, predictability. Linear dice tend to be dramatic when you want consistency and bell curves are consistent when you want drama.
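(For the record, the "15 on 3d6" hunch above checks out; a quick exact count in Python:)

```python
from itertools import product

# Chance of rolling 15 or more on 3d6: 20 of the 216 outcomes.
p_15_plus = sum(1 for r in product(range(1, 7), repeat=3) if sum(r) >= 15) / 216
print(f"{p_15_plus:.1%}")   # 9.3%
```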
Quote from: Chris24601;1087365My experience is that 1d20 for attacks does actually create a bell curve because it generally requires multiple attacks to actually drop a foe. This means that while any given attack has a flat probability, in aggregate, combat is a bell curve.
Where the d20 needs assistance is instances where single rolls are used to determine the outcome.
Yes basically this.
As soon as you create a subsystem that handles things with more than one roll the D20 is fine. The big problem is flat one and done skill rolls.
In my homebrew, I've written out instructions for D20 Roll High, D20 Roll Under, and 2d6 Mechanics.
I'm personally leaning toward 2d6 forever, but if I receive a credible complaint, I'm ready to switch over to d20.
I like 2d6, with -2 to +3 modifiers, etc.
So I could see the appeal of 2d10. If that's what the DM likes, they should give it a go.
Quote from: Theory of Games;1087359Longsword vs. Scimitar.
The d20 is flat as hell. 2d10 provides a nice curve for character success.
The d20 is hugely popular, but at a cost in utility. 2d10 allows a level of consistency roll-to-roll.
But what are your experiences with d20 and 2d10? Is the d20 really a keeper or not?
Your responses will say a lot about the impact D&D's d20 has had.
Thanks.
Speaking for myself, I don't see any utility in the 2d10 mechanic. I see a lot of dis-utility in it. I'm really indifferent to modern D&D's use of the d20.
If you're interested in 3d20 take mid, you might find this interesting:
Quote from: M&M 3E (OGL)Another means of adding a "bell curve" to M&M die rolling is by using high-low rolls: in place of any single d20 roll, roll three 20-sided dice and take the middle number (dropping the highest and lowest). If two or more dice come up the same number, use that number (since the third die is by definition higher or lower).
This method tends to produce results weighted more toward the middle range, with 10 as the average. Rolling a "natural 20" requires two of the dice to come up 20 (about a 1-in-400 chance or 0.25% rather than 1-in-20 or 5%). The same is the case for a "natural" 1. Generally, this means characters achieve the effect of their routine checks more often, but succeed at high Difficulty tasks less often, and have fewer critical successes or failures. High-low rolls involve more dice, but are only slightly more involved than rolling and reading a single d20.
Spending a hero point with high-low rolls allows the player to keep the best die roll of the three dice rather than the middle roll. So a roll of 4, 11, and 18 would normally count as an 11. Spending a hero point makes it an 18 instead. If all three d20 rolls are below 11, take the highest and add 10 to get the result of spending a hero point on that roll.
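If you're curious, the exact take-middle numbers are easy to enumerate. A small Python sketch (note the middle die actually comes up 20 a bit more often than the quoted 1-in-400 figure, since any two of the three dice can form the pair):

```python
from itertools import product

def middle_distribution(sides=20):
    """Exact distribution of the middle value when rolling three dice."""
    counts = [0] * (sides + 1)
    for rolls in product(range(1, sides + 1), repeat=3):
        counts[sorted(rolls)[1]] += 1
    outcomes = sides ** 3
    return [c / outcomes for c in counts]

dist = middle_distribution()
print(f"P(middle = 20) = {dist[20]:.3%}")   # 0.725% (58 of 8,000 outcomes)
print(f"P(middle = 10) = {dist[10]:.3%}")
```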
Quote from: Lunamancer;1087446Speaking for myself, I don't see any utility in the 2d10 mechanic. I see a lot of dis-utility in it.
What is the dis-utility in 2d10?
Quote from: Aglondir;1087376The d20 is good for combat, where the wild swings feel appropriate. And everyone loves rolling a natural 20.
But it's not so great for skill checks (where single rolls determine the outcome.)
Exactly this. Which is why I don't favor 1D20 roll under attributes for "skill checks". Instead I like to use a modified reaction roll table - roll 2D6, add attribute / situation mods, and generate a non-binary, bell curve result. But in combat, yeah, I'll just stick with the D20 - things should feel chaotic and unpredictable.
Quote from: Aglondir;1087468What is the dis-utility in 2d10?
Probabilities are less obvious.
Intervals are uneven.
Lower resolution in the middle where 80% of the game is played.
Just to name a few of the easy ones.
Quote from: Aglondir;1087376The d20 is good for combat, where the wild swings feel appropriate. And everyone loves rolling a natural 20.
But it's not so great for skill checks (where single rolls determine the outcome.)
These days I think I'm leaning more toward the opinion that it's skill checks that are the problem, not d20s.
Quote from: Beldar;1087397So many of these dice threads suggest that a bell curve is somehow superior to a linear distribution. It doesn't model reality either way.
Wrong.
The central limit theorem says that when many random distributions contribute to an overall result, the probability distribution tends to a bell curve.
https://en.wikipedia.org/wiki/Central_limit_theorem
The example that I show students is data for average speed of cars in different speed limit zones. So - that's averaged over many cars (brand, model, age, conditions), many drivers (age, sex, experience) and many weather conditions. The distributions - basically a bell curve.
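You can see the theorem at work with nothing but dice. A small Python sketch (exact convolution, no simulation): the more independent dice you sum, the more the probability piles up near the mean.

```python
def sum_distribution(n, sides=6):
    """Exact distribution of the sum of n dice, built by repeated convolution."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

for n in (1, 2, 5):
    dist = sum_distribution(n)
    flat = 1 / len(dist)   # what a flat distribution over the same range would give
    print(f"{n}d6: peak probability {max(dist.values()):.1%} vs {flat:.1%} if flat")
```

At 1d6 the peak matches the flat baseline exactly; by 5d6 the peak is well over twice the flat baseline, which is the bell forming.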
I need the "Math People" to come out here because --- 1d20 is evil. If it isn't --- okay.
I try not to use d20 but it calls me. I like the whole of what's been posted here. I like 'hands on my head looking for change with no clear vision' action.
*shaking two d10s*
I'd rather just go percentile rolls. :cool:
Part of the problem is the very bell curve you are touting.
In D&D, going 2d10 instead would totally break the system. You'd need a new system to accommodate the curve's peak. You might as well design your own game like others have.
Quote from: Omega;1087502Part of the problem is the very bell curve you are touting.
In D&D, going 2d10 instead would totally break the system. You'd need a new system to accommodate the curve's peak. You might as well design your own game like others have.
Well it would cause problems if you used it in combat.
If you just used it out of combat I don't really see any issues. The average is slightly higher - 11 instead of 10.5, but that's negligible.
There's no intrinsic reason why you must use the same die rolling mechanism in all circumstances.
More food for thought:
Quote from: Unearthed Arcana (OGL)Metagame Analysis: The (3d6) Bell Curve
Game balance shifts subtly when you use the (3d6) bell curve variant. Rolling 3d6 gives you a lot more average rolls, which favors the stronger side in combat. And in the d20 game, that's almost always the PCs. Many monsters--especially low-CR monsters encountered in groups--rely heavily on a lucky shot to damage PCs. When rolling 3d6, those lucky shots are fewer and farther between. In a fair fight when everyone rolls a 10, the PCs should win almost every time. The bell curve variant adheres more tightly to that average (which is the reason behind the reduction in CR for monsters encountered in groups).
Another subtle change to the game is that the bell curve variant awards bonuses relatively more and the die roll relatively less, simply because the die roll is almost always within a few points of 10. A character's skill ranks, ability scores, and gear have a much bigger impact on success and failure than they do in the standard d20 rules.
Quote from: Aglondir;1087534More food for thought:
So one beef I have with that, which I guess permeates this entire discussion, is if I were designing an RPG and statting things out with 3d6 in mind, I wouldn't assign the same stats I would have if I were writing with a d20 in mind.
Using 3-core AD&D 1E for example, because I'm very familiar with it:
The best armor in the game, plate & shield, gives AC 2. The worst, unarmored, AC 10. The average man (0th level human) needs a 19 to hit AC 2, an 11 to hit AC 10. On a d20 that means the best armor effectively blocks 90% of attacks (the average man being the baseline). And for a regular guy to hit someone unarmored? Even odds, 50/50.
If I were designing the game with 3d6 in mind, I would NOT preserve the existing THAC0's and AC's of the game. That's all just nonsense anyway. Abstract game concepts. What I really care about is what the numbers stand for. Which is to represent the best armor as repelling 90% of blows as a baseline, and no armor to be a 50/50 chance. Thus under a "d20" style system, but using a 3d6 mechanic, I'd probably give the 0th level human a +0 BAB, and the AC of platemail and a shield would be 15. Unarmored would be AC 11.
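That conversion can be done mechanically: hold the probability fixed and look up the nearest 3d6 target number. A quick Python sketch (function names are just for illustration; the 10% and 50% baselines are the ones above):

```python
from itertools import product

def p_3d6_at_least(target):
    """Exact chance that 3d6 totals `target` or more."""
    hits = sum(1 for r in product(range(1, 7), repeat=3) if sum(r) >= target)
    return hits / 216

def nearest_3d6_target(p_goal):
    """The 3d6 target number whose success chance is closest to p_goal."""
    return min(range(3, 19), key=lambda t: abs(p_3d6_at_least(t) - p_goal))

print(nearest_3d6_target(0.10))   # 15: plate & shield at +0 BAB
print(nearest_3d6_target(0.50))   # 11: unarmored, an exact 50/50
```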
This brings me to note a few things.
First, this only gives me 5 categories of AC instead of 9. If I like more detail or variety or a higher res game, this is certainly a problem.
Second, there is no bell curve. Under either system. The numbers on the dice roll, just like the game stats, are abstractions and meaningless by themselves. Their effects are what have meaning, and in this there are only two outcomes, hit or miss. Two outcomes will never look like a bell. And they will be just as even or uneven as they would under a linear mechanic.
Third, those lucky hits weak creatures deal to threaten PCs as mentioned in the quote would thus be just as common under either system.
Fourth, just as I would have adjusted my game numbers in my design with the die mechanic in mind, I anticipate players and DMs will also adjust.
So I feel there's a pretty major disconnect in this discussion between doing math and what the numbers actually mean.
Quote from: Lunamancer;1087556So I feel there's a pretty major disconnect in this discussion between doing math and what the numbers actually mean.
Not really. It's simply that most everyone is taking shortcuts in the discussion, which makes it appear to be a disconnect. It's modelling the reality of how competent a population is at a skill that prompts the desire for a distribution with a curve instead of a linear one.
In a curve, a modifier of +1 has a different meaning at different levels of competency. This happens to model somewhat well against the curve of competence in reality. Not perfectly, but better than a linear distribution with equal modifiers. There tends to be rapid learning at first, followed by slowing learning, followed by more and more work to eke out any significant advantage. If one wants to use small modifiers that change meaning over this range of competency, then a curve starts to look fairly good. Of course, you could get a very similar effect with a d20 or another linear roll by monkeying with the modifiers to make them relatively scarce for beginners and experts and common for those in the middle.
When I say I prefer the 2d10 for this model over the 3d6 or 1d20, that's shorthand for I find that the percentage chance of success using +1 modifiers on a 2d10 maps reasonably close to the skill curve that I want, both mathematically and aesthetically. (Not least because I want something akin to reality, but skewed to fit a particular style of fantasy.) Furthermore, while the odds are more difficult than the d20, at least the 2d10 odds map to 1% increments with each jump. Show a normal player a map of the results of 2d10 with the percentages for each roll, it intuitively makes sense to many of them. (That's why I prefer the 2d10 to 2d12. I like the spread and results of 2d12 even more, but not enough to give up the ease of understanding for most players.) A mechanic needs to balance all of these concerns in a way pleasing to the players.
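For anyone who wants to see that map, the 2d10 target-number odds take a few lines of Python to enumerate. Note how much the value of a +1 varies across the range, unlike the d20's constant 5%:

```python
def p_2d10_at_least(target):
    """Exact chance that 2d10 meets or beats `target`."""
    hits = sum(1 for a in range(1, 11) for b in range(1, 11) if a + b >= target)
    return hits / 100

for t in range(2, 21):
    step = p_2d10_at_least(t) - p_2d10_at_least(t + 1)
    print(f"need {t:2d}+: {p_2d10_at_least(t):4.0%}  (a +1 here shifts the odds by {step:.0%})")
```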
Quote from: Steven Mitchell;1087575Not really. It's simply that most everyone is taking shortcuts in the discussion, which makes it appear to be a disconnect. It's modelling the reality of how competent a population is at a skill that prompts the desire for a distribution with a curve instead of a linear one.
It's possible. But your further explanation also seems to show that disconnect, presumably without your even noticing.
QuoteIn a curve, a modifier of +1 has a different meaning at different levels of competency. This happens to model somewhat well against the curve of competence in reality. Not perfectly, but better than a linear distribution with equal modifiers. There tends to be rapid learning at first, followed by slowing learning, followed by more and more work to eke out any significant advantage.
I don't know that's true.
First, suppose I am using a 2d10 system. Say my initial skill is such that I need to roll a 20 to succeed. My next point of skill makes me 2% better. The next one after that makes me 3% better. The next one after that makes me 4% better, and so on. It's not until I get to where I need an 11 that each skill increase comes with diminishing returns to my probability of success. Personally, I don't consider it an automatically bad thing for skills to be harder to learn at lower levels. If anything, it might deter building characters against archetype, which if you're into niche/archetype protection could be a good thing. My point is, half of the curve does the exact opposite of what you describe, and what you seem to be saying is good and "realistic". If I adopt your values and preferences, I'm stuck concluding that this idea is just as dumb as it is good.
Second, it's extremely common in RPGs for successive higher skill points to cost more and more. The distribution of skills themselves isn't linear, and so there's not really a need for this to be baked into the dice rolls.
Third, even if you don't design for higher and higher costs of learning, it's highly unlikely that the collective free choices of players will result in a linear distribution.
Fourth, diminishing returns happen even when you don't design them in. I often point out that 1E advancement really flat-lines after name level. A 9th level fighter has a 12 THAC0. If he's got a total of +5 in hit bonuses from a combination of strength and magic items, which most people agree is reasonable if not humble for that level (a +3 sword and 18/70 STR will do it), that means he hits AC 5 with a 2 or better on the d20. And hit probabilities don't get any better from then on. I've done a statistical analysis of the Monster Manual. The mean, median, and mode AC are all 5. This includes some of the more powerful ones like a hydra or a stone golem. Against more than half the monsters, the fighter gets no better chance of hitting as he gets better. And as he does get better, the list of monsters against which his attacks strike with greater accuracy gets smaller and smaller. Even though the THAC0 (that silly nonsense abstract game stat) continues to advance at the same rate, how meaningful that is in terms of how much it actually benefits the character over the course of the adventure keeps diminishing.
Fifth, the crazy thing about probabilities is they're necessarily bound by 0% and 100%. As you approach either extreme, the curve goes horizontal. Which is unlike the rest of the curve. On a 2d10 system, if I need a 22 to hit, my probability is the same as if I need a 21. 0%. Or 1% if you rule a 20 is an auto success. But regardless of whether it is 0% or 1%, the curve is linear and horizontal at that point. But this also means even a linear mechanic isn't really linear, as the line has to bend near the upper and lower bounds. The dirty little secret is, regardless of the mechanic, they all roughly replicate an "S"-curve when graphing the probability of success against incremental adjustments, left-to-right lowest-to-highest probabilities for success. In the big picture, it's all the same. All we're ever doing is nitpicking the smallest differences.
QuoteWhen I say I prefer the 2d10 for this model over the 3d6 or 1d20, that's shorthand for I find that the percentage chance of success using +1 modifiers on a 2d10 maps reasonably close to the skill curve that I want, both mathematically and aesthetically. (Not least because I want something akin to reality, but skewed to fit a particular style of fantasy.) Furthermore, while the odds are more difficult than the d20, at least the 2d10 odds map to 1% increments with each jump. Show a normal player a map of the results of 2d10 with the percentages for each roll, it intuitively makes sense to many of them. (That's why I prefer the 2d10 to 2d12. I like the spread and results of 2d12 even more, but not enough to give up the ease of understanding for most players.) A mechanic needs to balance all of these concerns in a way pleasing to the players.
Well, unfortunately when it comes to balancing things out, 2d10 lacks something that I find really important. Any even number of dice lacks a 50/50 point. Which again, when I get away from math and concern myself more with what meaning those numbers have in the game, if I'm trying to adjudicate something that's maybe really complicated or not covered by the rules, the 50/50 point is a good starting point. In my break down of the 1E hit probabilities, the 0th level human vs AC 10 is just that. We're taking basest fighting skill against the basest defensive capacity and we're saying, well, all things equal, it's 50/50. I think the 50/50 point is vital to making sure the abstract model actually connects to actuality.
Quote from: Anselyn;1087499Wrong.
The central limit theorem says that when many random distributions contribute to an overall result, the probability distribution tends to a bell curve.
https://en.wikipedia.org/wiki/Central_limit_theorem
The example that I show students is data for average speed of cars in different speed limit zones. So - that's averaged over many cars (brand, model, age, conditions), many drivers (age, sex, experience) and many weather conditions. The distributions - basically a bell curve.
I'm confused. I don't disagree with your assessment at all. Yet I fail to see how either type of dice mechanic models reality better than the other. If you average all the attack roll results out with either method you will get a distribution something like a bell curve. It has nothing to do with one being more realistic than the other. I would suggest that it is not possible for any die rolling rule to be "realistic" anyway.
Quote from: Beldar;1087601I'm confused. I don't disagree with your assessment at all. Yet I fail to see how either type of dice mechanic models reality better than the other. If you average all the attack roll results out with either method you will get a distribution something like a bell curve. It has nothing to do with one being more realistic than the other. I would suggest that it is not possible for any die rolling rule to be "realistic" anyway.
Technically Anselyn is actually wrong on this. So are a lot of people. Here's an example illustrating why taken directly from this thread:
Quote from: Chris24601;1087396You could, in theory, drop the ogre with one hit; a natural 20 and a damage roll of 6 or better on the die will drop it. But the odds of this are very, very slim (far less than the 5% chance of simply rolling a natural 20).
But on average, it's going to take you around three hits to drop the ogre. That means rolling to hit probably about six times, but if you roll well you might need only four, and if you roll poorly you might need eight or nine rolls.
If you repeated that battle twenty times, you'd see that the overall results of the battles (whether you win, how many turns it takes to win, how much damage you take in the course of winning) will fall into a bell curve distribution because each battle takes multiple rolls to resolve.
If you were to graph out how many rounds it takes the ogre to go down, I assure you, you will NOT end up with a bell curve distribution. At a minimum, it's going to take one hit, while there is no theoretical maximum. So there's no symmetry. And that's fine, because bell curves can be perturbed. But there's also only a tail on one side. I doubt the left side even starts to tail out. It lacks one of the key inflection points characteristic of a bell curve. This is not necessarily a bad thing. Bell curves in the real world are not as common as assumed. This example is actually closer to a perturbed Pareto distribution than a bell curve.
[attached image: distribution graph]
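The shape is easy to sanity-check with a quick Monte Carlo in Python (using the ogre stats assumed upthread; treat the exact numbers as a sketch):

```python
import random

def battle_length(hp=18, to_hit=11, rng=random):
    """Attack rolls until the ogre drops: hit on 11+, 1d8+3 damage, nat 20 doubles."""
    rolls = 0
    while hp > 0:
        rolls += 1
        attack = rng.randint(1, 20)
        if attack >= to_hit:
            damage = rng.randint(1, 8) + 3
            if attack == 20:
                damage *= 2
            hp -= damage
    return rolls

random.seed(0)
lengths = sorted(battle_length() for _ in range(100_000))
low, median, high = lengths[0], lengths[len(lengths) // 2], lengths[-1]
print(f"min {low}, median {median}, max {high}")
```

The right tail (max minus median) runs far longer than the left (median minus min), which is exactly the one-sided shape described above.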
Lunamancer, you are assuming a 2d10 system where the range starts at the bottom end. If instead, you set the base target numbers near the middle of the curve, then a combination of difficulty and skills interacting will have the effect I described.
As for the 50% in the middle thing, I don't miss it much. Instead, I prefer something around the two-thirds mark. 2d10 versus base target of 10 (then adjust for mods and difficulty) sets it at 64%, close enough for me. Also, I'm going to handle extreme edge cases via GM adjudication, not a roll. For rolls, I'm only interested in the things that can reasonably map into the mechanic chosen. For 2d10, that means that the bottom 2 or 3 and the top 2 or 3 parts of the range will seldom be used in practice.
The advantage to dice that produce a distribution curve (like 2d10) tends to be when you're comparing two or more individuals with different levels of ability.
Take a party of 5 individuals. Let's say that you're attempting to open a lock and that anyone can make the attempt against DC 15. One member of the party is skilled (+5), and the other four are not (+0). In this case, what are the odds that the skilled person fails but one of the unskilled people succeeds?
We know that the skilled person needs to roll a 10 or better (55% on a d20). 45% of the time, they fail. Assuming they rolled a 9 or lower, there is a 14/20 chance (70%) that each unskilled individual fails, and a 30% chance that they succeed. The odds that all four fail are (.70 * .70 * .70 * .70) 24%; the odds that at least one succeeds are 76%.
With 2d10, the skilled character has a 64% chance of success (36% chance of failure) and each of the other characters has a 21% chance to succeed and a 79% chance to fail. The odds that all four fail are 39% - only a 61% chance that one of them succeeds.
In the first case (1d20) having four unskilled people make the attempt is worth more than having one skilled person make the attempt. In the second case, having one skilled person is worth more than having four unskilled people make the attempt.
Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.
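Those numbers are straightforward to reproduce; a small Python sketch:

```python
def p_d20(bonus, dc):
    """Chance that 1d20 + bonus meets or beats dc."""
    return max(0, min(20, 21 - (dc - bonus))) / 20

def p_2d10(bonus, dc):
    """Chance that 2d10 + bonus meets or beats dc."""
    return sum(1 for a in range(1, 11) for b in range(1, 11)
               if a + b + bonus >= dc) / 100

for name, p in (("1d20", p_d20), ("2d10", p_2d10)):
    expert = p(5, 15)                        # the +5 character vs DC 15
    any_of_four = 1 - (1 - p(0, 15)) ** 4    # at least one of four +0 characters
    print(f"{name}: expert {expert:.0%}, four unskilled {any_of_four:.0%}")
```

It prints 55% / 76% for the d20 case and 64% / 61% for 2d10, matching the figures above.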
Quote from: deadDMwalking;1087643Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.
Which was one of the reasons the Take 10 and Take 20 rules were introduced -- so that performance levels at various levels of ability could be standardized when there was no drama factor about the challenge. If you included all successful uses of Take 10 and Take 20 in your set of d20 "rolls", I suspect you'd get something much more like a bell curve.
I myself am a fan of the 2d10 bell curve simply because I like more predictability in a roll than a completely flat 1-20 distribution gives, while still liking some unpredictability. I've gradually come to the belief that bell curves also work best when you can adjust the effort or commitment you put into a task in a way that measurably affects your chances of success -- for gambling to be a game of skill at all you have to know when and how to shift the odds in relation to the size of your stake.
Quote from: deadDMwalking;1087643Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.
How do you figure? I could generate similar probabilities using a d20. The difficulty of the lock is 17 rather than 15. To be "skilled" means having +9 rather than +5. That would be the real apples-to-apples comparison. Not locking in the stats and just thoughtlessly computing the probabilities from there. I covered this in an earlier post in this thread not too long ago:
Quote from: Lunamancer;1087556If I were designing the game with 3d6 in mind, I would NOT preserve the existing THAC0's and AC's of the game. That's all just nonsense anyway. Abstract game concepts. What I really care about is what the numbers stand for. Which is to represent the best armor as repelling 90% of blows as a baseline, and no armor to be a 50/50 chance. Thus under a "d20" style system, but using a 3d6 mechanic, I'd probably give the 0th level human a +0 BAB, and the AC of platemail and a shield would be 15. Unarmored would be AC 11.
I mean it's not like if I were trying to convert something over from Vampire: the Masquerade to Call of Cthulhu that I'd look and see, "Oh, 5 dots. That must mean the same thing as 5 points of skill."
Can you make the same case placing probabilities first and then choosing stats to conform to the probabilities, allowing the stats to be different as one would expect them to be under different systems?
Quote from: Lunamancer;1087631Technically Anselyn is actually wrong on this. So are a lot of people. Here's an example illustrating why taken directly from this thread:
I agree. I wasn't suggesting that the overall combat outcome probabilities would be a bell curve - although I agree the CLT might pull you towards that conclusion, and I may have implied as much. I was thinking more of the "realism" of modelling the average probability to hit of a fighter given all the variables that might contribute to it. Others have contributed on this point. I would suggest that a bell curve will usefully clump the results around the user's standard ability/skill on some numerical scale, which might be desirable.
Obviously, the idea of realism depends on modelling how the world works in the modeller's view. I have been led to believe (sorry - can't cite[1]) that gaining a skill typically takes someone from not being able to do something to being able to do it - at a threshold level - and then may increase ability later. Even for something like weightlifting, which might sound like "you've got the muscles or you haven't", initial training will give a strong boost in the weights you can lift. People don't tend to have a 50% chance of completing a test - they jump from needing 9+ on 2d6 to 5+ on 2d6, in Traveller terms.
Having said that (sorry - 1 academic, 3 opinions), I'll be marking 150 maths exams later this week (and next ....) and someone will probably score 50% on the exam. Now, we do give partial credit but someone on 50% probably won't have scored 50% on all questions - they will have all the marks for some questions and no marks for others, and some partial answers. I guess that means they have a 50% ability in the maths for my course?
[1] Not Stephen Mitchell above - but a friend, sadly no longer with us, who edited a book on this topic for his publisher and then told me about it. I guess it was a sports science book.
Gearheads are very concerned about how something makes its output. But most people consume based solely on how enjoyable consuming the output is. They don't care how the sausage is made.
The math behind the system someone uses - does it result in a thrill roller coaster at the table? That's all that matters. And that's not really in question with the d20.
Quote from: EOTB;1087721Gearheads are very concerned about how something makes its output. But most people consume based solely on how enjoyable consuming the output is. They don't care how the sausage is made.
The math behind the system someone uses - does it result in a thrill roller coaster at the table? That's all that matters. And that's not really in question with the d20.
I agree, actually. For example, it looks to me that D&D 5e with its "bounded accuracy" squashing down the bonuses on the d20 is probably "swingier" in play than Pathfinder but if speed of play and going with the flow of the rolled outcomes is the heart of the matter then that's fine.
Quote from: Lunamancer;1087716How do you figure? I could generate the similar probabilities using a d20. The difficulty level of the door is 17 rather than 15. To be "skilled" means having +9 rather than +5. That would be the real apples-to-apples comparison.
I chose numbers that are in line with low-level 3.x characters and old-school style small bonuses for stats.
But sure, if you have a +9 on a skill and someone else has a +0, and the DC is 17, then you'll fail more than 1/3 of the time (on a roll of 1-7). The four people that have +0 will fail on a roll of 1-16, or 80% of the time.
When you fail, your companions have a better than even chance (59%) of succeeding at the lock that you failed at if they each make their own attempt in sequence.
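That 59% figure is easy to verify with a couple of lines of Python (the +0 bonus and DC 17 come from the example above):

```python
# Four unskilled (+0) characters trying a DC 17 lock in sequence.
# Each needs a natural 17+ on a d20, i.e. a 4/20 = 20% chance.
p_single = 4 / 20                   # P(one +0 character succeeds)
p_any = 1 - (1 - p_single) ** 4     # P(at least one of four succeeds)
print(f"{p_any:.4f}")               # 0.5904 -- the "better than even" 59%
```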
When skilled people fail and unskilled people succeed, you have to ask, what does it mean to be skilled?
Now there are mechanics like 'take 10', which would be the smart decision if you know the TN; but if you suspect that a 19 isn't likely to be sufficient, you probably wouldn't do it.
Edit -
When you use 2d10 instead of a d20, you tend to encourage (slightly) more median results and discourage (somewhat significantly) extreme results. Put another way, if you have a +9 bonus you are equally likely to roll a 1 or a 20; you're equally likely to get a result of 10 or a result of 29; when you have 2d10, you are more likely to end up with a result in the middle of that range because you're more likely to roll a median value. You are 10x more likely to end up with a result of 20 than a result of 11 or 29 (the minimum or maximum).
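For anyone who wants to see the shape, a tiny enumeration of 2d10 with the +9 bonus from the example makes the 10x claim concrete:

```python
from collections import Counter

# Tally every 2d10 outcome and add the +9 bonus from the example above;
# results then run from 11 (double 1s) to 29 (double 10s).
dist = Counter(a + b + 9 for a in range(1, 11) for b in range(1, 11))
print(dist[20], dist[11], dist[29])  # 10 1 1 -> the middle is 10x as likely
```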
If the two sides are just walking up to each other & swinging, then the difference between linear and bell curve doesn't matter that much.
I mean; it matters when designing the math of the system, but it matters minimally in play so long as its done right.
HOWEVER
If a system is designed so that there are a lot of ways for PCs to add/subtract modifiers on the fly - the two have a VERY different feel.
There are many systems which allow you to add bonuses to your die roll from a pool - Action Points/Grit/Whatever - and for 1d20, so long as you aren't already hitting on a "2", adding more to your roll gives you the same extra damage on average. But if you're rolling a 3d6, you get more from each Action Point spent until you're hitting on a 10, and then you start getting diminishing returns.
The same is true for other ways to modify the roll. Such as a system with firearms and drastic ranged accuracy fall-off. With 3d6, you'd give up attacks to close the distance until you can get it so that you can at least hit on a 12 or 13, while with 1d20, closing each increment further does about the same thing. (unless there are extra rules involved - which adds complexity)
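The diminishing-returns point can be shown with a short enumeration (the target numbers here are arbitrary, just to illustrate the shape; on 1d20 every +1 would be a flat 5% instead):

```python
from itertools import product

# Chance to hit a target number on 3d6, and the marginal gain from each +1.
# The value of a +1 grows as the needed roll approaches 10-11 (the hump of
# the curve), then diminishes again past it.
ROLLS = list(product(range(1, 7), repeat=3))  # all 216 outcomes of 3d6

def p_3d6(target):
    return sum(sum(r) >= target for r in ROLLS) / len(ROLLS)

for tn in range(14, 8, -1):              # e.g. closing range point by point
    gain = p_3d6(tn) - p_3d6(tn + 1)     # what the last +1 bought you
    print(tn, f"{p_3d6(tn):.3f}", f"+{gain:.3f}")
```

Each printed line shows the hit chance at that target number and the increment the last +1 was worth; the increments grow toward target 10-11 and shrink after it, exactly the "spend until you hit on a 10, then diminishing returns" behavior described above.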
So - I'll say that I generally prefer some sort of bell curve if it's done properly and taken advantage of, but it's trickier for the designer to math it properly, so a linear roll like 1d20 is probably the safer route. (And perfectly acceptable.)
And I do NOT think that taking a 1d20 system like a D&D variant and trading it out for 2d10 or 3d6 is a good idea. You'd have to re-math so much of the system that you'd be better off starting fresh from a clean slate.
Quote from: EOTB;1087721Gearheads are very concerned about how something makes its output. But most people consume based solely on how enjoyable consuming the output is. They don't care how the sausage is made.
The math behind the system someone uses - does it result in a thrill roller coaster at the table? That's all that matters. And that's not really in question with the d20.
Yes. Then if a subset of players at the table want something a little different, find the compromise that works for everyone.
Then there is the fact that some version of D&D is already fine for a large part of our play, but in the process gives half the group their fill of d20. Some of us can play fantasy all the time, but we want variety in the systems with which we play it. Get off the roller coaster and jump on the Ferris wheel. If I'm going to use d20 in a home brew system, I might as well not bother.
I think before anyone switches to a new dice mechanic for gameplay; they should run some test scenarios first, to get an accurate representation of how it will work in practice.
I think D20 Roll Under 6 Ability Scores is the simplest mechanic I've tested and played.
I think 2d6 with small modifiers is the most fun mechanic I've tested and played.
D20 Roll High with modifiers is as common as sliced cheese. It's the industry standard.
I haven't tried 2d10, but I wouldn't avoid a game that used it. As long as it resulted in a good play experience, who cares?
Quote from: deadDMwalking;1087731I chose numbers that are in line with low-level 3.x characters and old-school style small bonuses for stats.
Sure, and that's exactly the bone of contention I was talking about. You preserved meaningless, abstract numbers when you switched up the dice. You did not preserve the probabilities--the meaning for which the stats are supposed to stand.
QuoteWhen skilled people fail and unskilled people succeed, you have to ask, what does it mean to be skilled?
Well, in true old school fashion, the probability of an unskilled character picking a lock was zero, unless the lock was absurdly easy to pick. But we could talk about attack rolls. In old school, even the unskilled get a chance at hitting. So let's go back to my example about hurting a guy with plate and a shield. Only 10% of attacks by unskilled people--0th levels--are successful. So out of four such attacks, there is a 35% chance that at least one of them gets a hit in. A "skilled" character--a 5th level fighter--needs a 14 to hit, and so is 5 points better than the unskilled guy, and he happens to have a 35% chance to hit.
So it does work. For a task so difficult for unskilled persons that it takes 4 of them to do the job of one "skilled" person where "skilled" means having a +5, the appropriate DC is 19. You chose a much easier one. If 5 points separates skilled from unskilled, then setting a DC 4 points too low almost puts you at the point of saying "this task doesn't really take skill to complete." It shouldn't be surprising, then, when unskilled persons can frequently succeed when skilled persons fail. The task just doesn't require a whole lot of skill.
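Those percentages can be sanity-checked the same way (the hit numbers are taken from the plate-and-shield example above):

```python
# 0th-level attackers hit the plate-and-shield target only on a 19-20 (10%);
# a 5th-level fighter hits on 14+ (35%).
p_unskilled = 2 / 20
p_four = 1 - (1 - p_unskilled) ** 4   # at least one of four 0th-levels hits
p_fighter = 7 / 20
print(f"{p_four:.4f} vs {p_fighter:.2f}")  # 0.3439 vs 0.35 -- roughly equal
```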
QuoteWhen you use 2d10 instead of a d20, you tend to encourage (slightly) more median results and discourage (somewhat significantly) extreme results. Put another way, if you have a +9 bonus you are equally likely to roll a 1 or a 20; you're equally likely to get a result of 10 or a result of 29; when you have 2d10, you are more likely to end up with a result in the middle of that range because you're more likely to roll a median value. You are 10x more likely to end up with a result of 20 than a result of 11 or 29 (the minimum or maximum).
And this would be another major bone of contention, because my response to this paragraph would be "No, you do not." Sure, the dice produce more median numbers. But again, those numbers are meaningless abstractions, and the number generated is NOT the result. The result is pass or fail. That's where the meaningful part is. And you can't ever make a bell curve when you only have two possible outcomes.
My overriding theme to basically everything in this thread is you have to keep straight the difference between the abstract and the actual.
Quote from: deadDMwalking;1087731Now there are metrics like 'take 10' which would be the smart decision if you know the TN, but if you suspect that a 19 isn't likely to be sufficient, you probably wouldn't do it.
Players should be informed of the target numbers - unless possibly they're making an opposed check. They should also know in advance what the consequences of failure are likely to be (this pretty much negates the need for 'fail forward').
Because if there's one thing that ought to characterise being skilled, it's knowing whether that cliff-face actually lies within your capability or not - and knowing the difference between backing down because it's too risky and continuing with the knowledge that a single mistake might result in a plummet to your death.
Quote from: deadDMwalking;1087731Now there are metrics like 'take 10' which would be the smart decision if you know the TN, but if you suspect that a 19 isn't likely to be sufficient, you probably wouldn't do it.
You can also take 20, if you're not pressed for time / in the middle of combat / otherwise distracted.
Quote from: TJS;1087767Players should be informed of the target numbers - unless possibly they're making an opposed check. They should also know in advance what the consequences of failure are likely to be (this pretty much negates the need for 'fail forward').
Because if there's one thing that ought to characterise being skilled, it's knowing whether that cliff-face actually lies within your capability or not - and knowing the difference between backing down because it's too risky and continuing with the knowledge that a single mistake might result in a plummet to your death.
Why should players know that, as opposed to having a really strong hunch? (That is, something short of knowing the underlying numbers with absolute certainty, but having a reasonable guess of, "this is probably beyond a normal human and I could die if I attempt it... but I'm not a normal human.")
EDIT: And that's not saying you're wrong. I'm just curious why players should know these things. (Some systems make that an assumption, others assume just the opposite.)
The main argument I see for always giving specific numbers is, as stated, people who are trained in something have a good idea of what they can or can't do. If you're good enough to have a modifier in climbing, you've climbed enough to judge what is and is not pushing the limits of your ability.
As two alternate compromise positions, let me suggest:
1) A PC should know when they'd be able to succeed if they took 10 on the check. If they can take 10, it's something they're skilled enough at doing that it's routine for them. Routine, and therefore easy to recognize (the GM could even use the phrase "You've done things like this all the time.").
2) Only give specific DCs to people who are trained in the skill. A talented amateur might be good at climbing up a rocky cliff because of their natural strength, but someone with training can accurately judge just how difficult the climb will be and pick out problems that are beyond their level of skill.
The former is good for a speedy heroic game because it skips the dice rolling when it doesn't matter (routine checks when not under pressure), but keeps the dice rolling for when it matters (pressure situations and things normally beyond your ability), while leaving the more difficult DCs a little more open as to precisely how difficult they are. It's basically just extending the "Passive Perception" rules to general tasks... you notice or successfully perform anything you'd routinely notice or perform and move the game right along.
The latter is good for a campaign where you really want to emphasize the difference between trained and untrained skill use. The untrained skill user has to guess about whether they can take 10 or not (and so might roll and fail when they didn't need to), while the trained skill user just knows if they have to roll for the check or not (effectively chopping the bottom half off the probability curve and replacing it with a 100% line until they can't take 10 on something, then dropping off from there).
I'm cool with only knowing the DC being something the trained PC can do.
In general, I don't see a good reason for NOT telling the PCs the DCs. What are the benefits of hiding it? Earlier in this thread someone was making the argument that a benefit of a flat distribution is knowing the odds of success. You can't do that if you don't have the information you need.
More to the point, telling the DC in advance means transparency. It's like making rolls on the table. If you know the DC in advance everyone knows the GM can't fudge afterwards to get the result they want. Which is why I said you should also know the result of failure.
If there's a 25% risk of the player falling off a cliff-face to their doom, then let's have that out in the open for everyone to be committed to before the dice are rolled. That way the player is informed and the GM isn't sitting there scratching their head going "ummm". And there's no need to 'fail forward'.
I can imagine a few scenarios where there are benefits to hiding the target. Factors that the characters do not know about, for one. (Ongoing magic that has nothing to tip the players off; however, before throwing out "shenanigans!", assume the players knew they were going into an area where that was possible, so there's no accusation of pulling something out of the GM's ass just to make life worse for a player.)
Most other scenarios would revolve around keeping tension up instead of devolving the game into a pure math exercise (which is not to say math isn't fun, just that once it's a pure math exercise then it's really about beating the system and not playing the game... unless the game is about beating the system, in which case, awesome! Goal met).
QuoteMore to the point, telling the DC in advance means transparency. It's like making rolls on the table. If you know the DC in advance everyone knows the GM can't fudge afterwards to get the result they want. Which is why I said you should also know the result of failure.
I get that players owe each other and the GM transparency, as they are bound wholly by the rules of the game, but in most games, the GM owes the players no such complete transparency. Some directly state that, others do so indirectly, and only a minority that I can think of affirmatively state that the GM owes transparency back to the players (and most are either what this board would consider storygames OR they are really a board game that has some RPG elements, like Shadows over Hammerhal).
QuoteIf there's a 25% risk of the player falling off a cliff-face to their doom then let's have that out in the open for everyone to be committed to before the Dice are rolled. That way the player is informed and the GM isn't sitting there scratching there head going "ummm". And there's no need to 'fail forward'.
This doesn't solve the GM going, "ummm." It just means that the "ummm" may mean, "well, I didn't expect you to make that kind of a choice, so we're either calling this campaign because you died or I have to go back to the drawing board for a bit," or any number of other answers that are still in "ummm." One other answer could be, "You didn't know there was a friendly golden eagle nearby who saved you from death, but he can't get you up the cliff because you're too heavy!" that makes sense within setting but the GM just pulled out of his ass from an "ummm." Which would also still be a failing forward type mechanic.
Fail forward really isn't connected to this issue. It can happen with or without transparency, and the reasons given to justify fail forward still exist in this model. (Whether or not one feels those issues are valid is a different topic, and approaching saying that people who adjust their dice rolls behind the screen from time-to-time are not actually playing D&D because there's only one true way to do so, which I do not believe you're saying so please do not read that wrong.)
I'm of the camp that the GM should aspire to give consistent descriptions of things in a way that helps the players gauge the difficulty, should be on the watch for confusion on the players' part, and then the players should ask for clarification when they are unsure. If that's done reasonably well, then the players will have a fair idea of target numbers without knowing exactly *. On those rare occasions when all that fails, then either something serious happened as a consequence or it didn't. If it didn't, laugh off the adverse results of the misunderstanding. (They usually are quite hilarious.) If it was serious, retcon or otherwise mitigate the consequences as needed.
* This for me is the optimum state of the game, for a variety of reasons. I'm willing to do the occasional OOC retcon or other fix in order to get it 98%+ of the time with minimal intrusion of the math. Not least of my reasons is that players good at math will take solid information and turn it into the math in their heads, as part of their calculations of risk. The players not so good at math will not be as confused, because they'll focus on the problem as it exists from their characters' perspective.
When it comes to "fail forward" or, more accurately, roadblock situations, we basically break it down into life threatening (ex. failed Climb checks) and non-life threatening (ex. failing to notice a secret door).
For non-life threatening roadblocks failure is failure. It's a sandbox world so failing to find the Necromancer's secret lair isn't the end of the campaign; it just means he'll survive and continue to be a threat.
For life threatening ones though we remember that Hit Points are more than meat, but include stamina, morale and luck. So instead of falling to your death on a failed climb check you lose hit points as you become fatigued, lose confidence and push your luck... then try again. You still might fall to your death if you fail enough checks on the climb, but it's not a binary pass/fail anymore so if the players have any common sense they'll figure out the obstacle is beyond their skill level and turn back (I don't require checks to back down the way you came) or, worst case, stay where they are while the other PCs figure out a rescue.
Removing the binary outcome of each check, also means things like climbing or swimming across obstacles where multiple checks are required to cover the distance have more bell curve-like results.
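A quick simulation illustrates the effect. Every number here (DC 15, +5 climb bonus, 20 hp, d6 fatigue loss, 4 checks to top the cliff) is invented for illustration, not taken from any particular system:

```python
import random

# Sketch of the "lose hit points instead of falling" climb rule described
# above. A failed check costs fatigue/morale/luck (hp) rather than killing
# outright; only running completely dry means a fall.
def climb(hp=20, bonus=5, dc=15, checks_needed=4, rng=random):
    made = 0
    while made < checks_needed:
        if rng.randint(1, 20) + bonus >= dc:
            made += 1                  # progress up the cliff
        else:
            hp -= rng.randint(1, 6)    # fatigue instead of a fall
            if hp <= 0:
                return False           # luck finally ran out
    return True

random.seed(1)
trials = 10_000
rate = sum(climb() for _ in range(trials)) / trials
print(rate)  # survival well above the 55% a single pass/fail check gives
```

Because the outcome now depends on many d20 rolls rather than one, the distribution of results clusters like a bell curve even though each individual roll is flat.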
The same applies to our d20-based reaction rolls for social encounters. Each successful check only moves the attitude one step and each additional check uses the original attitude of the subject. Going from wary to helpful takes three checks and failures either increase the DC of subsequent checks (a simple failure... maybe time to quit while you're ahead or at least haven't made it worse) or drop their attitude a stage and increase the DC of further efforts (a failure by 5 or more).
In fact, I actually can't think of many situations in my non-combat mechanics where single d20s are used to completely resolve something of any importance.
Maybe on-the-spot knowledge checks? But even there you automatically know anything a "Take 10" result would give you so any check is just to see if you ever picked up a specific piece of esoteric knowledge beyond the typical and each character in the group can make that check so it's a bell curve from the perspective of what the group knows. Also, failure rarely means anything more than the PCs having to go off what they're presently observing (so you see a big flaming brute, but you don't know what it's called or whether it's vulnerable to cold because it needs heat to survive or is resistant to cold because it's so hot that any cold attack is going to be like an ice cube tossed into a blast furnace).
You can't discount the effects of the PCs generally being in a group too. Sure, a search check to look for clues is a single check (if your passive Perception didn't spot it already), but a party of four is making four checks (if they're bothering to help) to determine the outcome.
The point is... even though the core mechanic is a 1d20, a single d20 roll rarely determines the absolute outcome of a situation... which turns the outcome into something closer to what you're looking for with a 2d10 check already.
Quote from: Steven Mitchell;1087961I'm of the camp that the GM should aspire to give consistent descriptions of things in a way that helps the players gauge the difficulty, should be on the watch for confusion on the players' part, and then the players should ask for clarification when they are unsure. If that's done reasonably well, then the players will have a fair idea of target numbers without knowing exactly *.
This is usually where I tend to fall, but I vary with what the system tells me should be the standard for that system (when it does). So, pretty much in accord with this.
Quote from: Chris24601;1087966When it comes to "fail forward" or, more accurately, roadblock situations, we basically break it down into life threatening (ex. failed Climb checks) and non-life threatening (ex. failing to notice a secret door).
For non-life threatening roadblocks failure is failure. It's a sandbox world so failing to find the Necromancer's secret lair isn't the end of the campaign; it just means he'll survive and continue to be a threat.
I like that description and those two categories. They seem to encapsulate things pretty well. In a sandbox setting, for non-life threatening situations, yeah... fail forward is usually unnecessary because, "what is forward?" And in a more plot-centric setting, it's generally not a good idea to hang non-life threatening situations on a single roll if they are critical to forwarding the game. Unless, of course, you have backups and alternatives (i.e., failing forward... which can include failing utterly in this current task and the plot taking a turn no one expected because of it).
Either way seems to deal with the 1d20 vs. 2d10. If the 1d20-single-check is that critical to keeping the game moving, there may (repeat: may) be a structural issue with the scenario that should be examined or mitigated by the GM. And I'm not sure the answer is moving to a more bell-like distribution, because outside-expectation cases are not simply edge cases.
Quote from: Tanin Wulf;1087972Either way seems to deal with the 1d20 vs. 2d10. If the 1d20-single-check is that critical to keeping the game moving, there may (repeat: may) be a structural issue with the scenario that should be examined or mitigated by the GM. And I'm not sure the answer is moving to a more bell-like distribution because outside-expectation cases are not simply edge cases.
I'm pretty sure it's not the answer at all. Just because you can only hit the roadblock if you roll a 2 or less doesn't make 2d10 a cure-all. It just means that only 1-in-100 will hit the roadblock instead of 1-in-10.
As an example from a badly written Living Arcanis mod, the entire adventure was gated behind a ward that could only be bypassed with a dispel magic. The check wasn't horrible. At the level it was designed for you only needed a 10 or better. But it was contingent upon A) having a spellcaster with at least 3rd level spells in the party, B) that spellcaster actually having dispel magic prepared and C) that the spellcaster wouldn't fail his dispel magic checks.
In the case of my group, I had dispel magic prepared (three times actually), but my dice didn't cooperate and so, after blowing two of the three (and presuming there was actually more to the adventure since it was a relatively small tower you run across on a patrol) we moved on. Nope... inside the tower was the adventure so I wasted an entire Origins session on what ended up being a 20 minute session (10 of which was introductory text) with no encounters at all. No XP, no gp, no items... just wandered into the woods, found a tower and because of two bad rolls, we were done.
A bell curve wouldn't have fixed that module. A ward with a secret password that could be guessed from a take-20 search of the surroundings (or an encounter because the guys inside opened the door to come out and kill the intruders and it was left open afterwards) would have fixed the module.
Quote from: Chris24601;1087995A bell curve wouldn't have fixed that module. A ward with a secret password that could be guessed from a take-20 search of the surroundings (or an encounter because the guys inside opened the door to come out and kill the intruders and it was left open afterwards) would have fixed the module.
A bad module is a bad module - I get that and I'm not disagreeing.
Assuming you needed a 10 or better, and we're not otherwise adjusting probabilities by reducing bonuses or what have you, there is a difference with the bell curve.
On a d20, rolling 1-9 happens 45% of the time; rolling it twice in sequence happens 20.25% of the time. If it was a true '50-50', you'd flip heads twice in a row 25% of the time.
On 2d10, rolling a 9 or lower happens only 36% of the time; rolling it twice in sequence happens only 13% of the time.
Happening one in eight times is significantly different from happening one in five times - while there are still some people that would get the sucky version of the abbreviated adventure, it's fewer people overall.
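The 45%/20.25% and 36%/13% figures check out by direct enumeration:

```python
# Failing a "need 10+" check twice in a row, on a d20 vs on 2d10.
p_fail_d20 = 9 / 20                                  # roll 1-9: 45%
p_fail_2d10 = sum(1 for a in range(1, 11)
                  for b in range(1, 11) if a + b <= 9) / 100  # 36 of 100
print(f"{p_fail_d20**2:.4f}  {p_fail_2d10**2:.4f}")  # 0.2025  0.1296
```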
Quote from: TJS;1087909in general I don't see a good reason for NOT telling the PCs the DCs. What's the benefits of hiding it? Earlier in this thread someone was making the argument that the benefits of a flat distribution is knowing the odds of success. You can't do that if you don't have the information you need.
I was the one (or one of the ones) that named more readily discerning odds of success as a benefit of a "flat distribution." However, I didn't mean that players should be able to more easily determine the odds. I'm more interested in the GM being better able to instantly know the odds for the sake of better adjudicating. A lot of the examples listed in this thread where a problem is presented seems to me be a failure to adjudicate appropriately.
That said, I do have a good reason for not wanting players to know the number they need. I don't want them to know the odds. I don't want them "solving" the game like it's a math problem. I want them to face "uncertainty" not just "risk" per Frank Knight's distinction.
In fact, I do like the take 10 and take 20 rules. And I see it as something like this.
-You roll the dice, and the unknown is how good the die roll will be. That's where things could go wrong. This is mathematical risk.
-You take 10, and the unknown is whether or not that will be enough to succeed. Underestimating the challenge is where things could go wrong. This is one form of uncertainty.
-You take 20, and you "roll" as high as you could. Where things could go wrong here is you might not have as much time as you think. There's a much greater chance of being interrupted. That's another form of uncertainty.
You don't ever get guarantees. You just get to pick your poison.
Quote from: Chris24601;1087966For life threatening ones though we remember that Hit Points are more than meat, but include stamina, morale and luck. So instead of falling to your death on a failed climb check you lose hit points as you become fatigued, lose confidence and push your luck... then try again. You still might fall to your death if you fail enough checks on the climb, but it's not a binary pass/fail anymore so if the players have any common sense they'll figure out the obstacle is beyond their skill level and turn back (I don't require checks to back down the way you came) or, worst case, stay where they are while the other PCs figure out a rescue.
Removing the binary outcome of each check, also means things like climbing or swimming across obstacles where multiple checks are required to cover the distance have more bell curve-like results.
Yes. Nothing's ever totally "binary." Until you give up or die, you can always try something else. So even "flat" distributions accumulate into some kind of curve. Even if you do die (or give up) on the first roll, that's all part of what makes the Pareto distribution. Even if the stats are such that you necessarily die on the first roll, all that means is you're on an extreme end of the curve.
But at the same time, you can always claim something is binary. If what I'm interested in knowing is "Do you kill the ogre this round?" or "Do you successfully climb this round," that is a binary, either yes or no. And in the context of the situation, that might actually be what's most important. Maybe it's time sensitive and things hinge on you getting it that round. If that's the case, you can play it out ten thousand times or ten million times with whatever convoluted "curve" dicing mechanic you wish. You'll always have x chance that the answer is "yes" and 1-x that the answer is "no", and only those two points side by side will never form a curve.
Quote from: Chris24601;1087995As an example from a badly written Living Arcanis mod, the entire adventure was gated behind a ward that could only be bypassed with a dispel magic. The check wasn't horrible. At the level it was designed for you only needed a 10 or better. But it was contingent upon A) having a spellcaster with at least 3rd level spells in the party, B) that spellcaster actually having dispel magic prepared and C) that the spellcaster wouldn't fail his dispel magic checks.
I agree with the overall point you're making. But I actually really like it when an adventure has an up-front gatekeeper like this.
QuoteIn the case of my group, I had dispel magic prepared (three times actually), but my dice didn't cooperate and so, after blowing two of the three (and presuming there was actually more to the adventure since it was a relatively small tower you run across on a patrol) we moved on. Nope... inside the tower was the adventure so I wasted an entire Origins session on what ended up being a 20 minute session (10 of which was introductory text) with no encounters at all. No XP, no gp, no items... just wandered into the woods, found a tower and because of two bad rolls, we were done.
A bell curve wouldn't have fixed that module. A ward with a secret password that could be guessed from a take-20 search of the surroundings (or an encounter, because the guys inside opened the door to come out and kill the intruders and it was left open afterwards) would have fixed the module.
It seems to me what was needed is a more urgent reason why you needed to be in this tower. If you really want to get in, you don't give up after just a couple of bad rolls. Especially not when you had a third dispel magic in reserve.
But also one of the things I probably would have done differently (in running or designing the adventure) is waived the check entirely for this. Not for the sake of saving the adventure or being nice or any of that noise. Just because that was sort of like "the key" to the gate. I mean, look. We're okay with sunlight killing a vampire. Or a blessed crossbow bolt killing a rakshasa. And we're also okay with the flip side: a non-magical, non-silver weapon not harming a werewolf. We don't consider this to be cheating or fudging. It's just how things work. It's part of the game. Not all parts of the game are supposed to be shoe-horned into a core mechanic.
I do the same thing with searches. If you say "I search" without being specific enough, I rule it's like finding a needle in a haystack, and count that as automatic failure. If you specify your search and it's exactly what, how, and where the thing you're looking for is, then I count that as automatic success. If it's somewhere in between the two extremes, where the outcome isn't so obvious to me, that's when I roll dice. It's a last resort, not a first go-to.
Quote from: deadDMwalking;1088032Assuming you needed a 10 or better and we're not otherwise adjusting probabilities by reducing bonuses or what have you there is a difference with the bell curve.
On a d20, rolling 1-9 happens 45% of the time; rolling it twice in sequence happens 20.25% of the time. If it was a true '50-50', you'd flip heads twice in a row 25% of the time.
On 2d10, rolling a 9 or lower happens only 36% of the time; rolling it twice in sequence happens only 13% of the time.
Happening one in eight times is significantly different from having it happen one in five times - while there are still some people who would get the sucky version of the abbreviated adventure, it's still fewer people overall.
And what if you needed a 15? Or a 20? The 2d10 system would have you less likely to succeed.
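The numbers in this exchange can be checked directly. Here's a quick sketch in plain Python, assuming straight rolls with no modifiers; the specific targets of 10, 15, and 20 are just the ones discussed above:

```python
from itertools import product

def p_at_least(target, dice, sides):
    """Fraction of all possible rolls whose sum meets or beats the target."""
    rolls = list(product(range(1, sides + 1), repeat=dice))
    return sum(1 for r in rolls if sum(r) >= target) / len(rolls)

for target in (10, 15, 20):
    d20 = p_at_least(target, 1, 20)
    two_d10 = p_at_least(target, 2, 10)
    print(f"need {target}+: d20 {d20:.0%}, 2d10 {two_d10:.0%}")
```

This prints 55% vs. 64% at a target of 10, 30% vs. 21% at 15, and 5% vs. 1% at 20, which is the crossover both posters are describing: the curve helps when an average roll succeeds and hurts when only an extreme roll does.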
It's almost like there's this thought pattern. Higher Success Rate = More Reliable. Because fewer failures. Bell Curve = More Reliable. Because less random. Since more reliable = more reliable, therefore less random = fewer failures. No one will ever come out and say that. In fact, I expect it to be disavowed instantly. Nonetheless, that's what the tone feels like when we get this sentiment of "bell curves make it all better."
I'm still old school. AD&D 1E. Where a first level fighter, who is considered a veteran, has roughly a 30% chance of success at his core competency. If you could make the dice pull towards the middle, the only thing that would become more consistent would be failure. Everything on that side of the curve does the opposite of most of the alleged benefits of the bell curve. It doesn't even live up to cutting Chris's dilemma in half, much less solving anything.
Quote from: Lunamancer;1088094And what if you needed a 15? Or a 20? The 2d10 system would have you less likely to succeed.
Yes. The point of using two dice is to discourage an extreme result and encourage a median result. Effectively, this means characters that can succeed with an average result succeed more often; it also consequently means that if you can only succeed with an extreme result, it happens less often.
Quote from: Lunamancer;1088094It's almost like there's this thought pattern. Higher Success Rate = More Reliable. Because fewer failures. Bell Curve = More Reliable. Because less random. Since more reliable = more reliable, therefore less random = fewer failures. No one will ever come out and say that. In fact, I expect it to be disavowed instantly. Nonetheless, that's what the tone feels like when we get this sentiment of "bell curves make it all better."
Let me say it explicitly. The PCs are expected to succeed generally. If your players consistently need a 20 to hit, they're going to be missing a lot whether you use a d20 or 2d10. Generally, game systems tend to make success for equal foes happen on a 10+ (or 11 if they really want to divide the RNG in equal halves). In 3.x, two unarmed/unarmored humans might have their Dex bonus to AC and attack; if they are identical twins they hit on a 10+. Assuming that is the case, rolling on 2d10 increases the odds of success from roughly 1 in 2 to 2 in 3. People hit more often.
Quote from: Lunamancer;1088094I'm still old school. AD&D 1E. Where a first level fighter, who is considered a veteran, has roughly a 30% chance of success at his core competency. If you could make the dice pull towards the middle, the only thing that would become more consistent would be failure. Everything on that side of the curve does the opposite of most of the alleged benefits of the bell curve. It doesn't even live up to cutting Chris's dilemma in half, much less solving anything.
This is true. If you design every challenge to require a 15+ to succeed (30% on a d20), you'll decrease your chances of success (21% on 2d10). You're not explicitly stating it, but you are expecting your players to FAIL at their CORE COMPETENCIES more than they succeed (7 out of 10 times, in fact).
The reason some people like the distribution curve of 2d10 is it TENDS to align more with our real life expectations.
When most people do things consistently, they tend to consistently get the same result. They're not equally likely to have their best or worst day - they tend to have their average.
That may not be your cup of tea, but I think people on this thread generally understand that decreasing the likelihood of extreme results and increasing the likelihood of average results does exactly that - it makes average results more likely. That's one reason people roll stats on 3d6 and not 1d20. You end up with a 90% chance of a 7+ but only a 10% chance of a 15+. Average people cluster around the average.
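The 3d6 figures quoted here can be verified by enumerating all 216 possible rolls, for example:

```python
from itertools import product

# Distribution of 3d6 stat rolls: extremes are rare, the middle dominates.
rolls = [sum(r) for r in product(range(1, 7), repeat=3)]
at_least_7 = sum(1 for s in rolls if s >= 7) / len(rolls)
at_least_15 = sum(1 for s in rolls if s >= 15) / len(rolls)
print(f"7+: {at_least_7:.0%}, 15+: {at_least_15:.0%}")
```

This comes out to roughly 91% for a 7+ and 9% for a 15+, matching the rounded 90%/10% above; on 1d20 (if anyone rolled stats that way) both would sit at 70% and 30%.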
Quote from: deadDMwalking;1088100The reason some people like the distribution curve of 2d10 is it TENDS to align more with our real life expectations.
I have no doubt people believe that. Whether it's true is another matter.
QuoteWhen most people do things consistently, they tend to consistently get the same result.
People don't roll dice on everything they do. You'd be surprised how consistent results can be when they don't vary.
QuoteThey're not equally likely to have their best or worst day - they tend to have their average.
When and if people are rolling dice, I assume there will be more than one check during the entire day.
QuoteThat may not be your cup of tea,
I never said anything about it not being my cup of tea. The OP posed a question, and I thought it was worth pointing out that there are weaknesses to using 2d10 and that the benefits aren't actually as advertised.
Quotebut I think people on this thread generally understand that decreasing the likelihood of extreme results and increasing the likelihood of average results does exactly that - it makes average results more likely.
Well, I can't speak to what people do or don't understand. I can only point out there is a fundamental problem here that transcends opinion: the number generated by the dice is NOT the result on these sorts of checks. The results are pass or fail. And you can't form a curve out of just two points. The average, I suppose, would be somewhere in the middle, but pass or fail doesn't produce a middle, so there is never an average result to be had. If you want to change that, changing the dice won't do it. You'd have to add at least a third result so you have yes, no, maybe. Then we can talk about whether or not making maybes more common is a good idea and whose cup of tea that might be.
QuoteThat's one reason people roll stats on 3d6 and not 1d20. You end up with a 90% chance of a 7+ but only a 10% chance of a 15+. Average people cluster around the average.
And this is perfectly appropriate for generating these sorts of statistics. When I roll strength, I get one of 16 possible results, 3-18. You can analyze the frequencies of the various results and you can indeed find a curve if that is the pattern to the distribution. Whether or not that's desirable, that's something that can be discussed. The point is it's possible.
But if instead the game only had two possible strength ratings: Strong and Weak, you don't get a curve when you analyze those frequencies no matter how you make the determination of who is Strong and who is Weak. Two points don't produce a curve. Whether or not a curve or tendency towards average is desirable is not even on the table. The point is it's impossible. It has nothing to do with liking it or not.
That's the key of it. You need to have at least 3 results before you can even talk about bell curves or tendencies towards average results.
And I'm surprised nobody's called me on the standard check actually having four outcomes: pass, fail, crit, and fumble. But of course we already know the extremes of crit and fumble are far more rare than the more moderate pass or fail. There is a low-res bell curve to it. And on a single d20 roll, no less!
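Tallying those four outcomes over all twenty faces makes the "low-res curve" visible. A minimal sketch, assuming the common house rule that a natural 1 always fumbles, a natural 20 always crits, and an illustrative target of 11+ in between:

```python
from collections import Counter

def outcome(roll, target=11):
    """Classify a single d20 roll under assumed nat-1/nat-20 rules."""
    if roll == 1:
        return "fumble"
    if roll == 20:
        return "crit"
    return "pass" if roll >= target else "fail"

# One of each face: the frequencies across all 20 possible rolls.
counts = Counter(outcome(r) for r in range(1, 21))
print(counts)  # fumble: 1, fail: 9, pass: 9, crit: 1
```

Rare extremes flanking two common middle outcomes is exactly the rough bell shape described, even though every individual face is equally likely.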
Quote from: Lunamancer;1087631Technically Anselyn is actually wrong on this. So are a lot of people. Here's an example illustrating why taken directly from this thread:
If you were to graph out how many rounds it takes the ogre to go down, I assure you, you will NOT end up with a bell curve distribution. At a minimum, it's going to take one hit, while there is no theoretical maximum. So there's no symmetry. And that's fine, because bell curves can be perturbed. But there's also only a tail on one side. I doubt the left side even starts to tail out. It lacks one of the key inflection points characteristic of a bell curve. This is not necessarily a bad thing. Bell curves in the real world are not as common as assumed. This example is actually more of a perturbed Pareto distribution than a bell curve.
[ATTACH=CONFIG]3390[/ATTACH]
That is right. If you have two outcomes, you have a binomial distribution over repeated trials, and it only has a symmetric bell shape when the probability of success is 50%.
Not to go meta on this but here is a twist that can offer another angle on the dice. The TL;DR is that the dice don't matter at all. I don't 100% agree, because I am not the same kind of GM as JW.
http://johnwickpresents.com/rants/no-dice/comment-page-1/
I completely disagree with that article because studies have shown that tactile involvement (i.e. rolling dice in this case) increases your investment and interest in any activity.
Quote from: deadDMwalking;1088100The reason some people like the distribution curve of 2d10 is it TENDS to align more with our real life expectations.
When most people do things consistently, they tend to consistently get the same result. They're not equally likely to have their best or worst day - they tend to have their average.
That may not be your cup of tea, but I think people on this thread generally understand that decreasing the likelihood of extreme results and increasing the likelihood of average results does exactly that - it makes average results more likely. That's one reason people roll stats on 3d6 and not 1d20. You end up with a 90% chance of a 7+ but only a 10% chance of a 15+. Average people cluster around the average.
Assuming you set the target numbers correctly, yes. I know you know that, but since these types of discussions bring out a need for precision, I'll include it to forestall the obvious counter-argument.
Also, "real life expectations" are by definition somewhat psychological, even emotional things. It is precisely at that point that the math stops being useful, and a wide experience with how people act and feel becomes more important. Thus the "tends".
One thing I've noticed is that people who tend generalist by temperament and life experience tend to have different attitudes about this topic than those who tend specialist along the same criteria. The more skills in which you rate as a "talented amateur" or "experienced amateur", and the more people you know who tend the same way (at least well enough to see their skill levels), the more you care about the mechanics producing that average most of the time.
Quote from: Chris24601;1088159I completely disagree with that article because studies have shown that tactile involvement (i.e. rolling dice in this case) increases your investment and interest in any activity.
Dice can still be tossed. The point is *that doesn't matter* for the goal he assumes.
Quote from: soundchaser;1088174Dice can still be tossed. The point is *that doesn't matter* for the goal he assumes.
Well, I think there's even a little bit more than that wrong with JW's argument.
Ultimately it's the GM who decides, therefore dice don't matter.
Well, ultimately, we all pack up our papers and pencils and dice and go home. If that means nothing that happened in the preceding four hours matters, we wouldn't have a hobby and JW wouldn't be blogging about it.
Ove. That's the thing. The dice establish the illusion. It keeps people playing, this illusion.
The game he *did* design based on his idea is roll and move.
Quote from: soundchaser;1088257Ove. That's the thing. The dice establish the illusion. It keeps people playing, this illusion.
The game he *did* design based on his idea is roll and move.
I think there is more to it than just illusion, though. Obviously it's not real. We're talking about make-believe, after all. But it still has to be real enough to keep the players playing. That's no trivial thing.