
D20 versus 2d10

Started by Theory of Games, May 11, 2019, 09:52:52 PM


Lunamancer

Quote from: Steven Mitchell;1087575
Not really.  It's simply that most everyone is taking shortcuts in the discussion, which makes it appear to be a disconnect.  It's when you model the reality of how competent a population is at a skill that prompts the desire for a distribution with a curve instead of linear.

It's possible. But your further explanation also seems to show that disconnect, presumably without your even noticing.

Quote
In a curve, a modifier of +1 has a different meaning at different levels of competency. This happens to model somewhat well against the curve of competence in reality. Not perfectly, but better than a linear distribution with equal modifiers.  There tends to be rapid learning at first, followed by slower learning, followed by more and more work to eke out any significant advantage.

I don't know that's true.

First, suppose I am using a 2d10 system. Say my initial skill is such that I need to roll a 20 to succeed. My next point of skill makes me 2% better. The next one after that makes me 3% better. The next one after that makes me 4% better, and so on. It's not until I get to where I need an 11 that each skill increase comes with diminishing returns to my probability of success. Personally, I don't consider it an automatic bad thing for skills to be harder to learn at lower levels. If anything, it might deter building characters against archetype, which if you're into niche/archetype protection could be a good thing. My point is, half of the curve does the exact opposite of what you describe, and what you seem to be saying is good and "realistic". If I adopt your values and preferences, I'm stuck concluding that this idea is just as dumb as it is good.
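A quick check of those marginal gains - a throwaway Python sketch, not anything from any published system - bears this out:

```python
from itertools import product

def p_success(target):
    """Probability that 2d10 totals target or higher."""
    rolls = list(product(range(1, 11), repeat=2))
    return sum(1 for a, b in rolls if a + b >= target) / len(rolls)

# Marginal gain from each +1 of skill, starting from needing a 20:
# the gains grow (+2%, +3%, ...) until the target drops below 11,
# and only then do diminishing returns kick in.
for need in range(20, 10, -1):
    gain = p_success(need - 1) - p_success(need)
    print(f"need {need:2d} -> need {need - 1:2d}: +{gain:.0%}")
```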

Second, it's extremely common in RPGs for successively higher skill points to cost more and more. The distribution of skills themselves isn't linear, and so there's not really a need for this to be baked into the dice rolls.

Third, even if you don't design for higher and higher costs of learning, it's highly unlikely that the collective free choices of players will result in a linear distribution.

Fourth, diminishing returns happen even when you don't design them in. I often point out that 1E advancement really flat-lines after name level. A 9th level fighter has a 12 THAC0. If he's got a total of +5 in hit bonuses from a combination of strength and magic items, which most people agree is reasonable if not humble for that level (a +3 sword and 18/70 STR will do it), that means he hits AC 5 with a 2 or better on the d20. And hit probabilities don't get any better from then on. I've done a statistical analysis of the Monster Manual. The mean, median, and mode AC is 5. This includes some of the more powerful ones like a hydra or a stone golem. Against more than half the monsters, the fighter gets no better chance of hitting as he gets better. And as he does get better, the list of monsters against which his attacks strike with greater accuracy gets smaller and smaller. Even though the THAC0 (that silly nonsense abstract game stat) continues to advance at the same rate, how meaningful that is in terms of how much it actually benefits the character overall on the course of the adventure keeps diminishing.

Fifth, the crazy thing about probabilities is they're necessarily bound by 0% and 100%. As you approach either extreme, the curve goes horizontal, unlike the rest of the curve. On a 2d10 system, if I need a 22 to hit, my probability is the same as if I need a 21: 0%. Or 1% if you rule a 20 is an auto success. But regardless of whether it is 0% or 1%, the curve is linear and horizontal at that point. And this means even a linear mechanic isn't really linear, as the line has to bend near the upper and lower bounds. The dirty little secret is, regardless of the mechanic, they all roughly replicate an "S"-curve when graphing the probability of success against incremental adjustments, ordered left to right from lowest to highest probability of success. In the big picture, it's all the same. All we're ever doing is nitpicking the smallest differences.
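That clamping is easy to see in a few lines of Python - a sketch, assuming the common house rule that a natural 20 always hits and a natural 1 always misses:

```python
def d20_chance(target):
    """Chance to roll target or higher on 1d20, where a natural 20
    always succeeds and a natural 1 always fails."""
    favorable = max(1, min(19, 21 - target))  # clamp to 1..19 winning faces
    return favorable / 20

# Linear in the middle, flat at both ends: the line bends into an "S".
for target in range(-2, 26, 2):
    print(f"target {target:3d}: {d20_chance(target):.0%}")
```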

Quote
When I say I prefer the 2d10 for this model over the 3d6 or 1d20, that's shorthand for I find that the percentage chance of success using +1 modifiers on a 2d10 maps reasonably close to the skill curve that I want, both mathematically and aesthetically.  (Not least because I want something akin to reality, but skewed to fit a particular style of fantasy.)  Furthermore, while the odds are more difficult than the d20, at least the 2d10 odds map to 1% increments with each jump.  Show a normal player a map of the results of 2d10 with the percentages for each roll, it intuitively makes sense to many of them.  (That's why I prefer the 2d10 to 2d12.  I like the spread and results of 2d12 even more, but not enough to give up the ease of understanding for most players.)  A mechanic needs to balance all of these concerns in a way pleasing to the players.

Well, unfortunately, when it comes to balancing things out, 2d10 lacks something that I find really important: any even number of dice lacks a 50/50 point. Again, when I get away from math and concern myself more with what meaning those numbers have in the game, if I'm trying to adjudicate something that's maybe really complicated or not covered by the rules, the 50/50 point is a good starting point. In my breakdown of the 1E hit probabilities, the 0th level human vs AC 10 is just that. We're taking the basest fighting skill against the basest defensive capacity and we're saying, well, all things equal, it's 50/50. I think the 50/50 point is vital to making sure the abstract model actually connects to actuality.
That's my two cents anyway. Carry on, crawler.

Tu ne cede malis sed contra audentior ito.

Beldar

Quote from: Anselyn;1087499
Wrong.

The central limit theorem says that when many independent random variables contribute to an overall result, the probability distribution of that result tends toward a bell curve.
https://en.wikipedia.org/wiki/Central_limit_theorem

The example that I show students is data for average speed of cars in different speed limit zones. So - that's averaged over many cars (brand, model, age, condition), many drivers (age, sex, experience) and many weather conditions.  The distributions are basically bell curves.

I'm confused. I don't disagree with your assessment at all. Yet I fail to see how either type of dice mechanic models reality better than the other. If you average all the attack roll results out with either method, you will get a distribution something like a bell curve. It has nothing to do with one being more realistic than the other. I would suggest that it is not possible for any die rolling rule to be "realistic" anyway.

Lunamancer

Quote from: Beldar;1087601
I'm confused. I don't disagree with your assessment at all. Yet I fail to see how either type of dice mechanic models reality better than the other. If you average all the attack roll results out with either method, you will get a distribution something like a bell curve. It has nothing to do with one being more realistic than the other. I would suggest that it is not possible for any die rolling rule to be "realistic" anyway.

Technically, Anselyn is actually wrong on this. So are a lot of people. Here's an example illustrating why, taken directly from this thread:

Quote from: Chris24601;1087396
You could, in theory, drop the ogre with one hit; a natural 20 and a damage roll of at least 6 on the die will drop it. But the odds of this are very very slim (far less than the 5% of simply rolling a natural 20).

But on average, it's going to take you around three hits to drop the ogre. That means probably about six rolls to hit, but if you roll well you might need only four, or if you roll poorly you might need eight or nine rolls.

If you repeated that battle twenty times, you'd see that the overall results of the battles (whether you win, how many turns it takes to win, how much damage you take in the course of winning) will fall into a bell curve distribution because each battle takes multiple rolls to resolve.

If you were to graph out how many rounds it takes the ogre to go down, I assure you, you will NOT end up with a bell curve distribution. At a minimum, it's going to take one hit, while there is no theoretical maximum. So there's no symmetry. And that's fine, because bell curves can be perturbed. But there's also only a tail on one side. I doubt the left side even starts to tail out. It lacks one of the key inflection points characteristic of a bell curve. This is not necessarily a bad thing. Bell curves in the real world are not as common as assumed. This example is actually closer to a perturbed Pareto distribution than a bell curve.
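A quick Monte Carlo sketch makes the shape obvious. The numbers here (25 hp, 50% hit chance, 1d10 damage) are purely illustrative stand-ins, not the stats from Chris24601's example:

```python
import random
import statistics

random.seed(1)

def rounds_to_drop(hp=25, hit_chance=0.5, dmg_die=10):
    """Roll attacks until cumulative damage reaches hp; count the rounds."""
    rounds = 0
    while hp > 0:
        rounds += 1
        if random.random() < hit_chance:
            hp -= random.randint(1, dmg_die)
    return rounds

results = [rounds_to_drop() for _ in range(100_000)]
med = statistics.median(results)
print("min:", min(results), " median:", med, " max:", max(results))
# The hard floor sits just left of the median while the right tail
# runs on and on - a skewed, one-tailed shape, not a bell curve.
```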

That's my two cents anyway. Carry on, crawler.

Tu ne cede malis sed contra audentior ito.

Steven Mitchell

#33
Lunamancer, you are assuming a 2d10 system where the range starts at the bottom end.   If instead, you set the base target numbers near the middle of the curve, then a combination of difficulty and skills interacting will have the effect I described.  

As for the 50% in the middle thing, I don't miss it much.  Instead, I prefer something around the two-thirds mark.  2d10 versus base target of 10 (then adjust for mods and difficulty) sets it at 64%, close enough for me.  Also, I'm going to handle extreme edge cases via GM adjudication, not a roll.  For rolls, I'm only interested in the things that can reasonably map into the mechanic chosen.  For 2d10, that means that the bottom 2 or 3 and the top 2 or 3 parts of the range will seldom be used in practice.

deadDMwalking

The advantage to dice that produce a distribution curve (like 2d10) tends to be when you're comparing two or more individuals with different levels of ability.  

Take a party of 5 individuals.  Let's say that you're attempting to open a lock and that anyone can make the attempt against DC 15.  One member of the party is skilled (+5), and the other four are not (+0).  In this case, what are the odds that the skilled person fails but one of the unskilled people succeeds?  

We know that the skilled person needs to roll a 10 or better (55% on a d20).  45% of the time, they fail.  Assuming they rolled a 9 or lower, there is a 14/20 chance (70%) that each unskilled individual fails, and a 30% chance that they succeed.  The odds that all four fail are (.70 * .70 * .70 * .70) 24%; the odds that at least one succeeds are 76%.  

With 2d10 the 'skilled character' has a 64% chance of success (36% chance of failure) and each of the other characters has a 21% chance to succeed and a 79% chance to fail.  The odds that all four fail are 39% - only a 61% chance that one of them succeeds.  

In the first case (1d20) having four unskilled people make the attempt is worth more than having one skilled person make the attempt.  In the second case, having one skilled person is worth more than having four unskilled people make the attempt.  

Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.
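Those percentages are easy to double-check. A small Python sketch (illustrative only) reproduces both the 76% and the 61% figures:

```python
from itertools import product

def p_d20(target):
    """Chance of rolling target or higher on 1d20."""
    return sum(1 for r in range(1, 21) if r >= target) / 20

def p_2d10(target):
    """Chance of rolling target or higher on 2d10."""
    rolls = list(product(range(1, 11), repeat=2))
    return sum(1 for a, b in rolls if a + b >= target) / len(rolls)

DC, BONUS = 15, 5
for name, p in [("1d20", p_d20), ("2d10", p_2d10)]:
    skilled = p(DC - BONUS)           # the +5 character
    one_of_four = 1 - (1 - p(DC))**4  # four +0 characters in sequence
    print(f"{name}: skilled {skilled:.0%}, "
          f"at least one of four unskilled {one_of_four:.0%}")
```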
When I say objectively, I mean 'subjectively'.  When I say literally, I mean 'figuratively'.  
And when I say that you are a horse's ass, I mean that the objective truth is that you are a literal horse's ass.

There is nothing so useless as doing efficiently that which should not be done at all. - Peter Drucker

Stephen Tannhauser

Quote from: deadDMwalking;1087643
Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.

Which was one of the reasons the Take 10 and Take 20 rules were introduced -- so that performance levels at various levels of ability could be standardized when there was no drama factor to the challenge.  If you included all successful uses of Take 10 and Take 20 in your set of d20 "rolls", I suspect you'd get something much more like a bell curve.

I myself am a fan of the 2d10 bell curve simply because I like more predictability in a roll than a completely flat 1-20 distribution gives, while still liking some unpredictability.  I've gradually come to the belief that bell curves also work best when you can adjust the effort or commitment you put into a task in a way that measurably affects your chances of success -- for gambling to be a game of skill at all you have to know when and how to shift the odds in relation to the size of your stake.
Better to keep silent and be thought a fool, than to speak and remove all doubt. -- Mark Twain

STR 8 DEX 10 CON 10 INT 11 WIS 6 CHA 3

Lunamancer

Quote from: deadDMwalking;1087643
Even on a d20, the expert succeeds MORE than any unskilled individual, but not necessarily MORE than several unskilled individuals.

How do you figure? I could generate similar probabilities using a d20. The difficulty level of the door is 17 rather than 15. To be "skilled" means having +9 rather than +5. That would be the real apples-to-apples comparison. Not locking in the stats and just thoughtlessly computing the probabilities from there. I covered this in an earlier post on this thread not too long ago:

Quote from: Lunamancer;1087556
If I were designing the game with 3d6 in mind, I would NOT preserve the existing THAC0's and AC's of the game. That's all just nonsense anyway. Abstract game concepts. What I really care about is what the numbers stand for. Which is to represent the best armor as repelling 90% of blows as a baseline, and no armor to be a 50/50 chance. Thus under a "d20" style system, but using a 3d6 mechanic, I'd probably give the 0th level human a +0 BAB, and the AC of platemail and a shield would be 15. Unarmored would be AC 11.

I mean it's not like if I were trying to convert something over from Vampire: the Masquerade to Call of Cthulhu that I'd look and see, "Oh, 5 dots. That must mean the same thing as 5 points of skill."

Can you make the same case placing probabilities first and then choosing stats to conform to the probabilities, allowing the stats to be different as one would expect them to be under different systems?
That's my two cents anyway. Carry on, crawler.

Tu ne cede malis sed contra audentior ito.

Anselyn

#37
Quote from: Lunamancer;1087631
Technically, Anselyn is actually wrong on this. So are a lot of people. Here's an example illustrating why, taken directly from this thread:

I agree. I wasn't suggesting that the overall combat outcome probabilities would be a bell curve - although I agree the CLT might pull you towards that conclusion, and I might have been saying that.  I was thinking more of the "realism" of modelling the average probability to hit of a fighter, given all the variables that might contribute to that.  Others have contributed on this point.  I would suggest that a bell curve will usefully and helpfully clump the results around the user's standard ability/skill on some numerical scale, which might be desirable.

Obviously, the idea of realism depends on modelling how the world works in the modeller's view.  I have been led to believe (sorry - can't cite[1]) that training in a skill typically takes someone from not being able to do something to being able to do it - at a threshold level - and then may increase ability later. Even for something like weightlifting, which might sound like "you've got the muscles or you haven't", initial training will give a strong boost in the weights you can lift.  People don't tend to have a 50% chance of completing a test - they jump from needing 9+ on 2d6 to 5+ on 2d6, in Traveller terms.

Having said that (sorry - 1 academic, 3 opinions), I'll be marking 150 maths exams later this week (and next ....) and someone will probably score 50% on the exam. Now, we do give partial credit but someone on 50% probably won't have scored 50% on all questions - they will have all the marks for some questions and no marks for others, and some partial answers.  I guess that means they have a 50% ability in the maths for my course?

[1] Not Steven Mitchell above - but a friend, sadly no longer with us, who edited a book on this topic for his publisher and then told me about it. I guess it was a sports science book.

EOTB

Gearheads are very concerned about how something makes its output.  But most people consume based solely on how enjoyable consuming the output is.  They don't care how the sausage is made.  

The math behind the system someone uses - does it result in a thrill roller coaster at the table?  That's all that matters.  And that's not really in question with the d20.
A framework for generating local politics

https://mewe.com/join/osric A MeWe OSRIC group - find an online game; share a monster, class, or spell; give input on what you'd like for new OSRIC products.  Just don't 1) talk religion/politics, or 2) be a Richard

Anselyn

Quote from: EOTB;1087721
Gearheads are very concerned about how something makes its output.  But most people consume based solely on how enjoyable consuming the output is.  They don't care how the sausage is made.  

The math behind the system someone uses - does it result in a thrill roller coaster at the table?  That's all that matters.  And that's not really in question with the d20.

I agree, actually.  For example, it looks to me that D&D 5e, with its "bounded accuracy" squashing down the bonuses on the d20, is probably "swingier" in play than Pathfinder, but if speed of play and going with the flow of the rolled outcomes is the heart of the matter, then that's fine.

deadDMwalking

#40
Quote from: Lunamancer;1087716
How do you figure? I could generate similar probabilities using a d20. The difficulty level of the door is 17 rather than 15. To be "skilled" means having +9 rather than +5. That would be the real apples-to-apples comparison.

I chose numbers that are in line with low-level 3.x characters and old-school style small bonuses for stats.  

But sure, if you have a +9 on a skill, someone else has a +0, and the DC is 17, you'll fail more than 1/3 of the time (35%).  The four people that have +0 will fail on a roll of 1-16, or 80% of the time.  When you fail, your companions have a better than even chance (59%) of succeeding at the lock that you failed at, if they each make their own attempt in sequence.  

When skilled people fail and unskilled people succeed, you have to ask, what does it mean to be skilled?  

Now there are mechanics like 'take 10' which would be the smart decision if you know the TN, but if you suspect that a 19 isn't likely to be sufficient, you probably wouldn't do it.

Edit -
When you use 2d10 instead of a d20, you tend to encourage (slightly) more median results and discourage (somewhat significantly) extreme results.  Put another way, with a +9 bonus on a d20 you are equally likely to roll a 1 or a 20; you're equally likely to get a result of 10 or a result of 29.  With 2d10, you are more likely to end up with a result in the middle of that range because you're more likely to roll a median value.  You are 10x more likely to end up with a result of 20 than a result of 11 or 29 (the minimum or maximum).
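A quick sketch verifies that by counting dice combinations (with a +9 bonus, the 2d10 results run 11-29, the d20 results 10-29):

```python
from itertools import product
from collections import Counter

BONUS = 9

# How many dice combinations produce each final result?
d20_ways = Counter(r + BONUS for r in range(1, 21))
two_d10_ways = Counter(a + b + BONUS
                       for a, b in product(range(1, 11), repeat=2))

print(d20_ways[10], d20_ways[29])  # d20: one way each, min and max alike
print(two_d10_ways[20], two_d10_ways[11], two_d10_ways[29])
# 2d10: ten ways to land on 20, one way each for the extremes 11 and 29
```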
When I say objectively, I mean 'subjectively'.  When I say literally, I mean 'figuratively'.  
And when I say that you are a horse's ass, I mean that the objective truth is that you are a literal horse's ass.

There is nothing so useless as doing efficiently that which should not be done at all. - Peter Drucker

Charon's Little Helper

If the two sides are just walking up to each other & swinging, then the difference between linear and bell curve doesn't matter that much.

I mean, it matters when designing the math of the system, but it matters minimally in play so long as it's done right.

HOWEVER

If a system is designed so that there are a lot of ways for PCs to add/subtract modifiers on the fly - the two have a VERY different feel.

There are many systems which allow you to add bonuses to your die roll from a pool - Action Points/Grit/whatever - and with 1d20, so long as you aren't already hitting on a "2", adding more to your roll gives you the same extra damage on average. But if you're rolling 3d6, you get more from each Action Point spent until you're hitting on a 10, and then you start getting diminishing returns.

The same is true for other ways to modify the roll, such as a system with firearms and drastic ranged accuracy fall-off. With 3d6, you'd give up attacks to close the distance until you can at least hit on a 12 or 13, while with 1d20, closing each increment does about the same thing. (Unless there are extra rules involved - which adds complexity.)
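The changing value of a +1 on 3d6 is easy to tabulate - a quick sketch, with Action Points standing in for any spend-to-boost resource:

```python
from itertools import product

def p_3d6(target):
    """Chance of rolling target or higher on 3d6."""
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for r in rolls if sum(r) >= target) / len(rolls)

# What one Action Point (a +1) buys at each target number: the gain
# climbs toward the middle of the curve, peaks around 10-11, then shrinks.
for need in range(17, 4, -1):
    gain = p_3d6(need - 1) - p_3d6(need)
    print(f"hitting on {need:2d}: +1 buys {gain:+.1%}")
# On 1d20 the same +1 always buys a flat 5%.
```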

So - I'll say that I generally prefer some sort of bell curve if it's done properly and taken advantage of, but it's trickier for the designer to math it properly, so a linear roll like 1d20 is probably the safer route. (And perfectly acceptable.)

And I do NOT think that taking a 1d20 system like a D&D variant and trading it out for 2d10 or 3d6 is a good idea. You'd have to re-math so much of the system that you'd be better off starting fresh from a clean slate.

Steven Mitchell

Quote from: EOTB;1087721
Gearheads are very concerned about how something makes its output.  But most people consume based solely on how enjoyable consuming the output is.  They don't care how the sausage is made.  

The math behind the system someone uses - does it result in a thrill roller coaster at the table?  That's all that matters.  And that's not really in question with the d20.

Yes.  Then if a subset of players at the table want something a little different, find the compromise that works for everyone.  

Then there is the fact that some version of D&D is already fine for a large part of our play, but in the process it gives half the group their fill of d20.  Some of us can play fantasy all the time, but we want variety in the systems with which we play it.  Get off the roller coaster and jump on the Ferris wheel.  If I'm going to use d20 in a home brew system, I might as well not bother.

Razor 007

I think before anyone switches to a new dice mechanic for gameplay, they should run some test scenarios first, to get an accurate representation of how it will work in practice.

I think D20 Roll Under 6 Ability Scores is the simplest mechanic I've tested and played.

I think 2d6 with small modifiers is the most fun mechanic I've tested and played.

D20 Roll High with modifiers is as common as sliced cheese.  It's the industry standard.

I haven't tried 2d10, but I wouldn't avoid a game that used it.  As long as it resulted in a good play experience, who cares?
I need you to roll a perception check.....

Lunamancer

Quote from: deadDMwalking;1087731
I chose numbers that are in line with low-level 3.x characters and old-school style small bonuses for stats.

Sure, and that's exactly the bone of contention I was talking about. You preserved meaningless, abstract numbers when you switched up the dice. You did not preserve the probabilities--the meaning for which the stats are supposed to stand.

Quote
When skilled people fail and unskilled people succeed, you have to ask, what does it mean to be skilled?  

Well, in true old-school fashion, the probability of an unskilled character picking a lock was zero, unless the lock was absurdly easy to pick. But we could talk about attack rolls. In old school, even the unskilled get a chance at hitting. So go back to my example about hurting a guy with plate and a shield. Only 10% of attacks by unskilled people--0th levels--are successful. So out of four such attacks, there is a roughly 34% chance that at least one of them gets a hit in. A "skilled" character--a 5th level fighter--needs a 14 to hit, and so is 5 points better than the unskilled guy, and he happens to have a 35% chance to hit.

So it does work. For a task so difficult for unskilled persons that it takes 4 of them to do the job of one "skilled" person where "skilled" means having a +5, the appropriate DC is 19. You chose a much easier one. If 5 points separates skilled from unskilled, then setting a DC 4 points too low almost puts you at the point of saying "this task doesn't really take skill to complete." It shouldn't be surprising, then, when unskilled persons can frequently succeed when skilled persons fail. The task just doesn't require a whole lot of skill.
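For what it's worth, the DC 19 figure checks out - a throwaway sketch of the arithmetic, nothing more:

```python
def p_d20(target):
    """Chance of rolling target or higher on 1d20."""
    return max(0, min(20, 21 - target)) / 20

DC, BONUS = 19, 5
skilled = p_d20(DC - BONUS)              # needs a 14: 35%
four_unskilled = 1 - (1 - p_d20(DC))**4  # four tries at 10% each: ~34%
print(f"one skilled: {skilled:.0%}, four unskilled: {four_unskilled:.0%}")
```

The one skilled attempt and the four unskilled attempts land within a point of each other, which is the equivalence being claimed.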

Quote
When you use 2d10 instead of a d20, you tend to encourage (slightly) more median results and discourage (somewhat significantly) extreme results.  Put another way, if you have a +9 bonus you are equally likely to roll a 1 or a 20; you're equally likely to get a result of 10 or a result of 29; when you have 2d10, you are more likely to end up with a result in the middle of that range because you're more likely to roll a median value.  You are 10x more likely to end up with a result of 20 than a result of 11 or 29 (the minimum or maximum).

And this would be another major bone of contention, because my response to this paragraph would be "No, you do not." Sure, the dice produce more median numbers. But again, those numbers are meaningless abstractions. And the number generated is NOT the result. The result is pass or fail. That's where the meaningful part is. And you can't ever make a bell curve when you only have two possible outcomes.

My overriding theme to basically everything in this thread is you have to keep straight the difference between the abstract and the actual.
That's my two cents anyway. Carry on, crawler.

Tu ne cede malis sed contra audentior ito.