If an attribute is in the average range, why bother even recording it?
I've often thought that a player should choose a handful of attributes from a longer list than the standard 6. Those are either above or below average, conferring appropriate bonuses, and nothing else matters. Maybe it's more flavourful for a character to have bonuses/penalties in strength, dexterity, courage, moral fortitude and leadership?
From an OSR perspective it shouldn't be too hard to implement?
Quote from: Fighterboy on February 15, 2022, 03:33:56 PMFrom an OSR perspective it shouldn't be too hard to implement?
I've been playing this way for about a decade now. Not only is there the advantage you mention, but it also makes it easy to add something like "Str +2" to a monster description to easily create a special boss monster. It works so well, I'm surprised more people don't do it.
I think the best reason to record an average attribute has to do with game systems where you can enhance your attribute as the campaign continues. In other words if 9-12 is "average" (as in OD&D, for example) it makes a difference if you have a 9 or a 12 if you gain +1 to a stat. 9+1 is still average, but 12+1 is above average.
Depends what you're doing. If you're playing actual old school, the exact score can matter. 9-12 may be average, but 9 and 10 have different Bend Bars/Lift Gates probabilities in 1E, and the difference between 11 and 12 is +100 cn weight allowance. I mean, yeah, you can simplify that if you're more or less just battle bonus harvesting. But then I'd ask, why even stop there? Like why not ditch the standard attributes entirely and just have advantages/disadvantages that give bonuses or penalties in one or two areas? It seems to me the weirdest of all ways to handle it is to have a standardized set of scores that are left blank when they're average.
I like real scores for the same reason I like +/- grades. Maybe a B is 80-89, but there's quite a bit of difference between 80 & 89 and I like that reflected. Drives me nuts that my own university doesn't use this system. It actually penalizes the students in multiple ways.
In d4-d4 (https://www.drivethrurpg.com/product/3197/d4d4-Main-Book?manufacturers_id=111), I used that concept. In fact I used it for skills, attributes, dis/advantages and so on.
Strength? Unless it's written down as "poor", "good" or whatever, it's "ordinary". Lockpicking? Unless it's written down as something else, it's "ordinary". How good-looking is your character? Well, unless their appearance is written down as good, poor or whatever, it's ordinary.
This made for shorter character sheets.
As for what counts as ordinary? That's up to the GM. If the campaign is set in the North American frontier up to 1900 or so, the ordinary person can run, shoot, build a house, start a fire with a couple of sticks and no firestarter, skin an animal and so on. If it's set in central Seattle in 2022, maybe not. On the other hand, think of reading and writing, driving a car or computer use...
I used descriptors, but you can use numbers, of course.
QuoteIf an attribute is in the average range, why bother even recording it?
Because the vast majority of people (at least AFAIK) use printed or digital character sheets that include spaces for every single attribute, whether you have an "average/0" value in it or not.
Now, if we're talking about you writing down your character or NPCs/Creatures into some sort of "shorthand" format, then yes. Noting down "average" attribute values, assuming that average values = 0, makes NO sense. And noting down ONLY the attributes you have positive or negative values on makes more sense.
I use it sometimes as a shorthand for NPCs, and I agree it would work well with monsters. In B/X, the most common stat array is +1, 0, 0, 0, 0, -1, so I usually give NPCs a +1 at something, and a -1 at something.
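For what it's worth, that "most common array" claim is easy to brute-force. A minimal Python sketch, assuming only the standard B/X modifier table (-3 at 3 up to +3 at 18):

```python
from itertools import product
from collections import Counter

# Standard B/X modifier table: 3 -> -3, 4-5 -> -2, 6-8 -> -1,
# 9-12 -> 0, 13-15 -> +1, 16-17 -> +2, 18 -> +3
def mod(score):
    if score == 3: return -3
    if score <= 5: return -2
    if score <= 8: return -1
    if score <= 12: return 0
    if score <= 15: return 1
    if score <= 17: return 2
    return 3

# Probability of each modifier on a single 3d6 roll
p_mod = Counter()
for dice in product(range(1, 7), repeat=3):
    p_mod[mod(sum(dice))] += 1 / 216

# Probability of each sorted six-stat modifier array
arrays = Counter()
for combo in product(p_mod.keys(), repeat=6):
    p = 1.0
    for m in combo:
        p *= p_mod[m]
    arrays[tuple(sorted(combo, reverse=True))] += p

for arr, p in arrays.most_common(3):
    print(arr, f"{p:.1%}")
# (1, 0, 0, 0, 0, -1) tops the list at roughly 7%
```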
Quote from: Fighterboy on February 15, 2022, 03:33:56 PM
If an attribute is in the average range, why bother even recording it?
I've often thought that a player should choose a handful of attributes from a longer list than the standard 6. Those are either above or below average, conferring appropriate bonuses, and nothing else matters. Maybe it's more flavourful for a character to have bonuses/penalties in strength, dexterity, courage, moral fortitude and leadership?
From an OSR perspective it shouldn't be too hard to implement?
I just record bonuses on char-sheets. Not the actual attributes anymore.
Quote from: Shawn Driscoll on February 15, 2022, 08:38:32 PM
Quote from: Fighterboy on February 15, 2022, 03:33:56 PM
If an attribute is in the average range, why bother even recording it?
I've often thought that a player should choose a handful of attributes from a longer list than the standard 6. Those are either above or below average, conferring appropriate bonuses, and nothing else matters. Maybe it's more flavourful for a character to have bonuses/penalties in strength, dexterity, courage, moral fortitude and leadership?
From an OSR perspective it shouldn't be too hard to implement?
I just record bonuses on char-sheets. Not the actual attributes anymore.
That's how True20 did it. I don't get why it didn't catch on, since it makes so much more sense.
Honestly I'd be unable to play if I forcibly simplified the game like this. The addiction to complexity is just something I've come to expect. It'd be like walking into a room you frequent after all the furniture was re-arranged.
I think this is one of those places where an idea -- namely, standardized attribute bonuses -- so totally took over that it obscured what came before it.
In old-school D&D systems, anything and everything is fair game for making situational rules and rulings.
Does the NPC buy the PC's lies? There's no bluff check so... well, the PC's Charisma is 14, so sure.
Does the PC manage to hold the gate open while the others escape? Let them try to roll under their Strength on a d20.
Arm wrestling match between a PC with 9 Str and an NPC with 12? That's a difference of 3, so let's give the PC a (50 - 3*5)% = 35% chance on a d100 to win.
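Those last two rulings bottle up into a couple of lines each, if anyone wants them at the table. A minimal sketch; the roll-equal-or-under convention and the 5-points-per-point-of-difference spread are just this post's ad hoc rulings, not anything official:

```python
import random

def roll_under(stat):
    """Hold-the-gate style check: succeed on a d20 roll <= the ability score."""
    return random.randint(1, 20) <= stat

def arm_wrestle(pc_str, npc_str):
    """50% baseline, shifted 5 percentage points per point of Str difference."""
    chance = 50 + (pc_str - npc_str) * 5
    return random.randint(1, 100) <= chance

print(arm_wrestle(9, 12))  # True on a d100 roll of 35 or less
```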
Quote from: Cat the Bounty Smuggler on February 15, 2022, 10:10:21 PM
I think this is one of those places where an idea -- namely, standardized attribute bonuses -- so totally took over that it obscured what came before it.
In old-school D&D systems, anything and everything is fair game for making situational rules and rulings.
Does the NPC buy the PC's lies? There's no bluff check so... well, the PC's Charisma is 14, so sure.
Does the PC manage to hold the gate open while the others escape? Let them try to roll under their Strength on a d20.
Arm wrestling match between a PC with 9 Str and an NPC with 12? That's a difference of 3, so let's give the PC a (50 - 3*5)% = 35% chance on a d100 to win.
Arguably, a system of a.) only recording exceptional attributes and b.) having a rather small range of numbers makes that approach easier rather than harder to implement, especially when it comes to making judgment calls on where to draw the line. To use one of your examples:
Does the NPC buy the PCs' lies? Well, Alfin's, sure, he's got Cha 14. But Berta's too? She's got Cha 12, still pretty good, but I don't know.
- Does the NPC buy the PCs' lies? Alfin's got a Cha bonus, so yeah. Berta does not, so no.
So I don't think "rulings, not rules" is any more tied to the standard attribute range than "roll a D20 and add something in all situations" is tied to the bonuses.
Quote from: Wulfhelm on February 16, 2022, 02:01:54 AM
Arguably, a system of a.) only recording exceptional attributes and b.) having a rather small range of numbers makes that approach easier rather than harder to implement, especially when it comes to making judgment calls on where to draw the line. To use one of your examples:
Does the NPC buy the PCs' lies? Well, Alfin's, sure, he's got Cha 14. But Berta's too? She's got Cha 12, still pretty good, but I don't know.
- Does the NPC buy the PCs' lies? Alfin's got a Cha bonus, so yeah. Berta does not, so no.
Sure, but note that your approach makes it binary. Berta's Cha 12 could also mean "He's suspicious, but something about Berta makes him want to trust her. He hasn't passed judgment yet." So she has a chance, but she'll have to sell it harder, taking longer.
Now, with the modern (Stat - 10)/2 approach and unified task system, then no, whether you have an odd or even score probably isn't worth keeping track of. On the other end of the spectrum, 0e and 1e have no task system and their ability scores have large "dead zones" with no or minimal differences in mechanical effects, so keeping track of exact values for use in situational rulings is almost required.
B/X can go either way, with minimal revision, so it's down to taste there.
EDIT: Hit "quote" when I meant to hit "modify" and posted another comment.
Quote from: Cat the Bounty Smuggler on February 16, 2022, 02:37:21 AM
Sure, but note that your approach makes it binary. Berta's Cha 12 could also mean "He's suspicious, but something about Berta makes him want to trust her. He hasn't passed judgment yet." So she has a chance, but she'll have to sell it harder, taking longer.
That is just a matter of granularity. Attributes could still be quantified beyond "bad, normal, good". It's relatively simple: A range of X (=15 for attributes from 3 to 18) is only useful for rulings if you want X possible outcomes. I mean, what if she had Cha 11? Or Cha 13? In my experience, that level of granularity usually doesn't apply, and even a judgment like "she has to try a little harder" makes little difference to "it works" in practice.
I rarely see a need for more than 3-5 outcomes for an in-game task, and mostly just two (failed and succeeded) suffice. According to my own experience, even in systems that actually codify something like "it takes longer" into the system (MegaTraveller's task system, for example), this factor is usually disregarded because it's not important.
Modern D20-based systems try to make the relatively minuscule bonuses (in 5E more than ever) seem important, and manage to get away with this illusion because of people's desire to get every advantage they can and thus overvalue small numerical increases. But it is a simple matter of fact that for 90% of all rolls, the difference between Cha 10 (+0) and Cha 15 (+2) is irrelevant. It is just obscured by having a lot of possible dice roll outcomes, even if they translate into just two situational outcomes.
Quote from: Wulfhelm on February 16, 2022, 05:28:26 AM
Modern D20-based systems try to make the relatively minuscule bonuses (in 5E more than ever) seem important, and manage to get away with this illusion because of people's desire to get every advantage they can and thus overvalue small numerical increases. But it is a simple matter of fact that for 90% of all rolls, the difference between Cha 10 (+0) and Cha 15 (+2) is irrelevant. It is just obscured by having a lot of possible dice roll outcomes, even if they translate into just two situational outcomes.
Depends on the range. If you have a 15% chance of failure, a +2 can reduce that to a 5% chance of failure. A 10% difference can triple your odds of survival. This comes up more than you're suggesting, because the difficulties of various rolls aren't evenly spread across the spectrum. Instead, they tend to be clustered. Positive actions like hitting tend to be more likely than not, while rolls to avoid negative effects can be quite unlikely.
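To put numbers on that: here's cumulative survival over a string of such rolls at the two failure rates, a quick sketch assuming one roll per occurrence and nothing else in play:

```python
# Chance of still being alive after n save-or-die rolls
for p_fail in (0.15, 0.05):
    for n in (1, 5, 10, 20):
        print(f"fail {p_fail:.0%}, {n:>2} saves: {(1 - p_fail) ** n:.1%} alive")
# After 20 saves: ~3.9% alive at 15% failure vs. ~35.8% at 5%
```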
In my system characters get -5 to +5 in their talents. While these are naturally things like Strength and Fitness, they also include things like Wealth and Status (higher Wealth gets a higher payout on missions when hired for jobs, using the party's average Wealth).
Negative numbers mean the character is disadvantaged, and positive numbers advantaged, relative to the average human of their sex.
If a talent is 0 (average), it is simply not recorded on the sheet.
Also matters in which attributes/modifiers are commonly used and which ones are outliers but still meaningfully occur.
If the raw attributes go from 3 to 18, but every character starts with a minimum of 8, then the 3-7 range isn't meaningful for a player. It might as well be handled by some kind of "weakness" tag on the monsters. NPCs are, of course, a vast gray area. If you want to build in that the frail old sage really can't climb the ladder very fast and might fall, then it is back to how often that arises. Some games more than others.
Naturally, it also depends on the math of the underlying system. How well do the attributes/modifiers fit seamlessly into it so that players and even the GM can start to just use them without having to think about it consciously every time? Which is to say that they should be chosen to fit, not copied from a prior version or another game and then hammered squarely into a round hole. There's a certain amount of reverse engineering going on with attributes that fit a system well.
Personally, I really appreciate the curve and range of the early D&D -3 to +3, with the relatively rare modifiers and its natural bell curve outliers. I do like a bit more spread, though. I find the effective -1 to +5 of later D&D to be deficient in matching the system (in some ways, not all), and it makes the -1 this weird bit in the system. Plus, those modifiers are on a very different curve that is not to my taste (though OK in its own context). I also like some room for growth on attributes as characters advance in power, not least because it can make rolling for attributes more viable.
That's why I built my own system to go from -5 to +N, with the same math for frail NPCs and monsters, but with characters sitting in the -4 to +4 range, and the -4, -3, and +4 occupying the same rare space that the early D&D -3 and +3 occupy. The attributes ranging from 1 to 20+ give room for simple attribute advancement rules that fit the curve of attributes I wanted to get--i.e. reverse engineered exactly to do that.
Later D&D "simplifying" the modifiers to +1 every 2 points of attributes is exactly a case of "Chesterton's Fence": Tearing something down without understanding its purpose. Though in this case, they've kept the frame of the fence for tradition and removed its meaning. So yes, it would have been better to have dropped the attributes altogether given that first, bad step.
Quote from: Fighterboy on February 15, 2022, 03:33:56 PM
If an attribute is in the average range, why bother even recording it?
I've often thought that a player should choose a handful of attributes from a longer list than the standard 6. Those are either above or below average, conferring appropriate bonuses, and nothing else matters. Maybe it's more flavourful for a character to have bonuses/penalties in strength, dexterity, courage, moral fortitude and leadership?
From an OSR perspective it shouldn't be too hard to implement?
Nuance. Both a 9 and an 11 are average, but the 11 is greater than the 9. There will be no mechanical advantage, but it can provide a better handle for role-playing the character.
Quote from: Pat on February 16, 2022, 05:58:46 AM
Depends on the range. If you have a 15% chance of failure, a +2 can reduce that to a 5% chance of failure. A 10% difference can triple your odds of survival.
In 90% of all actual cases, that won't matter. That's just a mathematical certainty. Your +2 only matters if you roll a 2 or 3. For the other 18 possible rolls, it is irrelevant.
(And of course, for a D20, that is already the extreme end of the spectrum even if you take relative chances of success as a yardstick instead.)
And also of course, if you put the life or death of a character in the hands of a single D20 roll, you want every little improvement you can get. But that doesn't mean that it's any less random. A character with a +2 advantage can not reasonably hope to regularly survive save-or-die-dangers that a character without that bonus wouldn't. (Which is why I'd argue that save-or-die is probably not very good design.)
The relative ranges of die roll results and bonuses are simply such in D20 systems that random chance is much more relevant for success than character bonuses. That is a tangent to the original discussion, but it is nonetheless reality.
For OSR purposes, a 10 strength was average. A 15 gave no mechanical advantage regarding combat or bonuses.
But do you have any idea how much stronger you look military pressing 150 beside a person who can only do 100? Come on now!!
Depends on the system... If you use roll d20 under attribute, obviously every point counts.
And players often like these small details about their PCs.
For NPCs, however, it's +0s all over, unless that particular individual deserves a distinction.
Quote from: Wulfhelm on February 16, 2022, 05:28:26 AM
That is just a matter of granularity. Attributes could still be quantified beyond "bad, normal, good". It's relatively simple: A range of X (=15 for attributes from 3 to 18) is only useful for rulings if you want X possible outcomes. I mean, what if she had Cha 11? Or Cha 13? In my experience, that level of granularity usually doesn't apply, and even a judgment like "she has to try a little harder" makes little difference to "it works" in practice.
The ranges don't have to be the same for every situation, though. This NPC requires a Cha 14 to bluff, that one is more gullible and only needs an 11, and that one over there is a master spy who won't fall for anything less than Cha 17. And as @Eric Diaz said, with a roll-under or even a modified d% task, every point counts.
It's ultimately a matter of taste, of course. My point is that the old style was to use anything and everything at your disposal to make rulings, and the full 3-18 range is one of the things that can be used.
Quote from: Eric Diaz on February 16, 2022, 10:46:35 AM
Depends on the system... If you use roll d20 under attribute, obviously every point counts.
And players often like these small details about their PCs.
For NPCs, however, it's +0s all over, unless that particular individual deserves a distinction.
Even better: just stat NPCs as monsters. If something is going to be relevant outside of combat, you can note it in the description.
Oh, and I thought of another reason: 0e and B/X allow you to move points around at character creation. So at least there, the actual scores matter.
Quote from: Wulfhelm on February 16, 2022, 09:06:20 AM
Quote from: Pat on February 16, 2022, 05:58:46 AM
Depends on the range. If you have a 15% chance of failure, a +2 can reduce that to a 5% chance of failure. A 10% difference can triple your odds of survival.
In 90% of all actual cases, that won't matter. That's just a mathematical certainty. Your +2 only matters if you roll a 2 or 3. For the other 18 possible rolls, it is irrelevant.
(And of course, for a D20, that is already the extreme end of the spectrum even if you take relative chances of success as a yardstick instead.)
And also of course, if you put the life or death of a character in the hands of a single D20 roll, you want every little improvement you can get. But that doesn't mean that it's any less random. A character with a +2 advantage can not reasonably hope to regularly survive save-or-die-dangers that a character without that bonus wouldn't. (Which is why I'd argue that save-or-die is probably not very good design.)
The relative ranges of die roll results and bonuses are simply such in D20 systems that random chance is much more relevant for success than character bonuses. That is a tangent to the original discussion, but it is nonetheless reality.
Not how it works. If you only fail on a 1-3 on a 20, and a +2 means that you only fail on a 1 in 20, that divides the failure rate by a factor of 3. And if that's a save or die, over the course of a campaign, lacking that +2 means you're three times more likely to die. It doesn't matter what you roll the other 95% of the time, that remains the overall effect.
Quote from: Pat on February 16, 2022, 01:18:08 PMNot how it works.
Yes how it works. For 90% of all actual rolls, a +2 bonus didn't matter. That is a simple fact. And to be sure, something like "your chance is three times as high" is often deceptive and only psychologically relevant IRL too: If your chance to develop certain types of cancer is three times as high when living near Fukushima Daiichi (hypothetical! It isn't!) then that might scare some people, but not rational people who realize that even "three times as high" is still "extremely low."
In any event, according to my experience the perception of players is going to be pretty much the same whether you spell out to them "you have an 85% chance of failure" or "you have a 95% chance of failure."
Or to put it differently: If, in game terms, you had either a 0.01% chance of dying or a 0.05% chance of dying, you'd naturally opt for the former if you had free choice, but your expectation would in both cases be "Nah, I'm gonna be fine." Even if IRL, these would be perceived as massive differences on which to hinge the functioning of entire societies.
With the range of bonuses and die rolls being what it is in D&D derivatives, luck of the dice is way, way more important than character abilities for nearly all situations, and especially for such "save or die" type situations. That is a plain fact. To be honest it's going to be hard to discuss this with someone who can't accept that.
QuoteAnd if that's a save or die, over the course of a campaign,
If it happens only once or rarely, then it was overwhelmingly just a matter of luck. And if it happens regularly, then... it's still a matter of luck.
Quote from: Wulfhelm on February 16, 2022, 02:15:13 PM
Quote from: Pat on February 16, 2022, 01:18:08 PMNot how it works.
Yes how it works. For 90% of all actual rolls, a +2 bonus didn't matter. That is a simple fact. And to be sure, something like "your chance is three times as high" is often deceptive and only psychologically relevant IRL too: If your chance to develop certain types of cancer is three times as high when living near Fukushima Daiichi (hypothetical! It isn't!) then that might scare some people, but not rational people who realize that even "three times as high" is still "extremely low."
No, it's not how it works, and your choice of an example shows you're completely missing the concept. The chance of an individual getting cancer from Fukushima is very low, and because it's measured over a lifetime, it has a tiny marginal effect on the individual's expected lifespan. In extreme cases, it might shave off a few months. But a PC in an RPG that uses save-or-die effects will have to make save-or-die checks over the course of their career. If the chance of failing goes from 5% to 15%, then the character's expected lifespan will drop by a factor of 3. If a character normally lives for 60 sessions, they'll now have an expected lifespan of 20 sessions. That's a huge difference.
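The lifespan arithmetic is just the geometric distribution: if each check kills with probability p, the first fatal failure is expected on check 1/p. A one-liner sketch:

```python
# Expected check on which the first fatal failure lands (geometric mean = 1/p)
for p_fail in (0.05, 0.15):
    print(f"fail {p_fail:.0%}: death expected around check {1 / p_fail:.1f}")
# 5% -> check 20, 15% -> check ~6.7: the factor-of-3 drop in expected lifespan
```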
Ars Magica uses only integers for stats. That stat is the modifier. If you have no modifier then the stat is zero. When I first saw it I thought, "of course!" Unfortunately for D&D we're stuck in a weird tradition of tables and charts from Gary's time as an insurance clerk in Chicago. The way I use "zero" stats in D&D is whenever there's a tie in a contest the higher stat wins.
Quote from: AtomicPope on February 16, 2022, 07:05:27 PM
Ars Magica uses only integers for stats. That stat is the modifier. If you have no modifier then the stat is zero. When I first saw it I thought, "of course!" Unfortunately for D&D we're stuck in a weird tradition of tables and charts from Gary's time as an insurance clerk in Chicago. The way I use "zero" stats in D&D is whenever there's a tie in a contest the higher stat wins.
One argument against 0-centered stats is psychological. Shifting a -3 to +3 scale up 4 points to a 1 to 7 scale makes some people happier, because a 4 sounds better than a 0, even if the mechanics are perfectly equivalent in all other ways. This seems to be a factor with 0s and negatives, and can lead to stat inflation because there's pressure to avoid the lower half of the scale.
Quote from: Fighterboy on February 15, 2022, 03:33:56 PM
If an attribute is in the average range, why bother even recording it?
I've often thought that a player should choose a handful of attributes from a longer list than the standard 6. Those are either above or below average, conferring appropriate bonuses, and nothing else matters. Maybe it's more flavourful for a character to have bonuses/penalties in strength, dexterity, courage, moral fortitude and leadership?
From an OSR perspective it shouldn't be too hard to implement?
For the same reason we have a zero in the number system.
Quote from: Pat on February 16, 2022, 07:12:10 PM
Quote from: AtomicPope on February 16, 2022, 07:05:27 PM
Ars Magica uses only integers for stats. That stat is the modifier. If you have no modifier then the stat is zero. When I first saw it I thought, "of course!" Unfortunately for D&D we're stuck in a weird tradition of tables and charts from Gary's time as an insurance clerk in Chicago. The way I use "zero" stats in D&D is whenever there's a tie in a contest the higher stat wins.
One argument against 0-centered stats is psychological. Shifting a -3 to +3 scale up 4 points to a 1 to 7 scale makes some people happier, because a 4 sounds better than a 0, even if the mechanics are perfectly equivalent in all other ways. This seems to be a factor with 0s and negatives, and can lead to stat inflation because there's pressure to avoid the lower half of the scale.
Beyond this, in dice pool systems, the value tells you how many dice to roll and rolling less than 0 dice is unfun.
Quote from: Pat on February 16, 2022, 05:53:29 PM
But a PC in an RPG that uses save-or-die effects will have to make save-or-die checks over the course of their career. If the chance of failing goes from 5% to 15%, then the character's expected lifespan will drop by a factor of 3. If a character normally lives for 60 sessions, they'll now have an expected lifespan of 20 sessions.
Hm, no?
If you make a save-or-die roll once per session, as you seem to imply, a character with the +2 bonus has a <50% chance of surviving 14 sessions. Or a <50% chance of surviving 5 sessions without it.
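For reference, those session counts are just where cumulative survival crosses 50%, taking the once-per-session save as given:

```python
import math

# Sessions until the odds of still being alive drop below 50%
for p_fail in (0.05, 0.15):
    sessions = math.log(0.5) / math.log(1 - p_fail)
    print(f"fail {p_fail:.0%}: <50% survival after {math.ceil(sessions)} sessions")
# 5% -> 14 sessions, 15% -> 5 sessions
```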
But more to the point, neither character even in this extreme scenario, with the lowest theoretically possible chance of dying on a single D20 roll, is likely to make it through an entire campaign. And even more to the point, an extreme result is going to do them in.
If you make save-or-die scenarios, which typically don't happen at this extreme range, but are more likely to happen at differences of, say "need 7 or more to survive" and "need 5 or more to survive", a regular feature, your characters will not survive for long unless you either cheat (by means of an actual rule mechanic or otherwise) or you are unbelievably lucky.
As I said, you picked the most extreme scenario possible. If the +2 bonus, as it more typically does, shifted your chances from 13 in 20 to 15 in 20, you would not survive, on average, even two occurrences in the former case and less than three in the latter case.
But then again, these are theoretical possibilities, not actual die rolls. In all of these cases, it is quite likely that your bonus never ever matters even once. Do you understand that and do you understand why?
I'm really having flashbacks to a former 3.5 group of mine here. Even after I had hammered home the point that when making saves it was a considerable time saver to just roll the die before calculating their bonus to +2 or +3 accuracy with all the buffs, even after it had been practically demonstrated to them that this almost always worked, they could not wrap their heads around the concept.
Quote from: Wulfhelm on February 17, 2022, 03:09:38 PM
Quote from: Pat on February 16, 2022, 05:53:29 PM
But a PC in an RPG that uses save-or-die effects will have to make save-or-die checks over the course of their career. If the chance of failing goes from 5% to 15%, then the character's expected lifespan will drop by a factor of 3. If a character normally lives for 60 sessions, they'll now have an expected lifespan of 20 sessions.
Hm, no?
If you make a save-or-die roll once per session, as you seem to imply, a character with the +2 bonus has a <50% chance of surviving 14 sessions. Or a <50% chance of surviving 5 sessions without it.
Some friendly advice. If you aren't actually answering the question, you're wrong. Even if your statements and your math are technically correct, if you're not addressing what's at issue, you're wrong. This dude said you'll survive 3 times as long. He spitballed some numbers. His numbers might not be correct--pending the specifics of an unknown number of unstated variables--but the ratio you yourself ended up with came out really close to 3 to 1. So why are you even arguing here?
QuoteBut more to the point, neither character even in this extreme scenario, with the lowest theoretically possible chance of dying on a single D20 roll, is likely to make it through an entire campaign. And even more to the point, an extreme result is going to do them in.
If you make save-or-die scenarios, which typically don't happen at this extreme range, but are more likely to happen at differences of, say "need 7 or more to survive" and "need 5 or more to survive", a regular feature, your characters will not survive for long unless you either cheat (by means of an actual rule mechanic or otherwise) or you are unbelievably lucky.
Enter the unstated variables. None of this hiding behind abstraction and isolated hypotheticals. What are we actually talking about here? In my experience, the most common form of Save or Die are saves versus poison. And it's most common that a successful attack roll is needed to deliver the poison. So it's not just roll 7 or better. Maybe the snake needs to roll 17 or better just to hit. Here's another variable. What if the fighter manages to slay the snake before the snake even gets to attack? That enters initiative, hit, and damage rolls into the equation.
That makes the math a lot more complicated. But this is what actual play looks like. Now what if we're talking about Dex? That could affect the initiative score. If the fighter is fighting with two weapons, Dex could mitigate two-weapon fighting penalties. And then Dex affects the snake's chance to hit. At the end of it, you're going to find the chance of the fighter dying from a failed save is going to end up a lot lower than 5%, and the impact of the 2-point Dex adjustment is going to cut the chance down by more than half (we halve the chance just by the snake's hit roll alone).
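Running that hypothetical snake shows how the layers multiply; a rough sketch using only this example's numbers (17+ to hit, 15% save failure without the bonus):

```python
import random

def p_death(snake_needs, save_fail, trials=200_000):
    """Snakebite kills only if the snake hits AND the save vs. poison fails."""
    deaths = 0
    for _ in range(trials):
        if random.randint(1, 20) >= snake_needs and random.random() < save_fail:
            deaths += 1
    return deaths / trials

print(f"{p_death(17, 0.15):.1%}")  # ~3.0%: baseline death chance per encounter
print(f"{p_death(19, 0.15):.1%}")  # ~1.5%: the -2 to the snake's hit roll alone
print(f"{p_death(19, 0.05):.1%}")  # ~0.5%: stacking the save bonus on top
```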
QuoteBut then again, these are theoretical possibilities, not actual die rolls. In all of these cases, it is quite likely that your bonus never ever matters even once. Do you understand that and do you understand why?
I'm really having flashbacks to a former 3.5 group of mine here. Even after I had hammered home the point that when making saves it was a considerable time saver to just roll the die before calculating their bonus to +2 or +3 accuracy with all the buffs, even after it had been practically demonstrated to them that this almost always worked, they could not wrap their heads around the concept.
Well, continuing with my example above, the adjustment will matter in 14.25% of initiative roll pairs (chance of bumping losses to wins, losses to ties, and ties to wins), 10% of the fighter's hit rolls (of which there will be at least 3, depending on edition of the game, considering the fighter's level, potential specialization, and use of a second weapon), and 10% of the snake's hit rolls, assuming the snake survives. All totaled, the 2-point DEX adjustment is going to matter at least 43.7% of the time. Of course in actual play there could also be STR and CON bonuses to consider, as well as magical adjustments.
I completely understand what it is you're saying. It's just that we generally don't resolve situations with a single die roll, and these attributes affect more than just one thing. The totality of actual play is going to produce very different results from your theory. So I would strongly recommend less hammering and more humility.
Quote from: Wulfhelm on February 17, 2022, 03:09:38 PM
Quote from: Pat on February 16, 2022, 05:53:29 PM
But a PC in an RPG that uses save-or-die effects will have to make save-or-die checks over the course of their career. If the chance of failing goes from 5% to 15%, then the character's expected lifespan will drop by a factor of 3. If a character normally lives for 60 sessions, they'll now have an expected lifespan of 20 sessions.
Hm, no?
If you make a save-or-die roll once per session, as you seem to imply, a character with the +2 bonus has a <50% chance of surviving 14 sessions. Or a <50% chance of surviving 5 sessions without it.
But more to the point, neither character even in this extreme scenario, with the lowest theoretically possible chance of dying on a single D20 roll, is likely to make it through an entire campaign. And even more to the point, an extreme result is going to do them in.
If you make save-or-die scenarios, which typically don't happen at this extreme range, but are more likely to happen at differences of, say "need 7 or more to survive" and "need 5 or more to survive", a regular feature, your characters will not survive for long unless you either cheat (by means of an actual rule mechanic or otherwise) or you are unbelievably lucky.
As I said, you picked the most extreme scenario possible. If the +2 bonus, as it more typically does, shifted your chances from 13 in 20 to 15 in 20, you would not survive, on average, even two occurrences in the former case and less than three in the latter case.
But then again, these are theoretical possibilities, not actual die rolls. In all of these cases, it is quite likely that your bonus never ever matters even once. Do you understand that and do you understand why?
I'm really having flashbacks to a former 3.5 group of mine here. Even after I had hammered home the point that when making saves it was a considerable time saver to just roll the die before calculating their bonus to +2 or +3 accuracy with all the buffs, even after it had been practically demonstrated to them that this almost always worked, they could not wrap their heads around the concept.
Yes, I did choose the most extreme example, because it illustrates the point I'm making. Go back and read my post that started this whole tangent. (https://www.therpgsite.com/pen-paper-roleplaying-games-rpgs-discussion/attributes-why-quantify-the-average/msg1206810/#msg1206810) My entire point was that the effect of a bonus depends on where in the d20 range it falls.
Which you're finally acknowledging now, which I suppose is progress. Though you're also denying it, and combining it with a patronizing arrogance. So baby steps, I guess.
You are correct that most saves aren't quite that extreme. But your baseline assumptions aren't universal. For instance, you're using 5+ or 7+ as typical for saves, and that's not true across all editions. In AD&D, a 1st level fighter needs a 17+ to save vs. spells. More generally, in old school variants of D&D, saves follow a progression where characters have a small chance at success at low levels, and only improve into your range at the very highest of levels, and often not even then -- a 100th level thief in AD&D, for instance, has saves as bad as 11+ (breath weapon).
This gets more complex in third edition, where saves turn into an arms race against DCs. Whether you need a 2+ to save, or a natural 20, depends not just on a character's level, but weak vs. strong saves, DC optimization (usually involving spells), the weird effects of monster HD, and other factors. By mid level, saving against an effect that targets one of your strongest saves becomes very likely, while saving against an effect that targets one of your weakest saves becomes unlikely. And at the highest levels, this variance can easily exceed the d20 range, meaning only a natural 1 or a natural 20 matters.
D&D does tend toward extremes in that regard, particularly older editions. This also comes up with attacks in old school D&D, where characters start with a poor chance to hit many opponents, but by high levels are hitting almost all the time. More generally, across the entire RPG space, a lot of designers aim at giving PCs a good chance of success on positive actions. That means skill checks, attacks, and so on often have about a 70% chance of success. This is inverted for negative effects, like saves of various kinds. Which more closely maps to your assumptions. And while these won't all lead to a 3:1 difference in survival, if you're talking about bonuses in the -5/+5 range (not atypical for D&D) when the baseline is about a 70% chance of success, then you run into the end of the d20 spectrum fairly frequently. And when that happens, the effect of a +2 can shift fairly dramatically.
Sure, it's fine if you're using a simple system wherein the attributes cannot be modified during game play. Otherwise the distribution of normally generated stats needs to be preserved.
I don't care for systems where everything remains static to when I created the character for anything more than one shots.
Quote from: Cat the Bounty Smuggler on February 16, 2022, 12:58:46 PM
Quote from: Wulfhelm on February 16, 2022, 05:28:26 AM
That is just a matter of granularity. Attributes could still be quantified beyond "bad, normal, good". It's relatively simple: A range of X (=15 for attributes from 3 to 18) is only useful for rulings if you want X possible outcomes. I mean, what if she had Cha 11? Or Cha 13? In my experience, that level of granularity usually doesn't apply, and even a judgment like "she has to try a little harder" makes little difference to "it works" in practice.
The ranges don't have to be the same for every situation, though. This NPC requires a Cha 14 to bluff, that one is more gullible and only needs an 11, and that one over there is a master spy who won't fall for anything less than Cha 17. And as @Eric Diaz said, with a roll-under or even a modified d% task, every point counts.
It's ultimately a matter of taste, of course. My point is that the old style was to use anything and everything at your disposal to make rulings, and the full 3-18 range is one of the things that can be used.
Quote from: Eric Diaz on February 16, 2022, 10:46:35 AM
Depends on the system... If you use roll d20 under attribute, obviously every point counts.
And players often like these small details about their PCs.
For NPCs, however, it's +0s all over, unless that particular individual deserves a distinction.
Even better: just stat NPCs as monsters. If something is going to be relevant outside of combat, you can note it in the description.
Oh, and I thought of another reason: 0e and B/X allow you to move points around at character creation. So at least there, the actual scores matter.
Yes, sure; 99% of NPCs do not require unique stats.
Quote from: Lunamancer on February 18, 2022, 02:16:03 AMSome friendly advice. If you aren't actually answering the question, you're wrong.
Where did you see a question in his posting? (<- That was a question.)
QuoteEnter the unstated variables. None of this hiding behind abstraction and isolated hypotheticals. What are we actually talking about here? In my experience, the most common form of Save or Die are saves versus poison. And it's most common that a successful attack roll is needed to deliver the poison. So it's not just roll 7 or better. Maybe the snake needs to roll 17 or better just to hit. Here's another variable. What if the fighter manages to slay the snake before the snake even gets to attack? That enters initiative, hit, and damage rolls into the equation.
Then the actual save-or-die roll becomes so rare that a +2 bonus on it is not likely to ever matter over the course of an entire campaign.
It is actually very simple, and remains so: Such a small bonus is irrelevant for 90% of all single rolls. To become statistically relevant, it needs to come into play very often. If a severe in-game consequence (e.g. character death) is tied to it, and the roll comes into play very often, then even in extreme cases, but most definitely in more typical ones, said severe consequence is likely to happen sooner rather than later.
This reminds me of one former DM of mine who insisted that every combat should be designed so that it would be an extremely close affair and the party had a good chance of losing catastrophically. I told him in no uncertain terms this meant, statistically, that unless combat was going to be very rare, that would inevitably result in TPKs or near-TPKs very soon. He didn't believe me. But before long he started fudging results to avoid the very scenario I had predicted.
Quote from: Pat on February 18, 2022, 09:41:17 AMYes, I did choose the most extreme example. Because it illustrates the point I'm making.
That approach doesn't have a lot of merit when talking about practical applications. Just sayin'.
QuoteThis gets more complex in third edition, where saves turn into an arms race against DCs. Whether you need a 2+ to save, or a natural 20, depends not just on a character's level, but weak vs. strong saves,
It also depends on having a +18 difference in saves. Not +2. You're not going to get any argument from me against a +18 bonus being relevant in almost all cases (90% of all single rolls to be exact, so the precise inversion of the situation with the +2 bonus.)
But a +2 bonus is only going to be relevant once every 10 rolls on average. Now there's one area of D&D derivatives (including the main perp 5E) where this can and does happen: Combat, at least at higher levels. Simply because many attack rolls are usually needed to shave down a high-level entity's hit points.
(There's one other area, perception, but that is another gripe of mine in general.)
But for things which you don't roll that often, it's different. A +2 bonus on a D20 becomes relevant once every 10 rolls, statistically, regardless of the actual target numbers involved. So if you only use that roll 2 or 3 times over the course of a session (or campaign?) it is not likely to ever become relevant over the course of that session (or campaign.)
Small bonuses relative to the dice roll range only become relevant if you roll very often. I really don't know how I can elaborate further on this basic fact.
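If anyone wants to see the "once every 10 rolls" figure fall out, a minimal sketch (assuming the target number stays inside the 3-20 band, so a +2 flips exactly two faces of the die):

```python
import random

def rolls_until_plus2_matters(need, trials=100_000):
    """Average d20 rolls until the raw die lands on need-1 or need-2,
    the only results where a +2 turns a miss into a success."""
    total = 0
    for _ in range(trials):
        n = 1
        while random.randint(1, 20) not in (need - 1, need - 2):
            n += 1
        total += n
    return total / trials

print(rolls_until_plus2_matters(12))  # ~10, whatever the target number
```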
Quote from: Wulfhelm on February 19, 2022, 01:54:35 PM
Quote from: Pat on February 18, 2022, 09:41:17 AMYes, I did choose the most extreme example. Because it illustrates the point I'm making.
That approach doesn't have a lot of merit when talking about practical applications. Just sayin'.
I am talking about practical applications. There are situations where someone has a 15% chance of failure, and a +2 bonus means they end up with a 5% chance of failure. It's not even particularly uncommon.
Quote from: Wulfhelm on February 19, 2022, 01:54:35 PM
QuoteThis gets more complex in third edition, where saves turn into an arms race against DCs. Whether you need a 2+ to save, or a natural 20, depends not just on a character's level, but weak vs. strong saves,
It also depends on having a +18 difference in saves. Not +2. You're not going to get any argument from me against a +18 bonus being relevant in almost all cases (90% of all single rolls to be exact, so the precise inversion of the situation with the +2 bonus.)
But a +2 bonus is only going to be relevant once every 10 rolls on average. Now there's one area of D&D derivatives (including the main perp 5E) where this can and does happen: Combat, at least at higher levels. Simply because many attack rolls are usually needed to shave down a high-level entity's hit points.
(There's one other area, perception, but that is another gripe of mine in general.)
But for things which you don't roll that often, it's different. A +2 bonus on a D20 becomes relevant once every 10 rolls, statistically, regardless of the actual target numbers involved. So if you only use that roll 2 or 3 times over the course of a session (or campaign?) it is not likely to ever become relevant over the course of that session (or campaign.)
Small bonuses relative to the dice roll range only become relevant if you roll very often. I really don't know how I can elaborate further on this basic fact.
Have you ever played third edition? At high levels, a +18 difference between the saves of characters in the same party is common. And that has nothing to do with the +2, because the 18-point difference I described is between different characters, while the +2 bonus we've been talking about is a bonus that's applied to a particular character's roll. Not even vaguely the same thing.
Your "+2 only comes up 1 in 10 times" is also utterly irrelevant to the points I've been making. I've been talking about how the probabilities change as the target number changes, the effect of low-probability but significant effects over time/many rolls, and how small bonuses can multiply their effect.
You're either not reading what I'm writing, or you have such a weak understanding of the topic that you can't even grasp the most basic of concepts.
This thread is about whether or not a +2 bonus in a d20 game is significant now. :o
Somewhere between "A +2 bonus could snatch you from the jaws of death!" and "a +2 bonus is completely and utterly irrelevant" the truth lies.
Fact of the matter is, as mundane as a +2 bonus may sound, most characters don't have ONLY a +2 bonus. They usually have a +2 bonus from a stat, +5 from a skill, +3 from a Feat or something, plus probably some buff effect or a magic item or something. That's already a +10 total from those three things alone, which is half the variable range of a d20. And depending on the edition and the level, characters could get way higher than that. So how many bonuses can we add to the same roll, and how high can each of those individual bonuses be, before it gets ridiculous? And how much room for growth do you have left if you blow everything into making attribute bonuses high from the onset?
That +2 bonus sounds low on its own, but that's usually just an extra you add on top of a bunch of other crap. And attributes are the most general abilities in the game, so they can't be too high, cuz they affect everything tied to that ability, including derived stats like HP or stuff like damage.
Quote from: VisionStorm on February 19, 2022, 08:51:32 PMSomewhere between "A +2 bonus could snatch you from the jaws of death!" and "a +2 bonus is completely and utterly irrelevant" the truth lies.
Yes. It lies with "a +2 bonus is mostly irrelevant". Which is what I'm saying. Confronted with a situation where a d20 roll can snatch you(r character, presumably) from the jaws of death, in 90% of all such cases a +2 bonus won't have helped.
This is completely a tangent now, but one problem with this is that a lot of D20-based systems try and sell a +2 bonus as some major difference in character competence; e.g. the proficiency bonus difference between a 1st-level and a 10th-level character in 5E or the difference between an average Str 10 schlub and the Str 15 village strongman in 3.x.
Of course, that is as much related to the randomness of D20 (or more generally dice-based) resolution as to the specific range. If you say "you need to beat DC 13 to lift the gate", the Str 10 schlub is obviously 80% as likely to do it as the village strongman ("... lift with the legs, Sir Rogar!")
If OTOH you say "you need Str 13+ to lift the gate", or possibly "you need [rolls 1D6+10] Str 13+ to lift the gate", you have a different situation and the problem vanishes. Of course the latter approach, if systematized, also has its own randomness problems.
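The gate numbers, for reference; a quick check taking the DC 13 and the 3.x-style +2 for Str 15 as given:

```python
from fractions import Fraction

dc = 13
p_schlub = Fraction(21 - dc, 20)           # Str 10, +0: needs 13+ -> 8/20
p_strongman = Fraction(21 - (dc - 2), 20)  # Str 15, +2: needs 11+ -> 10/20
print(p_schlub, p_strongman, p_schlub / p_strongman)  # 2/5, 1/2, 4/5
```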
Quote from: VisionStorm on February 19, 2022, 08:51:32 PM
This thread is about whether or not a +2 bonus in a d20 game is significant now. :o
That's not what we've been discussing. My entire point, from the very start, is that the effect of a +2 bonus depends on where you are in the d20 range.
For instance, let's say you're a 1st level fighter in AD&D. You're fighting an opponent with reasonable armor, so you need to roll a 20 to hit. But let's say you get a +2 bonus to hit for some reason. That means you now hit on an 18, 19, or 20. Three times as often. Which means a +2 bonus results in a threefold increase in damage, over time, against opponents with that AC.
That's huge. A +2 to hit means a lot in that circumstance. It also means a lot when you have a small chance to save against a save or die effect. But the importance of a +2 diminishes dramatically as we move away from that end of the d20 range. If you need a 12+ to hit, then a +2 means you need a 10+ to hit. That means your chance to hit increases from 9 out of 20, to 11 out of 20. That's 11/9, or a 22% improvement. In the middle of the range, a +2 means a x1.22 increase in damage, over time. That's much, much smaller.
Here's the effect at each point in the range:
Base roll needed to hit / Increase in average damage over time with a +2 bonus to hit
20 x3
19 x2
18 x1.67
17 x1.5
16 x1.4
15 x1.33
14 x1.29
13 x1.25
12 x1.22
11 x1.2
10 x1.18
9 x1.17
8 x1.15
7 x1.14
6 x1.13
5 x1.13
4 x1.12
3 x1.11
2 x1.11
While x3 damage is huge, look at what happens as you shift further down the table. Even a single point improvement in your chances means a +2 bonus drops from x3 to x2, which is a big drop. And further down the scale, it basically plateaus. The difference between x1.13 and x1.13 is literally a rounding error at two digits.
And note it's never 10%. A +10% shift along the spectrum never results in a +10% increase in the chances of success.
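The whole table is one loop, if anyone wants to check it (natural 1s and 20s ignored, same as the table itself):

```python
from fractions import Fraction

# Relative damage gain from a +2 to hit, by base d20 roll needed
for need in range(20, 1, -1):
    without = Fraction(21 - need, 20)
    with_bonus = Fraction(21 - (need - 2), 20)
    print(f"{need:>2}  x{float(with_bonus / without):.2f}")
```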
That's my point. It's useful to recognize that a +2 bonus means a lot if you're at the right end of the spectrum, but the effect drops steeply as you move away from that end.
Quote from: Pat on February 20, 2022, 04:44:43 AMThat's not what we've been discussing. My entire point, from the very start, is that the effect of a +2 bonus depends on where you are in the d20 range.
... in response to my assertion that in 90% of all actual rolls, a +2 bonus is irrelevant, which is a simple fact unaffected by where you are in the d20 range. Whether it is the difference between needing 1 vs. 3, 7 vs. 9 or 17 vs. 19 - in 90% of all these cases, the bonus will not have mattered.
So, on that front, you either a.) simply refused to accept statistical reality or b.) don't like that aspect of statistical reality and thus built a strawman from a different aspect.
But let's talk about that strawman:
QuoteFor instance, let's say you're a 1st level fighter in AD&D. You're fighting an opponent with reasonable armor, so you need to roll a 20 to hit. But let's say you get a +2 bonus to hit for some reason. That means you now hit on an 18, 19, or 20. Three times as often. Which means a +2 bonus results in a threefold increase in damage, over time, against opponents with that AC.
a.) It also means that even with the bonus, in 85% of all combat rounds you miss. So what you have here is either a boring fight with an extremely high whiff factor, especially if "I attack" is the only sensible option for your 1st level fighter or...
b.) ... you're fighting an opponent who severely outclasses you and who will very likely kill you before your +2 bonus ever becomes relevant.
c.) If on the other hand, you are fighting another 1st level (or otherwise low hp) opponent who just happens to have an AC of 0 or lower, and you also have the same amount of armor, then yes, you are likely to luck out before he does. But it would just be that: Lucking out. And for the specific example, the random damage roll is going to be much more relevant than the to-hit roll in such scenarios.
In a different scenario with higher-level characters but the same sort of to-hit probabilities: That is basically the only scenario in D&D derivative games where such a small bonus becomes a discernable advantage: Because, as I explained earlier, it is only relevant when you roll a lot - and with high hit point totals that becomes feasible because characters no longer go down in one or two hits.
And a couple more remarks on your combat (why always combat?) example:
1.) Base roll needed to hit / Average number of combat rounds until a +2 bonus becomes relevant.
20 10
19 10
4-18 10
3 9... just kidding, it's 10
2 see below
2.) Only when the base number needed to hit is exactly 20 do you statistically score more hits thanks to the bonus than you would have scored without it. If it is 19, the numbers are even, and for all lower scenarios, the hits you would have scored anyway increasingly outnumber the hits you needed the bonus for.
3.) Speaking of that: If the base number is higher than 20, which is a possibility in all D&D variants I know, then the bonus becomes less relevant again. If in PF 1st, you have AC 22 and your opponents have +2 and +0 bonuses respectively, then the +2 bonus obviously has no relevance whatsoever. Similarly so when the base number is below three. So I admit wrt combat, I was incorrect in stating that for all scenarios, in 90% of all actual rolls the bonus does not matter. To correct myself: In at least 90% of all actual rolls, the +2 bonus does not matter.
4.) Speaking some more of that: In games where a "natural" 20 is an automatic critical hit, the bonus again becomes even less relevant.
In any event: Point still stands - small bonuses are rarely relevant for individual rolls, thus they become relevant statistically only when you make a lot of rolls, for which combat is the go-to example if not the only one.
OK, now that we've established that we're just talking past each other and quibbling about statistical anomalies (or whatever term would fit without turning this into a protracted semantic discussion) at the lower or higher ends of the spectrum. What would be a way to handle rolls and ability increments that would make them feel more relevant, while still allowing some room for growth? Or is that even a possibility in d20+Mod mechanic systems (or other systems for that matter)?
Cuz it seems to me that simply making modifiers larger (for example) would just create its own share of problems, like limiting room for growth or creating its own statistical anomalies, where getting any bonus would make average difficulty tasks trivial and near-automatic success, but not getting a bonus would make any (or most) high difficulty task a whiff fest.
Also (on a separate, but related note) the issue with Strength tests in the D&D/d20 System is that they exist (or at least should exist) on a separate scale from most skill rolls, but are still handled using the same mechanics as skills, despite there being no skill bonus to add on top of the attribute bonus to the roll. So it creates a circumstance where the ONE ability that gives you any possible bonuses to that specific type of roll doesn't provide a high enough bonus for it to be meaningful.
And much like combat, this is the ONLY example where this stuff normally happens*, cuz most tasks related to other attributes can potentially benefit from some skill or another to modify the base stat and give you extra bonuses to make up the difference. If you're attempting some acrobatic stunt, for example, that's clearly tied to Dexterity, but it's also reliant on your Acrobatics skill rather than just raw ability. So you'd get multiple stacking bonuses to determine your total modifier for your roll, allowing for greater differences between someone with the proper abilities (plural) for that task and someone without them.
The same could potentially apply to just about any other raw attribute roll. A knowledge related roll, for example, would clearly be based on INT, but if you have no skill related to that type of knowledge, your raw INT alone shouldn't give you that much of an advantage, even though a smarter character obviously should get a higher chance of recalling a bit of relevant info.
It's only on STR vs STR rolls where we get into this issue where STR is the ONLY obvious ability that would fit the roll, and a +2 bonus is supposed to represent "high" STR, which SHOULD be significant for that roll and that type of roll alone*, but barely gives you any advantage, cuz you only get your attribute bonus alone, while the system is built under the assumption that attribute bonuses are only an extra you add on top of other stuff (such as a Skill or Combat Bonus) to supplement it. And attribute bonus ranges reflect that assumption. So proper STR vs STR rolls fall apart, cuz pure raw attribute bonuses are built on a scale that assumes you always get another modifier and your attribute bonus is just an extra you add on top. Except that a STR vs STR test is a stand-alone roll where STR is the only relevant ability and every point of difference should give you a significant edge over someone who doesn't have a bonus, yet it doesn't.
*And even if someone were to come up with an example of another attribute having a similar issue (I can't think of one, but maybe there's an example out there), I would submit that every single one of those potential examples would be outliers rather than common rolls, and probably less likely to come up than a STR vs STR roll. But the absolute VAST majority of potential task rolls in an RPG OTHER than STR vs STR (or some pure STR rolls, like breaking stuff) would have SOME hypothetical skill/combat bonus or whatever tied to it that would make up the difference when determining the ultimate range of possible modifiers you could get for the roll. It's only on raw Attribute vs Attribute (or pure attribute) rolls that this happens, and only STR vs STR where it seems that the attribute itself should also effectively be the "skill" as well. But there's no mechanic to make up for the difference.
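To put rough numbers on that, here is a minimal Python sketch of opposed d20 rolls, assuming ties are rerolled and using a hypothetical +5 of skill ranks for the trained roller -- both are illustrative assumptions, not any edition's exact rules:

from itertools import product

def opposed_win(edge, die=20):
    """Win chance for the side with the modifier edge; ties rerolled."""
    wins = ties = 0
    for a, b in product(range(1, die + 1), repeat=2):
        if a + edge > b:
            wins += 1
        elif a + edge == b:
            ties += 1
    return wins / (die * die - ties)

print(f"raw STR vs STR, +2 edge:       {opposed_win(2):.0%}")  # ~60%
print(f"attr + skill vs untrained, +7: {opposed_win(7):.0%}")  # ~80%

# A skill-backed check can open up a real gap, while a raw attribute
# check caps out at the tiny attribute mod -- barely better than a
# coin flip, which is exactly the complaint here.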
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
Cuz it seems to me that simply making modifiers larger (for example) would just create its own share of problems, like limiting room for growth or creating its own statistical anomalies, where getting any bonus would make average difficulty tasks trivial and near-automatic success, but not getting a bonus would make any (or most) high difficulty task a whiff fest.
Yes, it seems you'd likely run into one or the other situation, depending on whether or not you also adjusted target numbers up.
Making ability score modifiers larger would certainly emphasize abilities more. Dark Sun did something like this -- but only after removing most other modifiers and scaling the environmental obstacles as well.
Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
Quote from: Pat on February 20, 2022, 04:44:43 AMThat's not what we've been discussing. My entire point, from the very start, is that the effect of a +2 bonus depends on where you are in the d20 range.
... in response to my assertion that in 90% of all actual rolls a +2 bonus is irrelevant, which is a simple fact unaffected by where you are in the d20 range...
Exactly. This is rooted in your failure to understand the difference between relative and absolute frequency, and that small changes in relative frequency can, over many rolls, have dramatically outsize effects. You're basing this "defense" (which isn't) on a single roll, when that's not what I've been talking about at all.
Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
QuoteFor instance, let's say you're a 1st level fighter in AD&D. You're fighting an opponent with reasonable armor, so you need to roll a 20 to hit. But let's say you get a +2 bonus to hit for some reason. That means you now hit on an 18, 19, or 20. Three times as often. Which means a +2 bonus results in a threefold increase in damage, over time, against opponents with that AC.
a.) It also means that even with the bonus, in 85% of all combat rounds you miss. So what you have here is either a boring fight with an extremely high whiff factor, especially if "I attack" is the only sensible option for your 1st level fighter or...
b.) ... you're fighting an opponent who severely outclasses you and who will very likely kill you before your +2 bonus ever becomes relevant.
c.) If on the other hand, you are fighting another 1st level (or otherwise low hp) opponent who just happens to have an AC of 0 or lower, and you also have the same amount of armor, then yes, you are likely to luck out before he does. But it would just be that: Lucking out. And for the specific example, the random damage roll is going to be much more relevant than the to-hit roll in such scenarios.
The correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back, you're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).
Though this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly, along with other factors like low hit points compared to the possible damage output of opponents. Contrast that with high levels, where characters have an hp buffer that allows them to survive multiple attacks, and they hit much better, but the AC of their opponents hasn't changed all that much. The result of this is that high level games feel radically different than low level games, which is one of the things that makes old school D&D compelling. It's not just the same game, but with bigger numbers.
But at least you're thinking through the consequences now.
Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
In a different scenario with higher-level characters but the same sort of to-hit probabilities: That is basically the only scenario in D&D derivative games where such a small bonus becomes a discernable advantage: Because, as I explained earlier, it is only relevant when you roll a lot - and with high hit point totals that becomes feasible because characters no longer go down in one or two hits.
Except you do roll a lot. We're talking about to-hit rolls and saves. These are things that happen every game, not once or twice over the lifespan of a character. So tripling how often a fairly common subset of those results occurs can have a very large effect, when they cluster at the right end of the spectrum. Which they do, in the examples I've provided.
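To put numbers on the compounding, here's a minimal sketch assuming independent attack rolls at 5% (natural 20 only) versus 15% (18+) per round:

def at_least_one_hit(p, rounds):
    """Chance of landing at least one hit over a number of rounds."""
    return 1 - (1 - p) ** rounds

for rounds in (1, 5, 10, 20):
    base = at_least_one_hit(0.05, rounds)    # natural 20 only
    better = at_least_one_hit(0.15, rounds)  # 18+ with the +2
    print(f"{rounds:2d} rounds: {base:.0%} vs {better:.0%}")

# Over 10 rounds: ~40% vs ~80% chance of at least one hit, and expected
# hits (hence damage over time) are tripled throughout.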
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
OK, now that we've established that we're just talking past each other and quibbling about statistical anomalies (or whatever term would fit without turning this into a protracted semantic discussion) at the lower or higher ends of the spectrum: what would be a way to handle rolls and ability increments that would make them feel more relevant, while still allowing some room for growth? Or is that even a possibility in d20+Mod mechanic systems (or other systems, for that matter)?
The weird corner cases when rolling a d20 are because it's a flat distribution, with an equal chance to roll each number. This particular issue would be reduced if the core dice mechanic generated results that approximate a bell curve (the normal distribution). Switching to 3d6, for instance, would dramatically lessen this one problem, though it does create a new set of problems (among them narrowing the range of opponents a party can successfully face, which causes all kinds of balance issues).
The other way to deal with it is to constrain the range. This could be explicit (autohit on 18-20), or by working the underlying math so that (for example) fighters always hit about 70% of the time, or skill checks tend to succeed that frequently (not uncommon in other games, as I mentioned earlier). That way, you avoid the hockey stick at the end of the spectrum. But again it's a trade-off, because you also lose things like the radically different feel between low and high level games in old school D&D.
I think the flat d20 range is a feature of D&D, but it's useful to be aware of the consequences.
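A rough Python illustration of the flat-versus-bell-curve difference (the target numbers here are arbitrary):

from itertools import product

def p_success(target, dice, sides):
    """Chance that the dice total meets or beats the target."""
    rolls = list(product(range(1, sides + 1), repeat=dice))
    return sum(sum(r) >= target for r in rolls) / len(rolls)

for target in (8, 11, 14, 17):
    d20_gain = p_success(target - 2, 1, 20) - p_success(target, 1, 20)
    d6_gain = p_success(target - 2, 3, 6) - p_success(target, 3, 6)
    print(f"target {target:2d}: +2 worth {d20_gain:.0%} on d20, {d6_gain:.1%} on 3d6")

# d20: a flat 10 points everywhere in range. 3d6: ~24 points near the
# middle, shrinking to a few points out in the tails -- the bell curve
# is what softens the corner cases described above.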
Quote from: Zalman on February 20, 2022, 10:55:54 AM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
Cuz it seems to me that simply making modifiers larger (for example) would just create its own share of problems, like limiting room for growth or creating its own statistical anomalies, where getting any bonus would make average difficulty tasks trivial and near-automatic success, but not getting a bonus would make any (or most) high difficulty task a whiff fest.
Yes, it seems you'd likely run into one or the other situation, depending on whether or not you also adjusted target numbers up.
Making ability score modifiers larger would certainly emphasize abilities more. Dark Sun did something like this -- but only after removing most other modifiers and scaling the environmental obstacles as well.
Dark Sun basically made ability scores slightly higher* with the intent of simulating a tough world where survival was difficult. But the change only made PCs tougher, which arguably had the opposite result.
But the issue is that simply making attributes higher won't solve the problem, because no matter how high you make them, an Attribute bonus alone will never be comparable to an Attribute + Skill bonus (or Combat Bonus, Save Bonus, THAC0, Proficiency Bonus, or whatever, depending on the edition) in terms of total numbers. Yet the way the system is structured (particularly for 3e+) assumes that the Skill (or whatever) bonus is part of the total range of modifiers that will go into the roll, so the scale for rolls (in terms of Difficulty, as well as max modifiers attributes can reach vs skill/combat/whatever modifiers) is built around that assumption. So if you're making a raw attribute roll (such as Strength vs Strength, or a basic STR check to break or lift things), you only get the tiny bonus the attribute can grant, because the system assumes that tiny bonus normally goes on top of whatever skill/combat/game stat you're actually rolling, so it needs to be kept purposefully low (it's literally a "bonus" or extra thing, not the complete thing itself).
For most tasks this is OK, because most tasks have SOME type of relevant ability BEYOND the attribute tied to it (such as a Skill or Combat ability). So if you don't have the ability you simply get no bonus, and that's on you, cuz you're making a genuine "Untrained" check for something that does have an ability--you simply haven't trained in it. But with STR vs STR, or raw STR checks (bend bars, lift gates, break stuff), the attribute itself IS the "skill". It isn't that you lack the correct ability, but that it doesn't/shouldn't exist, cuz it's an intrinsic element of the attribute. But the system's "scale" (as explained in the above paragraph) is still built around the assumption that the Difficulty/Max Modifier range is Attribute + Skill (or whatever). So a STR 15 character only has a measly +2 bonus against a STR 10 (+0) character in a STR vs STR roll, when in reality STR 15 should OWN (or at least have a significant edge over) STR 10.
The only way around this is:
- Drop Skills (whatever) and base everything around Attributes.
- Drop Attributes and base everything around Skills.
- Keep both, but fold STR** and CON** into a single "Physical Power" attribute (call it Might or Toughness, or whatever), then turn STR and CON into skills based on that attribute.
- Keep both, but make an exception for Raw Attribute checks that genuinely have NO "Skill"(or whatever) and increase their bonuses for those checks (only), perhaps by doubling or even tripling them.
Out of these options, the least intrusive for D&D would be the last one, since the rest would take a system overhaul, while saying "Raw STR checks are x2 (or x3) your Mod" basically takes typing out the sentence I just wrote (the sketch after the footnotes puts rough numbers on it).
*5d4 (5-20) instead of 3d6 (3-18); effectively a +2 bonus to all scores
**The only two attributes IMO that could truly be used "raw" and/or that have limited skills (or whatever) associated with them.
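And here is a minimal sketch of what that last option might do to the odds, again assuming opposed d20 rolls with ties rerolled (an illustrative convention, not any edition's rules-as-written):

from itertools import product

def win_chance(edge, die=20):
    """Chance the side with the higher modifier wins an opposed roll."""
    wins = ties = 0
    for a, b in product(range(1, die + 1), repeat=2):
        if a + edge > b:
            wins += 1
        elif a + edge == b:
            ties += 1
    return wins / (die * die - ties)  # ties rerolled

# STR 15 vs STR 10 is a +2 edge; double or triple it for raw checks:
for mult in (1, 2, 3):
    print(f"x{mult} mod: strongman wins {win_chance(2 * mult):.0%} of the time")

# x1: ~60%, x2: ~69%, x3: ~76% -- tripling gets the strongman much
# closer to "owning" the schlub without touching anything else.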
Quote from: Pat on February 20, 2022, 02:32:53 PM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
OK, now that we've established that we're just talking past each other and quibbling about statistical anomalies (or whatever term would fit without turning this into a protracted semantic discussion) at the lower or higher ends of the spectrum: what would be a way to handle rolls and ability increments that would make them feel more relevant, while still allowing some room for growth? Or is that even a possibility in d20+Mod mechanic systems (or other systems, for that matter)?
The weird corner cases when rolling a d20 are because it's a flat distribution, with an equal chance to roll each number. This particular issue would be reduced if the core dice mechanic generated results that approximate a bell curve (the normal distribution). Switching to 3d6, for instance, would dramatically lessen this one problem, though it does create a new set of problems (among them narrowing the range of opponents a party can successfully face, which causes all kinds of balance issues).
The other way to deal with it is to constrain the range. This could be explicit (autohit on 18-20), or by working the underlying math so that (for example) fighters always hit about 70% of the time, or skill checks tend to succeed that frequently (not uncommon in other games, as I mentioned earlier). That way, you avoid the hockey stick at the end of the spectrum. But again it's a trade-off, because you also lose things like the radically different feel between low and high level games in old school D&D.
I think the flat d20 range is a feature of D&D, but it's useful to be aware of the consequences.
Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though, it would suck less). That's where my STR vs STR tangent in the last two posts comes from.
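For what it's worth, here's a minimal sketch of that comparison, assuming opposed rolls with ties rerolled (an assumed convention, since systems differ):

from itertools import product

def opposed(edge, outcomes):
    """Win chance for the side with the modifier edge, ties rerolled."""
    wins = ties = 0
    for a, b in product(outcomes, repeat=2):
        if a + edge > b:
            wins += 1
        elif a + edge == b:
            ties += 1
    return wins / (len(outcomes) ** 2 - ties)

d20 = list(range(1, 21))
three_d6 = [a + b + c for a, b, c in product(range(1, 7), repeat=3)]

print(f"+2 edge, opposed d20: {opposed(2, d20):.0%}")       # ~60%
print(f"+2 edge, opposed 3d6: {opposed(2, three_d6):.0%}")  # ~70%

# Better on 3d6, but still nowhere near the near-lock you'd expect
# from STR 15 against STR 10 -- "it would suck less" about sums it up.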
Quote from: Wulfhelm on February 20, 2022, 03:53:02 AM
Quote from: VisionStorm on February 19, 2022, 08:51:32 PMSomewhere between "A +2 bonus could snatch you from the jaws of death!" and "a +2 bonus is completely and utterly irrelevant" the truth lies.
Yes. It lies with "a +2 bonus is mostly irrelevant". Which is what I'm saying. Confronted with a situation where a d20 roll can snatch you(r character, presumably) from the jaws of death, in 90% of all such cases a +2 bonus won't have helped.
This is completely a tangent now, but one problem with this is that a lot of D20-based systems try and sell a +2 bonus as some major difference in character competence; e.g. the proficiency bonus difference between a 1st-level and a 10th-level character in 5E or the difference between an average Str 10 schlub and the Str 15 village strongman in 3.x.
That is just as related to the randomness of D20- or, more generally, dice-based resolution as to the specific range, of course. If you say "you need to beat DC 13 to lift the gate", the Str 10 schlub is obviously 80% as likely to do it as the village strongman ("... lift with the legs, Sir Rogar!")
If OTOH you say "you need Str 13+ to lift the gate", or possibly "you need [rolls 1D6+10] Str 13+ to lift the gate", you have a different situation and the problem vanishes. Of course the latter approach, if systematized, also has its own randomness problems.
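Spelling out the quoted gate example in a few lines of Python (DC and modifiers as given above):

dc, die = 13, 20
for name, mod in (("Str 10 schlub", 0), ("Str 15 strongman", 2)):
    needed = dc - mod                 # minimum raw roll required
    p = (die - needed + 1) / die
    print(f"{name}: needs {needed}+, lifts the gate {p:.0%} of the time")

# 40% vs 50% -- the schlub really is 80% as likely to manage it.
# A hard "you need Str 13+" threshold makes the strongman's edge
# absolute instead, at the cost of removing the roll entirely.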
Quote from: Pat on February 20, 2022, 02:17:16 PMThe correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back,
I've played 3rd edition to death, I know you can, if you really try, rig the combat system (you've given up on making your point, whatever it's supposed to be, for anything outside combat, right?) to produce such "only on a 20" scenarios, and I also know that that is just one of several reasons why it's a shit system.
Quoteyou're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).
I am indeed mostly unfamiliar with old school D&D in actual play, because who could stand playing that for long, but what you're saying here just makes my point: It's a random system where you just die a lot. Being "whiffy" and being deadly (which the ratio of damage to hp inevitably means) at the same time means: You die a lot due to random chance.
QuoteThough this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly,
It doesn't make it desperate, it makes you dead. It just doesn't make your piddly +2 bonuses any more relevant, because you're just gonna die before you ever benefit from them.
Quote from: VisionStorm on February 20, 2022, 02:53:33 PM
Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though, it would suck less). That's where my STR vs STR tangent in the last two posts comes from.
I've crunched some numbers on that myself, and found that "bell curve" distributions, aka multiple dice, are less relevant (for making modifiers matter) than a lower die roll range. In the case I examined, it was 1d10 vs 2d6. A single, "flat" die with a lower range is actually going to make modifiers more relevant than a multi-dice roll with a higher range.
One solution I once suggested to someone who thought (as demonstrated, with good reason) that 5E was too random was: Just use a d10 instead of a d20, and reduce all TNs etc. by 5.
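A quick sketch along those lines, assuming roll-equal-or-over-target resolution (the targets are arbitrary):

from itertools import product

def p(target, outcomes):
    """Chance a roll from the outcome list meets or beats the target."""
    return sum(r >= target for r in outcomes) / len(outcomes)

d10 = list(range(1, 11))
two_d6 = [a + b for a, b in product(range(1, 7), repeat=2)]

for t in (4, 6, 8, 10):
    gain_d10 = p(t - 2, d10) - p(t, d10)
    gain_2d6 = p(t - 2, two_d6) - p(t, two_d6)
    print(f"target {t:2d}: +2 worth {gain_d10:.0%} on d10, {gain_2d6:.0%} on 2d6")

# d10: a flat 20 points everywhere in range; 2d6: anywhere from ~8 to
# ~31 points depending on the target. The low flat die gives modifiers
# a large, uniform weight, which matches the post above.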
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
It's only on STR vs STR rolls where we get into this issue where STR is the ONLY obvious ability that would fit the roll, and a +2 bonus is supposed to represent "high" STR, which SHOULD be significant for that roll and that type of roll alone*, but barely gives you any advantage, cuz you only get your Attribute bonus alone, while the system is built under the assumption that Attribute bonuses are only an extra you add on top of other stuff (such as a Skill or Combat Bonus) to supplement it. And attribute bonus ranges reflect that assumption. So proper STR vs STR rolls fall apart, cuz pure raw attribute bonuses are built on a scale that assumes you always get another modifier and your attribute bonus is just an extra you add on top. Except that a STR vs STR test is a standalone roll where STR is the only relevant ability and every point of difference should give you a significant edge over someone who doesn't have a bonus, yet it doesn't.
It is not only STR vs STR rolls where this happens, but it is one of the clearest, most obvious cases where it is completely out of whack with reality. In many of the other circumstances, you can talk yourself into it not being out of whack for a particular test, or even invent the relevant "skill" to sit on top of the attribute, at least in isolation. It's only when looking at all the skills as a group that the other cases begin to stand out. It's because a game model can't handle the complexity of how ability works.
Take languages, literacy, and other nuances of communication for example. Not looking too hard at what is modeled, you can sort of slip by with a Persuade skill or the like, maybe with some GM adjudication for whether you have the language. But in real life, "persuasion" is a heck of a lot more like an arm wrestling contest in modeling terms than it is some of the other skills. It's very much done on a curve--maybe not as steep as a bell curve, and maybe not exactly as the curve of a d20 + attribute + skill versus a similar construct (instead of a static DC), but closer to one of those than what a d20 variance gives. We just smooth it out in our minds and assume that some people and situations are a lot harder than others, and live with the model mismatch. It's harder to do that with arm wrestling. So that is one of the first cases that arises in that discussion.
Swimming is another physical skill that can provoke that. Some games actually address it: Instead of swimming in too much armor giving a penalty or the like, they simply say it can't be done. That constrains the rolls back to a model where they can kind of fit. Arguably, that's an answer for some STR vs STR tests, too. Beyond a certain difference, the higher STR simply wins.
For any game, the game model must make compromises to keep the game playable. It's inescapable. Thus the art is in zeroing in on the part of the subject matter most relevant to the game and making the model a good fit there, and then isolating the outliers with whatever means are necessary. This is why generic universal systems aren't.
Quote from: Wulfhelm on February 20, 2022, 05:11:59 PM
Quote from: Pat on February 20, 2022, 02:17:16 PMThe correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back,
I've played 3rd edition to death, I know you can, if you really try, rig the combat system (you've given up on making your point, whatever it's supposed to be, for anything outside combat, right?) to produce such "only on a 20" scenarios, and I also know that that is just one of several reasons why it's a shit system.
You don't even have to try. It just sort of happens at mid to high levels, and by epic levels you have to fight against the system to stop it.
Quote from: Wulfhelm on February 20, 2022, 05:11:59 PM
Quoteyou're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).
I am indeed mostly unfamiliar with old school D&D in actual play, because who could stand playing that for long, but what you're saying here just makes my point: It's a random system where you just die a lot. Being "whiffy" and being deadly (which the ratio of damage to hp inevitably means) at the same time means: You die a lot due to random chance.
QuoteThough this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly,
It doesn't make it desperate, it makes you dead. It just doesn't make your piddly +2 bonuses any more relevant, because you're just gonna die before you ever benefit from them.
Oh, you're one of those. The kind of people who think your personal preferences and limited experiences are a universal truth, and anybody who has different experiences or likes different things is objectively wrong.
Quote from: Wulfhelm on February 19, 2022, 01:25:29 PM
Where did you see a question in his posting? (<- That was a question.)
Well, the question in the subject line is "Why quantify the average?" and that renders literally everything you're saying moot. But you were responding to someone specifically, and I already pointed out what the issue was and how you were ignoring it. And you full well know that. Well enough to have clipped that part out when quoting me, anyway.
QuoteThen the actual save-or-die roll becomes so rare that a +2 bonus on it is not likely to ever matter over the course of an entire campaign.
Yes. That was kind of my point. You're hopped up on just one method of analysis that's causing you to miss the big picture. You're erroneously assuming the question of how much the +2 adjustment matters in a d20 system hinges on the one isolated save itself, and not the full context surrounding it.
It's not like we just show up every week and the DM is like, "Okay, save or die, everybody. You need a 4 or better. Hope you've got a +2 adjustment." There's a whole sequence of events that lead up to the save, where the +2 adjustment will likely apply to multiple rolls. The chances of going from point A to point B without that +2 adjustment making a difference on any of the rolls become vanishingly small. I cut your 90% down to around half in just one single round of combat.
QuoteIt is actually very simple, and remains so: Such a small bonus is irrelevant for 90% of all single rolls. To become statistically relevant, it needs to come into play very often. If a severe in-game consequence (e.g. character death) is tied to it, and the roll comes into play very often, then even in extreme cases, but most definitely in more typical ones, said severe consequence is likely to happen sooner rather than later.
Okay. So I follow where you've fallen back from "+2 doesn't matter 90% of the time" to "+2 doesn't matter 90% of the time for any single die roll." But when did you smuggle in the assumption that if we analyze multiple dice rolls, all these dice rolls have to be equally, homogeneously life-or-death checks? In the snake example I used, the additional rolls were opportunities to head off ever having to face the life-or-death roll. This doesn't conform to your assumption, and as a consequence the conclusion is the exact opposite of what you're saying. The more additional rolls where the +2 has an opportunity to matter, the LESS likely it is to result in ultimate death.
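A minimal sketch of that arithmetic, assuming (per the single-roll figure above) each independent roll gives the +2 a 10% chance to change the outcome:

for n in (1, 3, 7, 14):
    never = 0.9 ** n   # +2 changes nothing on any of the n rolls
    print(f"{n:2d} rolls: the +2 never matters in {never:.0%} of chains")

# 1 roll: 90%. 7 rolls: ~48% -- roughly the "cut to around half in a
# single round" figure above. And when the early rolls can head off the
# save-or-die entirely, more rolls mean the bonus is more likely to
# have already saved you before the death roll ever comes up.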
Quote from: Wulfhelm on February 20, 2022, 05:18:26 PM
Quote from: VisionStorm on February 20, 2022, 02:53:33 PM
Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though, it would suck less). That's where my STR vs STR tangent in the last two posts comes from.
I've crunched some numbers on that myself, and found that "bell curve" distributions, aka multiple dice, are less relevant (for making modifiers matter) than a lower die roll range. In the case I examined, it was 1d10 vs 2d6. A single, "flat" die with a lower range is actually going to make modifiers more relevant than a multi-dice roll with a higher range.
One solution I once suggested to someone who thought (as demonstrated, with good reason) that 5E was too random was: Just use a d10 instead of a d20, and reduce all TNs etc. by 5.
That sounds like an interesting approach, though I wonder if it would take things too far in the opposite direction and make modifiers too significant. If I was going to try something like that, I'd probably go 2d6 instead of 1d10 and worry about the math later, just cuz I like d6s more than d10s for esthetic reasons. ;D
Quote from: Steven Mitchell on February 20, 2022, 05:30:10 PM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
It's only on STR vs STR rolls where we get into this issue where STR is the ONLY obvious ability that would fit the roll, and a +2 bonus is supposed to represent "high" STR, which SHOULD be significant for that roll and that type of roll alone*, but barely gives you any advantage, cuz you only get your Attribute bonus alone, while the system is built under the assumption that Attribute bonuses are only an extra you add on top of other stuff (such as a Skill or Combat Bonus) to supplement it. And attribute bonus ranges reflect that assumption. So proper STR vs STR rolls fall apart, cuz pure raw attribute bonuses are built on a scale that assumes you always get another modifier and your attribute bonus is just an extra you add on top. Except that a STR vs STR test is a standalone roll where STR is the only relevant ability and every point of difference should give you a significant edge over someone who doesn't have a bonus, yet it doesn't.
It is not only STR vs STR rolls where this happens, but it is one of the clearest, most obvious cases where it is completely out of whack with reality. In many of the other circumstances, you can talk yourself into it not being out of whack for a particular test, or even invent the relevant "skill" to sit on top of the attribute, at least in isolation. It's only when looking at all the skills as a group that the other cases begin to stand out. It's because a game model can't handle the complexity of how ability works.
Take languages, literacy, and other nuances of communication for example. Not looking too hard at what is modeled, you can sort of slip by with a Persuade skill or the like, maybe with some GM adjudication for whether you have the language. But in real life, "persuasion" is a heck of a lot more like an arm wrestling contest in modeling terms than it is some of the other skills. It's very much done on a curve--maybe not as steep as a bell curve, and maybe not exactly as the curve of a d20 + attribute + skill versus a similar construct (instead of a static DC), but closer to one of those than what a d20 variance gives. We just smooth it out in our minds and assume that some people and situations are a lot harder than others, and live with the model mismatch. It's harder to do that with arm wrestling. So that is one of the first cases that arises in that discussion.
Swimming is another physical skill that can provoke that. Some games actually address it: Instead of swimming in too much armor giving a penalty or the like, they simply say it can't be done. That constrains the rolls back to a model where they can kind of fit. Arguably, that's an answer for some STR vs STR tests, too. Beyond a certain difference, the higher STR simply wins.
For any game, the game model must make compromises to keep the game playable. It's inescapable. Thus the art is in zeroing in on the part of the subject matter most relevant to the game and making the model a good fit there, and then isolating the outliers with whatever means are necessary. This is why generic universal systems aren't.
Yeah, I will concede that skill rolls are not always perfect, but they tend to be close enough in the vast majority of situations for purposes of "It's a game!" to work with them, even if I have to squint sometimes to see it. Cuz as you mentioned, game rules are models where compromises have to be made, and they're inescapable. So I don't worry about ability rolls being perfect, but just close enough to model what we're trying to illustrate in the game. I can always add exceptions later, or guidelines for when a skill check can't be made or isn't necessary, etc.
Like for example, if characters know a language I don't force them to make a skill check unless communicating complex concepts or they're speaking with someone with a different dialect or something. Just having X total modifier in the language is enough to skip the roll, unless something unusual happens.