
Attributes - why quantify the average?

Started by Fighterboy, February 15, 2022, 03:33:56 PM


Zalman

Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
Cuz it seems to me that simply making modifiers larger (for example) would just create its own share of problems, like limiting room for growth or creating its own statistical anomalies, where getting any bonus would make average difficulty tasks trivial and near-automatic success, but not getting a bonus would make any (or most) high difficulty task a whiff fest.

Yes, it seems you'd likely run into one or the other situation, depending on whether or not you also adjusted target numbers up.

Making ability score modifiers larger would certainly emphasize abilities more. Dark Sun did something like this -- but only after removing most other modifiers and scaling the environmental obstacles as well.
Old School? Back in my day we just called it "School."

Pat

Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
Quote from: Pat on February 20, 2022, 04:44:43 AM
That's not what we've been discussing. My entire point, from the very start, is that the effect of a +2 bonus depends on where you are in the d20 range.
... in response to my assertion that in 90% of all actual rolls, a +2 bonus is irrelevant, which is a simple fact unaffected by where you are in the d20 range...
Exactly. This is rooted in your failure to understand the difference between relative and absolute frequency, and that small changes in relative frequency can, over many rolls, have dramatically outsize effects. You're basing this "defense" (which isn't one) on a single roll, when that's not what I've been talking about at all.

Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
Quote
For instance, let's say you're a 1st level fighter in AD&D. You're fighting an opponent with reasonable armor, so you need to roll a 20 to hit. But let's say you get a +2 bonus to hit for some reason. That means you now hit on an 18, 19, or 20. Three times as often. Which means a +2 bonus results in a threefold increase in damage, over time, against opponents with that AC.

a.) It also means that even with the bonus, in 85% of all combat rounds you miss. So what you have here is either a boring fight with an extremely high whiff factor, especially if "I attack" is the only sensible option for your 1st level fighter or...
b.) ... you're fighting an opponent who severely outclasses you and who will very likely kill you before your +2 bonus ever becomes relevant.
c.) If, on the other hand, you are fighting another 1st level (or otherwise low hp) opponent who just happens to have an AC of 0 or lower, and you also have the same amount of armor, then yes, you are likely to luck out before he does. But it would just be that: Lucking out. And for the specific example, the random damage roll is going to be much more relevant than the to-hit roll in such scenarios.
The correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back, you're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).

Though this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly, along with other factors like low hit points compared to the possible damage output of opponents. Contrast that with high levels, where characters have an hp buffer that allows them to survive multiple attacks, and hit much better, but the AC of their opponents hasn't changed all that much. The result of this is that high level games feel radically different than low level games, which is one of the things that makes old school D&D compelling. It's not just the same game, but with bigger numbers.

But at least you're thinking through the consequences now.

Quote from: Wulfhelm on February 20, 2022, 06:20:14 AM
In a different scenario with higher-level characters but the same sort of to-hit probabilities: That is basically the only scenario in D&D-derivative games where such a small bonus becomes a discernible advantage: Because, as I explained earlier, it is only relevant when you roll a lot - and with high hit point totals that becomes feasible because characters no longer go down in one or two hits.
Except you do roll a lot. We're talking about to-hit rolls and saves. These are things that happen every game, not once or twice over the lifespan of a character. So tripling how often a fairly common subset of those results occurs can have a very large effect, when they cluster at the right end of the spectrum. Which they do, in the examples I've provided.
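To put rough numbers on that, here's a minimal sketch (Python; the 100 attack rolls are just an illustrative figure):

```python
# Expected hits when only a natural 20 connects vs. hitting on 18+,
# over an illustrative run of 100 attack rolls.
rounds = 100

p_no_bonus = 1 / 20   # hit only on a 20: 5% per attack
p_plus_two = 3 / 20   # hit on 18-20: 15% per attack

print(f"Expected hits, no bonus: {rounds * p_no_bonus:.0f}")    # 5
print(f"Expected hits, +2 bonus: {rounds * p_plus_two:.0f}")    # 15
print(f"Long-run damage ratio: {p_plus_two / p_no_bonus:.0f}x") # 3x
```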


Pat

Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
OK, now that we've established that we're just talking past each other and quibbling about statistical anomalies (or whatever term would fit without turning this into a protracted semantic discussion) at the lower or higher ends of the spectrum, what would be a way to handle rolls and ability increments that would make them feel more relevant, while still allowing some room for growth? Or is that even a possibility in d20+Mod mechanic systems (or other systems for that matter)?
The weird corner cases when rolling a d20 arise because it's a flat distribution, with an equal chance to roll each number. This particular issue would be reduced if the core dice mechanic generated results that approximate a bell curve (the normal distribution). Switching to 3d6, for instance, would dramatically lessen this one problem, though it does create a new set (among them narrowing the range of opponents a party can successfully face, which causes all kinds of balance issues).

The other way to deal with it is to constrain the range. This could be explicit (autohit on 18-20), or done by working the underlying math so (for example) fighters always hit about 70% of the time, or skill checks tend to succeed that frequently (not uncommon in other games, as I mentioned earlier). That way, you avoid the hockey stick at the end of the spectrum. But again it's a trade-off, because you also lose things like the radically different feel between low and high level games in old school D&D.

I think the flat d20 range is a feature of D&D, but it's useful to be aware of the consequences.
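For a concrete look at the flat-versus-curved difference, here's a short sketch (Python; the target numbers are arbitrary picks) of how many percentage points a +2 is worth on 1d20 versus 3d6:

```python
from itertools import product

def p_at_least(target, sides, n_dice):
    """Chance that the sum of n_dice dice meets or beats the target."""
    totals = [sum(r) for r in product(range(1, sides + 1), repeat=n_dice)]
    return sum(t >= target for t in totals) / len(totals)

for tn in (8, 11, 14, 17):
    gain_d20 = p_at_least(tn - 2, 20, 1) - p_at_least(tn, 20, 1)
    gain_3d6 = p_at_least(tn - 2, 6, 3) - p_at_least(tn, 6, 3)
    print(f"TN {tn}: +2 adds {gain_d20:.1%} on 1d20, {gain_3d6:.1%} on 3d6")

# On 1d20 the bonus is a flat 10 points everywhere; on 3d6 it's worth
# ~24 points near the middle but only ~7 points out at TN 17.
```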



VisionStorm

Quote from: Zalman on February 20, 2022, 10:55:54 AM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
Cuz it seems to me that simply making modifiers larger (for example) would just create its own share of problems, like limiting room for growth or creating its own statistical anomalies, where getting any bonus would make average difficulty tasks trivial and near-automatic success, but not getting a bonus would make any (or most) high difficulty task a whiff fest.

Yes, it seems you'd likely run into one or the other situation, depending on whether or not you also adjusted target numbers up.

Making ability score modifiers larger would certainly emphasize abilities more. Dark Sun did something like this -- but only after removing most other modifiers and scaling the environmental obstacles as well.

Dark Sun basically made ability scores slightly higher* with the intent of simulating a tough world where survival was difficult. But the change only made PCs tougher, which arguably had the opposite result.

But the issue is that simply making attributes higher won't solve the problem because no matter how high you make them, an Attribute bonus alone will never be comparable to an Attribute + Skill bonus (or Combat Bonus, Save Bonus, THAC0, Proficiency Bonus, or whatever depending on the edition) in terms of total numbers. Yet the way that the system is structured (particularly for 3e+) assumes that the Skill (or whatever) bonus is part of the total range of modifiers that will go into the roll, so the scale for rolls (in terms of Difficulty, as well as max modifiers attributes can reach vs skill/combat/whatever modifiers) is built around that assumption. So if you're making a raw attribute roll (such as Strength vs Strength, or a basic STR check to break or lift things) you will only get the tiny bonus that attribute can get, because the system assumes that tiny bonus normally goes on top of whatever skill/combat/game stat you're actually rolling, so it needs to be kept purposefully low (it's literally a "bonus" or extra thing, not the complete thing itself).

For most tasks this is OK, because most tasks have SOME type of relevant ability BEYOND the attribute tied to it (such as a Skill or Combat ability). So if you don't have the ability you simply get no bonus, and that's on you, cuz you're making a genuine "Untrained" check for something that does have an ability--you simply haven't trained in it. But with STR vs STR, or raw STR checks (bend bars, lift gates, break stuff), the attribute itself IS the "skill". It isn't that you lack the correct ability, but that it doesn't/shouldn't exist, cuz it's an intrinsic element of the attribute. But the system's "scale" (as explained in the above paragraph) is still built around the assumption that the Difficulty/Max Modifier range is Attribute + Skill (or whatever). So a STR 15 character only has a measly +2 bonus against a STR 10 (+0) character in a STR vs STR roll, when in reality STR 15 should OWN (or at least have a significant edge over) STR 10.
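To see how slim that edge actually is, here's a quick sketch (Python; assuming a hypothetical opposed-check rule where each side rolls d20 + Attribute mod, higher total wins, ties reroll):

```python
from itertools import product

# STR 15 (+2) vs. STR 10 (+0): enumerate every pair of opposed d20 rolls.
wins = ties = 0
for a, b in product(range(1, 21), repeat=2):
    if a + 2 > b:
        wins += 1
    elif a + 2 == b:
        ties += 1

total = 20 * 20
print(f"STR 15 wins outright: {wins / total:.0%}")                # ~57%
print(f"Win chance, ties rerolled: {wins / (total - ties):.0%}")  # ~60%
```

Roughly a 60/40 edge, which is a long way from "OWNing" the weaker character.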

The only way around this is:


  • Drop Skills (whatever) and base everything around Attributes.
  • Drop Attributes and base everything around Skills.
  • Keep both, but fold STR** and CON** into a single "Physical Power" attribute (call it Might or Toughness, or whatever), then turn STR and CON into skills based on that attribute.
  • Keep both, but make an exception for Raw Attribute checks that genuinely have NO "Skill"(or whatever) and increase their bonuses for those checks (only), perhaps by doubling or even tripling them.
Out of these options, the least intrusive for D&D would be the last one, since the rest would take a system overhaul, while saying "Raw STR checks are x2 (or x3) your Mod" takes basically just typing out that sentence (see the quick sketch below for what it buys you).
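A quick sketch of that last option (Python; same hypothetical opposed-check model as above: d20 + mod per side, higher total wins, ties reroll), with STR 15's +2 multiplied for raw checks:

```python
from itertools import product

def opposed_win_chance(mod):
    """Chance that d20 + mod beats a plain d20, with ties rerolled."""
    wins = ties = 0
    for a, b in product(range(1, 21), repeat=2):
        if a + mod > b:
            wins += 1
        elif a + mod == b:
            ties += 1
    return wins / (400 - ties)

for mult in (1, 2, 3):
    mod = 2 * mult  # STR 15's +2, doubled or tripled on raw checks
    print(f"x{mult} (+{mod}): STR 15 wins {opposed_win_chance(mod):.0%}")
# x1 (+2): 60%, x2 (+4): 69%, x3 (+6): 76%
```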

*5d4 (5-20) instead of 3d6 (3-18); effectively a +2 bonus to all scores
**The only two attributes IMO that could truly be used "raw" and/or that have limited skills (or whatever) associated with them.
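And a trivial check of the first footnote's arithmetic (Python):

```python
# Average ability score under 3d6 vs. Dark Sun's 5d4.
mean_3d6 = 3 * (1 + 6) / 2   # 10.5
mean_5d4 = 5 * (1 + 4) / 2   # 12.5
print(mean_5d4 - mean_3d6)   # 2.0 -- effectively +2 to every score
```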

VisionStorm

Quote from: Pat on February 20, 2022, 02:32:53 PM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
OK, now that we've established that we're just talking past each other and quibbling about statistical anomalies (or whatever term would fit without turning this into a protracted semantic discussion) at the lower or higher ends of the spectrum, what would be a way to handle rolls and ability increments that would make them feel more relevant, while still allowing some room for growth? Or is that even a possibility in d20+Mod mechanic systems (or other systems for that matter)?
The weird corner cases when rolling a d20 arise because it's a flat distribution, with an equal chance to roll each number. This particular issue would be reduced if the core dice mechanic generated results that approximate a bell curve (the normal distribution). Switching to 3d6, for instance, would dramatically lessen this one problem, though it does create a new set (among them narrowing the range of opponents a party can successfully face, which causes all kinds of balance issues).

The other way to deal with it is to constrain the range. This could be explicit (autohit on 18-20), or done by working the underlying math so (for example) fighters always hit about 70% of the time, or skill checks tend to succeed that frequently (not uncommon in other games, as I mentioned earlier). That way, you avoid the hockey stick at the end of the spectrum. But again it's a trade-off, because you also lose things like the radically different feel between low and high level games in old school D&D.

I think the flat d20 range is a feature of D&D, but it's useful to be aware of the consequences.

Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls, having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though it would suck less). That's where my STR vs STR tangent in the last two posts comes from.
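Putting a number on "suck less" (a minimal sketch in Python, again assuming a hypothetical opposed check: 3d6 + mod per side, higher total wins, ties reroll):

```python
from itertools import product

# Opposed 3d6+2 vs. plain 3d6 (STR 15 vs. STR 10), ties rerolled.
rolls = [sum(r) for r in product(range(1, 7), repeat=3)]
wins = sum(1 for a in rolls for b in rolls if a + 2 > b)
ties = sum(1 for a in rolls for b in rolls if a + 2 == b)
print(f"Win chance: {wins / (len(rolls) ** 2 - ties):.0%}")  # ~70%, vs. ~60% on d20
```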

Quote from: Wulfhelm on February 20, 2022, 03:53:02 AM
Quote from: VisionStorm on February 19, 2022, 08:51:32 PM
Somewhere between "A +2 bonus could snatch you from the jaws of death!" and "a +2 bonus is completely and utterly irrelevant" the truth lies.
Yes. It lies with "a +2 bonus is mostly irrelevant". Which is what I'm saying. Confronted with a situation where a d20 roll can snatch you(r character, presumably) from the jaws of death, in 90% of all such cases a +2 bonus won't have helped.

This is completely a tangent now, but one problem with this is that a lot of D20-based systems try and sell a +2 bonus as some major difference in character competence; e.g. the proficiency bonus difference between a 1st-level and a 10th-level character in 5E or the difference between an average Str 10 schlub and the Str 15 village strongman in 3.x.

Of course that is just as related to the randomness of D20- or more generally dice-based resolutions as to the specific range. If you say "you need to beat DC 13 to lift the gate", the Str 10 schlub is obviously 80% as likely to do it as the village strongman ("... lift with the legs, Sir Rogar!")
If OTOH you say "you need Str 13+ to lift the gate" or possibly "you need [rolls 1D6+10] Str 13+ to lift the gate", you have a different situation and the problem vanishes. Of course the latter approach, if systematized, also has its own randomness problems.
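Putting numbers on the gate example (a minimal sketch in Python, assuming d20 + Str modifier against the DC, meet-or-beat):

```python
def success_chance(dc, mod):
    """Chance that d20 + mod meets or beats dc (clamped to 0..1)."""
    return max(0, min(20, 21 - (dc - mod))) / 20

schlub = success_chance(13, 0)      # Str 10: needs a 13+, 40%
strongman = success_chance(13, 2)   # Str 15: needs an 11+, 50%
print(f"{schlub:.0%} vs {strongman:.0%} -> ratio {schlub / strongman:.0%}")
# 40% vs 50% -> ratio 80%
```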

Wulfhelm

Quote from: Pat on February 20, 2022, 02:17:16 PM
The correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back,
I've played 3rd edition to death. I know you can, if you really try, rig the combat system (you've given up on making your point, whatever it's supposed to be, for anything outside combat, right?) to produce such "only on a 20" scenarios, and I also know that that is just one of several reasons why it's a shit system.

Quote
you're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).
I am indeed mostly unfamiliar with old school D&D in actual play, because who could stand playing that for long, but what you're saying here just makes my point: It's a random system where you just die a lot. Being "whiffy" and being deadly (which the ratio of damage to hp inevitably means) at the same time means: You die a lot due to random chance.

Quote
Though this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly,
It doesn't make it desperate, it makes you dead. It just doesn't make your piddly +2 bonuses any more relevant, because you're just gonna die before you ever benefit from them.

Wulfhelm

Quote from: VisionStorm on February 20, 2022, 02:53:33 PM
Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls, having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though it would suck less). That's where my STR vs STR tangent in the last two posts comes from.

I've crunched some numbers on that myself, and found that "bell curve" distributions, aka multiple dice, are less relevant than a lower die roll range. In the case I examined, it was 1d10 vs 2d6. A single, "flat" die with a lower range is actually going to make modifiers more relevant than a multi-dice roll with a higher range.

One solution I once suggested to someone who thought (as demonstrated, with good reason) that 5E was too random was: Just use a d10 instead of a d20, and reduce all TNs etc. by 5.
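For anyone who wants to rerun that comparison, a short sketch (Python; the TNs are arbitrary) of what a +2 is worth on 1d10 versus 2d6:

```python
from itertools import product

def p_at_least(target, outcomes):
    """Chance that a roll drawn from the outcome list meets or beats target."""
    return sum(o >= target for o in outcomes) / len(outcomes)

d10 = list(range(1, 11))
two_d6 = [a + b for a, b in product(range(1, 7), repeat=2)]

for tn in (5, 7, 9, 11):
    gain_d10 = p_at_least(tn - 2, d10) - p_at_least(tn, d10)
    gain_2d6 = p_at_least(tn - 2, two_d6) - p_at_least(tn, two_d6)
    print(f"TN {tn}: +2 is worth {gain_d10:.1%} on 1d10, {gain_2d6:.1%} on 2d6")

# The flat d10 gives a constant 20 points; 2d6 swings from ~14 to ~31
# points depending on where the TN sits relative to the curve's middle.
```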

Steven Mitchell

Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
It's only on STR vs STR rolls where we get into this issue where STR is the ONLY obvious ability that would fit the roll, and a +2 bonus is supposed to represent "high" STR, which SHOULD be significant for that roll and that type of roll alone*, but barely gives you any advantage, cuz you only get your Attribute bonus alone, while the system is built under the assumption that Attribute bonuses are only an extra you add on top of other stuff (such as a Skill or Combat Bonus) to supplement it. And attribute bonus ranges reflect that assumption. So proper STR vs STR rolls fall apart, cuz pure raw attribute bonuses are built on a scale that assumes you always get another modifier and your attribute bonus is just an extra you add on top. Except that a STR vs STR test is a stand-alone roll where STR is the only relevant ability and every point of difference should give you a significant edge over someone who doesn't have a bonus, yet it doesn't.

It is not only STR vs STR rolls where this happens, but it is one of the clearest, most obvious cases where it is completely out of whack with reality.  In many of the other circumstances, you can talk yourself into it not being out of whack for a particular test, or even invent the relevant "skill" to sit on top of the attribute, at least in isolation.  It's only when looking at all the skills as a group that the other cases begin to stand out.  It's because a game model can't handle the complexity of how ability works.

Take languages, literacy, and other nuances of communication for example.  Not looking too hard at what is modeled, you can sort of slip by with a Persuade skill or the like, maybe with some GM adjudication for having a language checked.  But in real life, "persuasion" is a heck of a lot more like an arm wrestling contest in modeling terms than it is some of the other skills.  It's very much done on a curve--maybe not as steep as a bell curve, and maybe not exactly as the curve of a d20 + attribute + skill versus a similar construct (instead of static DC), but closer to one of those than what a d20 variance gives.  We just smooth it out in our minds and assume that some people and situations are a lot harder than others, and live with the model mismatch.  It's harder to do that with arm wrestling.  So that is one of the first cases that arises in that discussion.

Swimming is another physical skill that can provoke that.  Some games actually address it:  Instead of swimming in too much armor giving a penalty or the like, they simply say it can't be done.  That constrains the rolls back to a model where they can kind of fit.  Arguably, that's an answer for some STR vs STR tests, too.  Beyond a certain difference, the higher STR simply wins.

For any game, the game model must make compromises to keep the game playable.  It's inescapable.  Thus the art is in zeroing in on the part of the subject matter most relevant to the game and making the model a good fit there, and then isolating the outliers with whatever means are necessary.  This is why generic universal systems aren't.

Pat

Quote from: Wulfhelm on February 20, 2022, 05:11:59 PM
Quote from: Pat on February 20, 2022, 02:17:16 PM
The correct answer is a). Just as you demonstrated that you're completely unfamiliar with third edition a couple posts back,
I've played 3rd edition to death. I know you can, if you really try, rig the combat system (you've given up on making your point, whatever it's supposed to be, for anything outside combat, right?) to produce such "only on a 20" scenarios, and I also know that that is just one of several reasons why it's a shit system.
You don't even have to try. It just sort of happens at mid to high levels, and by epic levels you have to fight against the system to stop it.

Quote from: Wulfhelm on February 20, 2022, 05:11:59 PM
Quote
you're now demonstrating you're completely unfamiliar with old school D&D, because that's how the game works. At low levels, fighting is very whiffy (I frequently use that exact word to describe the effect).
I am indeed mostly unfamiliar with old school D&D in actual play, because who could stand playing that for long, but what you're saying here just makes my point: It's a random system where you just die a lot. Being "whiffy" and being deadly (which the ratio of damage to hp inevitably means) at the same time means: You die a lot due to random chance.

Quote
Though this isn't a flaw, it's a feature. It makes the game feel very dangerous at low levels, because a single bad roll can make things very desperate. It's an essential part of why low levels are so notoriously deadly,
It doesn't make it desperate, it makes you dead. It just doesn't make your piddly +2 bonuses any more relevant, because you're just gonna die before you ever benefit from them.
Oh, you're one of those. The kind of people who think your personal preferences and limited experiences are a universal truth and anybody who has different experiences or likes different things is objectively wrong.


Lunamancer

Quote from: Wulfhelm on February 19, 2022, 01:25:29 PM
Where did you see a question in his posting? (<- That was a question.)

Well, the question in the subject line is "Why quantify the average?" and that renders literally everything you're saying moot. But you were responding to someone specifically, and I already pointed out what the issue was and how you were ignoring it. And you know that full well. Well enough to have clipped that part out when quoting me, anyway.

Quote
Then the actual save-or-die roll becomes so rare that a +2 bonus on it is not likely to ever matter over the course of an entire campaign.

Yes. That was kind of my point. You're hopped up on just one method of analysis that's causing you to miss the big picture. You're erroneously assuming the question of how much the +2 adjustment matters in a d20 system hinges on the one isolated save itself, and not the full context surrounding it.

It's not like we just show up every week and the DM is like, "Okay, save or die, everybody. You need a 4 or better. Hope you've got a +2 adjustment." There's a whole sequence of events that lead up to the save where the +2 adjustment will likely apply to multiple rolls. The chances of going from point A to point B without that +2 adjustment making a difference on any of the rolls becomes vanishingly small. I cut your 90% down to around half in just one single round of combat.
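A quick back-of-the-envelope for that compounding claim (Python; it assumes, purely for illustration, that each roll independently has a 10% chance of the +2 changing its outcome):

```python
# Chance the +2 matters at least once across n independent d20 rolls,
# given a 10% chance (2 faces in 20) of it mattering on any single roll.
for n in (1, 2, 4, 7, 10):
    p_matters = 1 - 0.9 ** n
    print(f"{n:2d} rolls: {p_matters:.0%}")
# 1 roll: 10%; 7 rolls: ~52% -- the per-roll "90% irrelevant" figure
# erodes quickly once attacks, saves, and checks start stacking up.
```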

Quote
It is actually very simple, and remains so: Such a small bonus is irrelevant for 90% of all single rolls. To become statistically relevant, it needs to come into play very often. If a severe in-game consequence (e.g. character death) is tied to it, and the roll comes into play very often, then even in extreme cases, but most definitely in more typical ones, said severe consequence is likely to happen sooner rather than later.

Okay. So I follow where you've fallen back from "+2 doesn't matter 90% of the time" to "+2 doesn't matter 90% of the time for any single die roll." But when did you smuggle in the assumption that if we analyze multiple dice rolls, all these dice rolls have to be equally, homogeneously, life-or-death checks? In the snake example I used, the additional rolls were opportunities to head off ever having to face the life-or-death roll. This doesn't conform to your assumption, and as a consequence the conclusion is the exact opposite of what you're saying. The more additional rolls where the +2 has an opportunity to matter, the LESS likely it is to result in ultimate death.
That's my two cents anyway. Carry on, crawler.

Tu ne cede malis sed contra audentior ito. ("Do not yield to evils, but go more boldly against them.")

VisionStorm

Quote from: Wulfhelm on February 20, 2022, 05:18:26 PM
Quote from: VisionStorm on February 20, 2022, 02:53:33 PM
Yeah, I've considered using 3d6 instead of a d20 before. However, some of this stuff also depends on what type of roll you're making. For most skill rolls, having only a +2 bonus from an attribute is OK, cuz you're making an unskilled check. But if that +2 bonus goes into a STR vs STR roll, as Wulfhelm mentioned in a reply to me earlier (quoted below), that +2 bonus will still not be much in a 3d6+Mod system (though it would suck less). That's where my STR vs STR tangent in the last two posts comes from.

I've crunched some numbers on that myself, and found that "bell curve" distributions, aka multiple dice, are less relevant than a lower die roll range. In the case I examined, it was 1d10 vs 2d6. A single, "flat" die with a lower range is actually going to make modifiers more relevant than a multi-dice roll with a higher range.

One solution I once suggested to someone who thought (as demonstrated, with good reason) that 5E was too random was: Just use a d10 instead of a d20, and reduce all TNs etc. by 5.

That sounds like an interesting approach, though I wonder if it would take things too far in the opposite direction and make modifiers too significant. If I was going to try something like that, I'd probably go 2d6 instead of 1d10 and worry about the math later, just cuz I like d6s more than d10s for aesthetic reasons. ;D

Quote from: Steven Mitchell on February 20, 2022, 05:30:10 PM
Quote from: VisionStorm on February 20, 2022, 09:18:35 AM
It's only on STR vs STR rolls where we get into this issue where STR is the ONLY obvious ability that would fit the roll, and a +2 bonus is supposed to represent "high" STR, which SHOULD be significant for that roll and that type of roll alone*, but barely gives you any advantage, cuz you only get your Attribute bonus alone, while the system is built under the assumption that Attribute bonuses are only an extra you add on top of other stuff (such as a Skill or Combat Bonus) to supplement it. And attribute bonus ranges reflect that assumption. So proper STR vs STR rolls fall apart, cuz pure raw attribute bonuses are built on a scale that assumes you always get another modifier and your attribute bonus is just an extra you add on top. Except that a STR vs STR test is a stand-alone roll where STR is the only relevant ability and every point of difference should give you a significant edge over someone who doesn't have a bonus, yet it doesn't.

It is not only STR vs STR rolls where this happens, but it is one of the clearest, most obvious cases where it is completely out of whack with reality.  In many of the other circumstances, you can talk yourself into it not being out of whack for a particular test, or even invent the relevant "skill" to sit on top of the attribute, at least in isolation.  It's only when looking at all the skills as a group that the other cases begin to stand out.  It's because a game model can't handle the complexity of how ability works.

Take languages, literacy, and other nuances of communication for example.  Not looking too hard at what is modeled, you can sort of slip by with a Persuade skill or the like, maybe with some GM adjudication for having a language checked.  But in real life, "persuasion" is a heck of a lot more like an arm wrestling contest in modeling terms than it is some of the other skills.  It's very much done on a curve--maybe not as steep as a bell curve, and maybe not exactly as the curve of a d20 + attribute + skill versus a similar construct (instead of static DC), but closer to one of those than what a d20 variance gives.  We just smooth it out in our minds and assume that some people and situations are a lot harder than others, and live with the model mismatch.  It's harder to do that with arm wrestling.  So that is one of the first cases that arises in that discussion.

Swimming is another physical skill that can provoke that.  Some games actually address it:  Instead of swimming in too much armor giving a penalty or the like, they simply say it can't be done.  That constrains the rolls back to a model where they can kind of fit.  Arguably, that's an answer for some STR vs STR tests, too.  Beyond a certain difference, the higher STR simply wins.

For any game, the game model must make compromises to keep the game playable.  It's inescapable.  Thus the art is in zeroing in on the part of the subject matter most relevant to the game and making the model a good fit there, and then isolating the outliers with whatever means are necessary.  This is why generic universal systems aren't.

Yeah, I will concede that skill rolls are not always perfect, but they tend to be close enough in the vast majority of situations, for "It's a game!" purposes, to work with them, even if I have to squint sometimes to see it. Cuz as you mentioned, game rules are models where compromises have to be made, and that's inescapable. So I don't worry about ability rolls being perfect, just close enough to model what we're trying to illustrate in the game. I can always add exceptions later, or guidelines for when a skill check can't be made or isn't necessary, etc.

Like for example, if characters know a language I don't force them to make a skill check unless communicating complex concepts or they're speaking with someone with a different dialect or something. Just having X total modifier in the language is enough to skip the roll, unless something unusual happens.