
Is "roll under %" a disdained mechanic?

Started by Shipyard Locked, February 14, 2014, 12:01:59 PM


Herr Arnulfe

Quote from: Justin Alexander;731798You can very clearly see that the bell curve is delivering consistency akin to a smaller die range, but with nearly the full range of potential outcomes you see on the large die.
So if I'm following your comparative hypotheticals correctly, it sounds like you prefer a wide range of possible outcomes, but with the results clustered more towards the center? How does this improve the "feel" of a game in your opinion?

Quote from: Justin Alexander;731798(And I would argue that focusing on one particular difficulty within that range and analyzing what happens to it under a certain modifier is deceptive unless you approach it with a proper mindset. Yes, it's true: A task that's sitting right in the middle of my "core range" is going to be more affected by favorable circumstances than tasks that are huge longshots. But when we put it like that, it makes sense: That core range represents the cusp between what I can routinely achieve and what I'm incredibly unlikely to achieve. It's going to be tasks sitting on that cusp (and not radically out of my normal skill level) that are going to be most affected by circumstance.)
That's certainly one way of viewing the world and how physics works. At the gaming table I think it would be hard to convince many players that their +5% modifier for using a tripod is actually better than the +12% modifier gained by their average-skilled colleague, because it represents a higher percentage of their base value. Or that a highly skilled sniper shouldn't bother with a tripod because it only gives him a 1% boost anyway.
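For anyone who wants to see the numbers behind this disagreement, here's a quick sketch comparing how a flat bonus shifts the chance of success on a linear d% roll-under test versus a bell-curved 3d6 roll-under test. The skill values, targets and bonus sizes are arbitrary illustrations, not taken from any published system.

```python
from itertools import product

def d100_chance(skill):
    # Linear roll-under: succeed on a roll of 1 to skill.
    return max(0, min(skill, 100)) / 100

def threed6_chance(target):
    # Bell-curved roll-under: succeed if 3d6 totals target or less.
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for r in rolls if sum(r) <= target) / len(rolls)

# A flat bonus (+10 on d%, +2 on 3d6) applied at low, middling and high skill.
for skill in (30, 50, 90):
    print(f"d100 skill {skill}%: {d100_chance(skill):.0%} -> {d100_chance(skill + 10):.0%}")
for target in (8, 10, 14):
    print(f"3d6 target {target}: {threed6_chance(target):.1%} -> {threed6_chance(target + 2):.1%}")
```

On the d% rows the bonus is worth the same ten points of probability everywhere; on the 3d6 rows the same bonus is worth the most near the middle of the curve and much less at either end, which is the effect being argued about.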
 

Herr Arnulfe

Quote from: Bill;731828That looks mechanically sound but ughhhh at low roll good but high roll also kinda good.....:)

"Blackjack" for % opposed tests is a little counterintuitive, and you sometimes have to re-roll contests when both sides fail. But it gets the job done and removes any need for addition/subtraction, so I can see why some designers use it. Keep in mind, RQ6 uses the dreaded "fractional difficulty modifiers", so eliminating additional math from opposed tests was probably a good idea.
 
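For readers who haven't seen the "blackjack" approach, here is a rough sketch of the general idea as described above: both sides roll d100 under their skill, the higher successful roll wins, and you re-roll if both fail. This is a paraphrase for illustration, not the exact RQ6 procedure, and the skill values are made up.

```python
import random

def blackjack_opposed(skill_a, skill_b):
    """Roll-under opposed test: higher successful roll wins; re-roll if both fail."""
    while True:
        roll_a, roll_b = random.randint(1, 100), random.randint(1, 100)
        a_ok, b_ok = roll_a <= skill_a, roll_b <= skill_b
        if a_ok and b_ok:
            return "A" if roll_a > roll_b else "B" if roll_b > roll_a else "tie"
        if a_ok:
            return "A"
        if b_ok:
            return "B"
        # Both failed: no addition or subtraction needed, just roll again.

print(blackjack_opposed(65, 40))
```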

Bill

Quote from: Herr Arnulfe;731831"Blackjack" for % opposed tests is a little counterintuitive, and you sometimes have to re-roll contests when both sides fail. But it gets the job done and removes any need for addition/subtraction, so I can see why some designers use it. Keep in mind, RQ6 uses the dreaded "fractional difficulty modifiers", so eliminating additional math from opposed tests was probably a good idea.

I have not played RQ6.

Is 'fractional difficulty mods' the same as what I do as a house rule for Elric and COC: Checking if you made the roll by 1/2 or 1/10th?

The way I do it, you don't need any math; just note the skill on the character sheet as 80-40-8 for the half and tenth values.

Regardless, can't you just put it on the character sheet?

Herr Arnulfe

Quote from: Bill;731832Is 'fractional difficulty mods' the same as what I do as a house rule for Elric and COC: Checking if you made the roll by 1/2 or 1/10th?
For example, if you're rolling a challenging task it's either 75%, 50% or 10% of your base skill IIRC. I might have the exact percentages wrong, just going from memory.
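Whichever fractions a game uses (1/2 and 1/10, or 75/50/10% grades), the arithmetic only has to be done once, off-table. A trivial sketch of pre-computing a character-sheet row; the grade names and multipliers here are placeholders, not RQ6's actual table.

```python
GRADES = {"standard": 1.00, "hard": 0.75, "very hard": 0.50, "desperate": 0.10}

def difficulty_row(skill):
    # Pre-compute one row for the character sheet so no math is needed in play.
    return {name: round(skill * mult) for name, mult in GRADES.items()}

print(difficulty_row(80))   # {'standard': 80, 'hard': 60, 'very hard': 40, 'desperate': 8}
```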
 

deadDMwalking

Generally speaking, I find that people who advocate d% roll-under do so mostly out of 'tradition'.  It's a pretty well-established system, but personally it doesn't do much for me.  

Personally, I think making everything a static DC of 100 and adding your skill rank works just fine.  

Example: Your skill is 40.  On d% roll under, you succeed with a roll of 1-40 (assuming you must roll your skill or lower).  If you take d% and add your modifier against a DC of 100 you succeed on a roll of 60 or better.  

Ultimately, the 'roll under' is fine, unless/until you care about degree of success.  If your attribute is a 62 and your roll is a 39, a lot of people have trouble quickly determining the difference (23).  On a roll over system you roll a 61 (61+62=123).  Subtracting 100 is really easy, so your degrees of success are easier to calculate (dropping the 100s place doesn't even require actual subtraction).  

Since most people are better at adding than subtracting double-digit numbers, the system works really well.  It also makes it very easy to see 'how good someone is at something' since +40% is the same as 40% in a roll under system - you can tell how skilled someone is just as intuitively, but can resolve results just a little faster.
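A small sketch of the two procedures side by side, with made-up rolls; the odds are essentially identical, and the only practical difference is how the degree of success falls out of the dice.

```python
import random

def roll_under(skill):
    roll = random.randint(1, 100)
    success = roll <= skill
    dos = skill - roll if success else 0   # degree of success needs a subtraction
    return success, dos

def roll_over_dc100(skill):
    roll = random.randint(1, 100)
    total = roll + skill
    # "Meets or beats 100" succeeds on 41 rolls in 100 for skill 40 (60-100);
    # use total > 100 if you want the odds to match roll-under exactly.
    success = total >= 100
    dos = total - 100 if success else 0    # just drop the hundreds place
    return success, dos

print(roll_under(40), roll_over_dc100(40))
```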
When I say objectively, I mean 'subjectively'.  When I say literally, I mean 'figuratively'.  
And when I say that you are a horse's ass, I mean that the objective truth is that you are a literal horse's ass.

There is nothing so useless as doing efficiently that which should not be done at all. - Peter Drucker

Herr Arnulfe

Quote from: deadDMwalking;731838Ultimately, the 'roll under' is fine, unless/until you care about degree of success.  If your attribute is a 62 and your roll is a 39, a lot of people have trouble quickly determinig the difference (23).  On a roll over system you roll a 61 (61+62=123).  Subtracting 100 is really easy so your degrees of success are easier to calcuate (dropping the 100s place doesn't even require actual subtraction).
Agreed, Degrees of Success can be annoying in increments of 10 or less. Roll-under % works better with larger DoS increments of 20 or 30, which you can just eyeball for the most part.
 

Bill

Quote from: Herr Arnulfe;731833For example, if you're rolling a challenging task it's either 75%, 50% or 10% of your base skill IIRC. I might have the exact percentages wrong, just going from memory.

That's the system I came up with from playing Elric and COC. I never felt the need to have a break point between 50 and 100 though. So no 75 percent.

I think Elric and COC used to use 20 percent, natural 5, natural 1-2 as I recall.
I tweaked it to 50 percent, 10 percent, natural 1-2.

Herr Arnulfe

Quote from: Bill;731852That's the system I came up with from playing Elric and COC. I never felt the need to have a break point between 50 and 100 though. So no 75 percent.

I think Elric and COC used to use 20 percent, natural 5, natural 1-2 as I recall.
I tweaked it to 50 percent, 10 percent, natural 1-2.
Yeah it's really just the 75% calculation that can require some brainpower, but jumping straight to 50% might be too big a leap. I think straight-up -10% stacking penalties are good enough for me, although the occasional arithmetic doesn't detract from my enjoyment of the game (as long as the GM doesn't make every test a skill x 75% challenge. :))
 

arminius

Quote from: Herr Arnulfe;731843Agreed, Degrees of Success can be annoying in increments of 10 or less. Roll-under % works better with larger DoS increments of 20 or 30, which you can just eyeball for the most part.

This just gets back to the schizoid nature of generating performance outputs in the BRP family. I don't think "degree of success" was in the original system in the sense of succeeding or failing by a linear amount. (The only thing like that IIRC was the Defense skill in RQ I/II. But it's a really marginal case.) All that mattered was whether you got a crit, special/impale, regular success, or fumble. The linear degree of success was tacked on later, mainly as a tie-breaker, but I don't think it's truly needed in most cases.

That said, I think what Justin wrote about bell curves basically boils down to what I said about Fudge resolution. I do think that to the extent you can measure it, performance tends to be distributed normally around the mean for an individual.

Improvement in raw ability over time/effort, though, I think is more likely to follow a curve of "diminishing returns"--steep on the left, shallower on the right.

The upshot, if you agree with these premises, is that skill + bell-curve randomizer vs. difficulty number (with linear situational mods) is a good resolution system, while advancement costs should be adjusted so that the performance curve has the right shape measured against time/effort.

I seem to remember doing some graphs that showed GURPS does just this, but I don't know if I ever posted them. Basically you'd plot chance of success against character points, instead of plotting it against skill level.
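The shape is easy to reproduce without graphs. The sketch below assumes a 3d6 roll-under check and a made-up escalating point-cost schedule (not the actual GURPS tables) and prints chance of success against cumulative points spent; the diminishing returns on the right-hand side show up immediately.

```python
from itertools import product

def p_success(target):
    # Chance that 3d6 rolls target or less.
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for r in rolls if sum(r) <= target) / len(rolls)

# Hypothetical escalating cost: each further +1 to skill costs more points.
costs = [1, 2, 4, 8, 12, 16, 20]
base_skill = 8          # starting effective skill, also made up

spent = 0
for step, cost in enumerate(costs, start=1):
    spent += cost
    skill = base_skill + step
    print(f"{spent:3d} pts -> skill {skill:2d} -> {p_success(skill):5.1%}")
```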

Justin Alexander

Quote from: Herr Arnulfe;731830So if I'm following your comparative hypotheticals correctly, it sounds like you prefer a wide range of possible outcomes, but with the results clustered more towards the center? How does this improve the "feel" of a game in your opinion?

I don't actually care. I'm describing what the mechanic does.

Quote from: Herr Arnulfe;731830That's certainly one way of viewing the world and how physics works. At the gaming table I think it would be hard to convince many players that their +5% modifier for using a tripod is actually better than the +12% modifier gained by their average-skilled colleague, because it represents a higher percentage of their base value. Or that a highly skilled sniper shouldn't bother with a tripod because it only gives him a 1% boost anyway.

First: I never said anything about "better".

Second: I have literally never played at a table where people are calculating the exact odds of success and then comparing those odds before and after applying various modifiers. It's a non-issue except for armchair wanking.

Even as a hypothetical exercise, it's pretty ridiculous: Say that you're applying an ability modifier, a skill modifier, an equipment modifier, and a situational modifier to a 3d6 die roll vs. DC 10. How does that conversation look, exactly? "Okay, if I apply the situational modifier first the percentage change in probability of success is huge, but then the percentage change in probability of success from my skill is tinier than it would have been if the situation wasn't so favorable. But wait! If I apply my skill modifier first, then my skill is having a huge effect on the percentage change in the probability of success and the situational modifier is having only a tiny effect! Whoa!"

This sort of thinking just betrays a really fundamental incomprehension of the math.

Let me put it another way: You've got two coupons, both giving you $0.50 off on a $2 item. The first coupon reduces the price of the item by 25% (from $2 to $1.50). The second coupon reduces the price of the item by 33% (from $1.50 to $1). HOW IS THIS POSSIBLE? WHY IS THE VALUE OF THE SECOND COUPON SO MUCH LARGER THAN THE VALUE OF THE FIRST COUPON?
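The same point in dice terms, for anyone the coupon analogy doesn't reach: on a linear die a flat modifier moves the probability by the same absolute amount wherever it lands, while the relative change depends entirely on the starting chance. A short sketch with arbitrary numbers:

```python
for base in (5, 40, 85):          # chance of success before the bonus, in %
    after = base + 10             # a flat +10% modifier
    print(f"{base:2d}% -> {after:2d}%: absolute +10 points, "
          f"relative +{(after - base) / base:.0%}")
```

The +10 is worth the same ten points of success chance in every row; only the relative figure changes, exactly like the two coupons.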
Note: this sig cut for personal slander and harassment by a lying tool who has been engaging in stalking me all over social media with filthy lies - RPGPundit

Herr Arnulfe

Quote from: Arminius;731859This just gets back to the schizoid nature of generating performance outputs in the BRP family. I don't think "degree of success" was in the original system in the sense of succeeding or failing by a linear amount. (The only think like that IIRC was the Defense skill in RQ I/II. But it's a really marginal case.) All that mattered was whether you got a crit, special/impale, regular success, or fumble. The linear degree of success was tacked on later, mainly as a tie-breaker, but I don't think it's truly needed in most cases.
Maybe not in BRP, but in WFRP 1e degrees of failure and success were part of the standard skill test resolution mechanics. In WFRP 2e and the 40KRP games they switched DoS/DoF from increments of 20 to increments of 10, which can make a big difference in terms of mental processing time.

Quote from: Arminius;731859That said, I think what Justin wrote about bell curves basically boils down to what I said about Fudge resolution. I do think that to the extent you can measure it, performance tends to be distributed normally around the mean for an individual.
If you're not measuring degrees of success or failure, does it really matter where the majority of results are clustered? Aside from the possible desire to have modifiers and skill differentials shrink at the "lips of the bell" (as discussed earlier) the end result is just a binary outcome.
 

ZWEIHÄNDER

Roll Under %
I use the "roll under" method in ZWEIHÄNDER, reading 2d10 as a tens die and a ones die for a result of 1 to 100.

Opposed Tests
In cases where Degrees of Success are needed with an opposed Skill Test, the system references the ones die for comparison, with a small modifier for Primary Attributes.

Critical Success and Critical Failure
If you roll matching dice (doubles) on percentiles and succeed, you generate a Critical Success. If you roll matching dice and fail, you generate a Critical Failure.

Simple, easy and intuitive!
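A rough sketch of the basic check and the doubles rule as described above; this is a paraphrase in code, not the published text, and it reads 00 as 100, which not every percentile game does.

```python
import random

def zweihander_style_check(chance):
    tens, ones = random.randint(0, 9), random.randint(0, 9)
    roll = 100 if (tens == 0 and ones == 0) else tens * 10 + ones   # reading 00 as 100
    success = roll <= chance
    if tens == ones:                          # matching dice
        return "critical success" if success else "critical failure"
    return "success" if success else "failure"

print(zweihander_style_check(45))
```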
No thanks.

Herr Arnulfe

Quote from: Justin Alexander;731861I don't actually care. I'm describing what the mechanic does.
OK, I didn't actually need an explanation of what a bell curve does, but thanks anyway for the elaborate demonstration.

Quote from: Justin Alexander;731861Even as a hypothetical exercise, it's pretty ridiculous: Say that you're applying an ability modifier, a skill modifier, an equipment modifier, and a situational modifier to a 3d6 die roll vs. DC 10. How does that conversation look, exactly? "Okay, if I apply the situational modifier first the percentage change in probability of success is huge, but then the percentage change in probability of success from my skill is tinier than it would have been if the situation wasn't so favorable. But wait! If I apply my skill modifier first, then my skill is having a huge effect on the percentage change in the probability of success and the situational modifier is having only a tiny effect! Whoa!"?
I'm not the one arguing in favour of scaling modifiers to base skill (I much prefer flat modifiers). I was just pointing out that the alleged advantage you cited of having fewer results in the "lip" of the bell isn't necessarily an advantage to everyone, nor is it always more realistic. It seemed to me as if you were assuming the superior verisimilitude (or fun factor, or something) of this approach was a given.
 

arminius

Quote from: Herr Arnulfe;731862Maybe not in BRP, but in WFRP 1e degrees of failure
Interesting, thanks.

QuoteIf you're not measuring degrees of success or failure, does it really matter where the majority of results are clustered? Aside from the possible desire to have modifiers and skill differentials shrink at the "lips of the bell" (as discussed earlier) the end result is just a binary outcome.

Let me put it this way. Skill differentials in terms of scores aren't a valid point of comparison. The thing that you should be comparing with respect to skills is the time/effort/resource/risk that goes into them. If you do this for GURPS or JAGS or Hero--because those are games where it's easy to measure--and count just the points that go into the skill (not underlying attribute), then for a given difficulty you'll find you get lots of bang for the buck on the first few points, then less and less.

But if you hold skill fixed and vary difficulty, you'll find that most of the action is around the middle. I.e. a small change in difficulty that won't matter for someone who is highly skilled or someone who's barely skilled will matter quite a lot for an average person.
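That's easy to check numerically. A sketch assuming a 3d6 roll-under test with difficulty as a flat penalty to the target; the skill values are illustrative.

```python
from itertools import product

def p_success(skill, difficulty):
    # Chance that 3d6 rolls (skill - difficulty) or less.
    target = skill - difficulty
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for r in rolls if sum(r) <= target) / len(rolls)

for skill in (6, 10, 16):     # barely skilled, average, highly skilled
    shift = p_success(skill, 1) - p_success(skill, 2)   # cost of one extra point of difficulty
    print(f"skill {skill:2d}: +1 difficulty costs {shift:.1%} chance of success")
```

The one-point difficulty shift barely registers at the low and high ends but takes a big bite out of the average character's odds, which is the point above.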

Herr Arnulfe

Quote from: Arminius;731876Let me put it this way. Skill differentials in terms of scores aren't a valid point of comparison. The thing that you should be comparing with respect to skills is the time/effort/resource/risk that goes into them. If you do this for GURPS or JAGS or Hero--because those are games where it's easy to measure--and count just the points that go into the skill (not underlying attribute), then for a given difficulty you'll find you get lots of bang for the buck on the first few points, then less and less.
Right, earlier in the thread we talked about "diminishing returns" for XP spent at higher skill levels, and it was pointed out that systems with linear randomizers achieve this through escalating XP costs, instead of diminishing skill improvements.
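To make that concrete, here's a tiny sketch with a made-up cost formula (not taken from any particular game) showing how escalating XP costs give a linear d% skill the same diminishing-returns-per-XP shape that a bell curve produces on its own.

```python
def xp_cost(new_value):
    # Hypothetical rule: raising a d% skill by +5 costs (new value / 10) XP, rounded up.
    return -(-new_value // 10)

skill, total_xp = 30, 0
while skill < 90:
    skill += 5
    total_xp += xp_cost(skill)
    print(f"skill {skill:2d}%  cumulative XP {total_xp:3d}")
```

Each +5% is the same flat gain in success chance, but it costs progressively more XP, so chance of success plotted against XP spent bends over just like the bell-curve case.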