
Is "roll under %" a disdained mechanic?

Started by Shipyard Locked, February 14, 2014, 12:01:59 PM


Phillip

I personally prefer a method that clearly states n/d chance of event, since the odds are what interest me and I always end up converting something like 14+ on d20 into that mentally anyway!

I've got a friend who is demented enough that he occasionally needs to ask whether 30% means a roll over 30, but he does the same thing in the other direction with AD&D saving throws, and most gamers don't have that problem.
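The mental conversion described here is simple enough to script; a minimal sketch (the helper names are mine, not from any system):

```python
# Hypothetical helpers for the conversion described above: turning a
# roll-over or roll-under target number into a plain probability.

def roll_over_chance(target: int, sides: int = 20) -> float:
    """Chance of rolling `target` or higher on one die."""
    return max(0, sides - target + 1) / sides

def roll_under_chance(target: int, sides: int = 100) -> float:
    """Chance of rolling `target` or lower on one die."""
    return min(sides, max(0, target)) / sides

print(roll_over_chance(14))   # 14+ on d20 -> 0.35 (7 results out of 20)
print(roll_under_chance(30))  # 30% roll-under -> 0.3
```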
And we are here as on a darkling plain  ~ Swept with confused alarms of struggle and flight, ~ Where ignorant armies clash by night.

Phillip

Quote from: Warthur;731211Either way, with COC I think they wanted to reflect the distinction where in the modern era most of your professional skills are a matter of education and training, and there are a lot of pursuits you can't simply expect to pick up and bluff your way through without any prior training, hence your skills coming from your Education and Intelligence pools rather than being directly derived from stats.
Nah, what they mainly wanted was a more streamlined rules set than RuneQuest and Stormbringer (which derived skill bonuses and penalties from tailored stat-based formulas).

It's the same motive as dumping hit locations, for instance; it's not like there's some special 'modern' physics that makes a head shot as deadly as one to the toe.

One Horse Town

Quote from: LordVreeg;731561I always prefer roll under %, actually.  I use it in 90% of the games I run and design.

I see what you did there. :D

Omega

Quote from: Emperor Norton;731572I don't mind roll under, or even linear progression of success, but for some reason I'm just not fond of the d% dice. It's probably just my own personal neuroses. It won't stop me from playing a game, but for some reason I just don't like them.

Some players have a hard time reading them. I used to have a bit of a mental hurdle with percentile dice myself; then it clicked, and I realized I'd been reading them right all along. They just felt off for some reason.

LordVreeg

Quote from: One Horse Town;731632I see what you did there. :D

well...i thought there was a chance I'd get away with it....
Currently running one live group and two online groups in my 30+ year old campaign setting.  
http://celtricia.pbworks.com/
Setting of the Year, 08 Campaign Builders Guild awards.
'Orbis non sufficit'

My current Collegium Arcana online game, a test for any ruleset.

Elfdart

Interesting thread.

The only issue I can think of regarding d% is that way back when, they didn't make dedicated percentile dice (they had just started making d10s; most gamers were using d20s where half the numerals were one color and half were another: "Yellow is high; red is low"). So rolling for percentages on a regular basis could be grating: either you rolled two dice and called which one was the tens die, or you threw a single die twice. If you were trying to play at school or anywhere else without much space or time to spare, this could be a headache. As a result, we often rounded to the nearest 5% and rolled a d20.
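For the curious, the two workarounds described here are easy to model; a sketch (function names are mine), assuming the modern convention that a double zero reads as 100:

```python
import random

# A sketch (not any game's official procedure) of the two d% workarounds
# described above, assuming double zero reads as 100.

def percentile_two_dice() -> int:
    """Roll two d10s, calling one as the tens die and one as the ones die."""
    tens = random.randint(1, 10) % 10   # treat a rolled 10 as 0
    ones = random.randint(1, 10) % 10
    value = tens * 10 + ones
    return 100 if value == 0 else value

def d20_shortcut(target_pct: int) -> bool:
    """Round the target to the nearest 5% and resolve it on a single d20."""
    steps = round(target_pct / 5)       # number of 5% increments
    return random.randint(1, 20) <= steps
```

Rolling one die twice works the same way, with the first throw as the tens digit; the d20 shortcut simply trades 1% granularity for speed.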

But like I said, that was a long time ago in a galaxy far, far away.
Jesus Fucking Christ, is this guy honestly that goddamned stupid? He can't understand the plot of a Star Wars film? We're not talking about "Rashomon" here, for fuck's sake. The plot is as linear as they come. If anything, the film tries too hard to fill in all the gaps. This guy must be a flaming retard.  --Mike Wong on Red Letter Moron's review of The Phantom Menace

Adric

The only problem I have with D%, and it's the same issue I have with a D20, is that it has a large range of possible results, with a flat probability line. This can cause the need to chase a large modifier or low target number through munchkining by some players in some systems. I personally prefer a probability curve achieved by 2DX, where a middle score is the average.

I do appreciate the elegance of d%, though: the chance of a given roll-under or roll-over result maps directly to the number on the dice.

deleriad

Quote from: Adric;731696The only problem I have with D%, and it's the same issue I have with a D20, is that it has a large range of possible results, with a flat probability line. This can cause the need to chase a large modifier or low target number through munchkining by some players in some systems. I personally prefer a probability curve achieved by 2DX, where a middle score is the average.

This is something that really puzzles me. Assuming flat rate modifiers as opposed to proportional ones (e.g. RQ6, CoC7) then the impact of the modifier is the same regardless of your skill except at the margins. E.g. If your modifier brings the success chance down to 5% (or up to 95%) then further modifiers are (largely) irrelevant. Thus "optimising" (aka being a munchkin by stacking as many modifiers as you can get away with) is simple and transparent.

Using a bell curve system, where a modifier has more effect the closer to average you are, encourages munchkins to optimise the effect of each modifier. Because the impact of modifiers is variable, there is more for a munchkin to play with.

The other thing that is important here is the communication of risk. One of the things statisticians are told never to do is to present effects as ratios. For example, if eating sausages once a day for a year "trebles your chance of bowel cancer" that sounds terrifying. On the other hand, if it increases the incidence of bowel cancer from 0.02% to 0.06%, suddenly it sounds insignificant. (Those stats are made up.)

So when someone says that a modifier "quadruples your chance of failure" if your skill is 100% but only slightly increases it if your skill is 40% that sounds completely wrong. However in BRP if your skill is 100% you have a 5% chance of failure (if you roll). A -20% modifier gives you a 20% chance of failure. If your skill is 40% then your chance of failure increases from 60% to 80% ("a 33% increase..."). Which is all to say that proportional increases are a very bad way of explaining these things.
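The arithmetic here can be checked directly; a minimal sketch, assuming the BRP-style convention that a roll of 96-00 always fails (so failure is floored at 5%):

```python
# Sketch of the numbers above, assuming a BRP-style roll-under check where
# 96-00 always fails, so the failure chance never drops below 5%.

def failure_chance(skill: int) -> float:
    return max(5, 100 - skill) / 100

for skill in (100, 40):
    before = failure_chance(skill)
    after = failure_chance(skill - 20)    # apply a -20% modifier
    print(f"skill {skill}%: failure {before:.0%} -> {after:.0%}, "
          f"+{after - before:.0%} absolute, x{after / before:.2f} relative")
```

With a -20% modifier, skill 100 goes from 5% to 20% failure (the "quadrupling"), while skill 40 goes from 60% to 80% (the "33% increase"), exactly as described.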

Personally, in an rpg, I prefer "linear" skill ratings whether d100 or d20 with linear modifiers. I also prefer modifiers to be big and rare and (on the whole) non-stacking. When rolling for a skill during a game I would rather have more chance of an extreme result because these are extreme moments.

One reason I don't like proportional modifiers (e.g. skill/2 or skill/10) is that they make some things feel impossible when they shouldn't be. E.g. a bloke tightrope-walks between hot air balloons. That sounds like something that's as hard as hard can be without being impossible. If you rate that at skill/10 then you need 1000% in tightrope walking. If on the other hand you rate it at minus 80%, you *only* need a skill of 180%. This helps keep skill inflation in check. Of course this means that in a game session almost anyone trying to tightrope walk has in theory a 5% chance of success. Great, that's the kind of high-stakes unexpected victory which can overthrow any kind of railroad.

So give me linear scales, big linear modifiers and let the dice provide the drama. Surely that's the unpredictability which makes rpgs so much fun.
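The contrast drawn in this post can be illustrated numerically; a sketch, with dice and modifier sizes chosen for the example rather than taken from any particular game:

```python
from collections import Counter
from itertools import product

# Illustration of the contrast above: a flat modifier on a linear d100 scale
# versus an equivalent shift on a 3d6 bell curve.

def d100_success(skill: int) -> float:
    """Linear roll-under d100, clamped to the 5%/95% margins."""
    return min(95, max(5, skill)) / 100

THREE_D6 = Counter(sum(r) for r in product(range(1, 7), repeat=3))

def success_3d6(target: int) -> float:
    """Chance of rolling `target` or less on 3d6."""
    return sum(n for total, n in THREE_D6.items() if total <= target) / 216

# Linear scale: -20% costs exactly 20 points of chance everywhere between
# the margins.
print([round(d100_success(s) - d100_success(s - 20), 3) for s in (40, 60, 80)])
# -> [0.2, 0.2, 0.2]

# Bell curve: a -2 shift costs the most near the middle of the curve.
print([round(success_3d6(t) - success_3d6(t - 2), 3) for t in (7, 10, 13)])
# -> [0.167, 0.241, 0.213]
```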

LordVreeg

Quote from: deleriad;731699This is something that really puzzles me. Assuming flat rate modifiers as opposed to proportional ones (e.g. RQ6, CoC7) then the impact of the modifier is the same regardless of your skill except at the margins. [...] So give me linear scales, big linear modifiers and let the dice provide the drama. Surely that's the unpredictability which makes rpgs so much fun.

I prefer that as well; it also allows for longer-duration games with more granular changes, so the PCs feel improvement without becoming impossibly powerful in a year of game time.

Adric

Quote from: deleriad;731699This is something that really puzzles me. Assuming flat rate modifiers as opposed to proportional ones (e.g. RQ6, CoC7) then the impact of the modifier is the same regardless of your skill except at the margins. [...] So give me linear scales, big linear modifiers and let the dice provide the drama. Surely that's the unpredictability which makes rpgs so much fun.

I'm just not convinced that increasing chances of success by such small increments is necessary. On a d20, each number has a flat 5% chance of occurring. That means that the smallest amount you can improve boosts your chance by 5%.

For D%, the smallest amount you can improve is 1%. If skills regularly increase by more than 1% in a given system, why track it at such fine detail? Just round it off to the nearest 5% and use D20 or the nearest 10% and use D10.

Another problem with a pass/fail system that uses d% is that 9 times out of 10, the second die won't matter. If the target is, say, 55%, the tens die needs to be a 5 for there to be any tension on the ones die. If the target number is a flat multiple of 10, and there are no modifiers, the second die never matters at all.
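This is easy to verify exhaustively; a sketch, using the reading where a tens die of 5 covers 51-60 (so the dice span 1-100), which matches the framing here:

```python
# Sketch checking the observation above: for a roll-under-or-equal d% target,
# how often does the outcome still depend on the ones die?

def ones_die_matters(target: int) -> float:
    """Fraction of d100 rolls whose outcome still depends on the ones die
    once the tens die is known."""
    undecided = 0
    for tens in range(10):
        band = range(tens * 10 + 1, tens * 10 + 11)   # e.g. tens=5 -> 51..60
        if any(v <= target for v in band) and any(v > target for v in band):
            undecided += 1                            # both outcomes possible
    return undecided / 10

print(ones_die_matters(55))   # 0.1: only the 51-60 band is in doubt
print(ones_die_matters(50))   # 0.0: a flat multiple of 10 is never in doubt
```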

Brad

Quote from: Adric;731714Another problem with a pass/fail system that uses d% is that 9 times out of 10, the second die won't matter. If the target is, say, 55%, the tens die needs to be a 5 for there to be any tension on the ones die. If the target number is a flat multiple of 10, and there are no modifiers, the second die never matters at all.

This sounds like a problem with not understanding how statistics work vs. an actual issue with uniform probability distribution. A 47% chance to do something in BRP means you literally have a 47% chance of succeeding when you roll d100. I don't think you're wrong by saying lack of "tension" with the ones-die is problematic, but it just shows that, fundamentally, people really don't understand numbers that well.
It takes considerable knowledge just to realize the extent of your own ignorance.

Justin Alexander

Quote from: Herr Arnulfe;731380I'm still curious to know at which point along the proficiency scale a bell curve is most desirable, and how that translates to "feel of the game" at the table.

What a bell curve does is create a consistency in outcome while still allowing for a wide range of potential outcomes. This results in a reduced "swinginess" in common outcomes, but doesn't create a "claustrophobic" environment where characters are frequently either guaranteed success or failure.

One way to think about this is to stop thinking of a character's "skill" as a specific number to which you add a die roll. Instead, think of the character's skill as the range of potential outcomes they can experience.

Say that you had a character with skill 1d6+20: Their range of skill produces results from 21-26. That means for any difficulty rated 21 or lower, they automatically succeed. And for any difficulty rated 27 or higher, they're guaranteed to fail.

Now, take a character with skill 1d20+13: Their range of skill produces results from 14-33. Their performance is still centered on the same range as the 1d6+20 mechanic, but you can see that the range of outcomes is much larger and the results are going to be a lot more "swingy" (they're going to fail on tasks that the more reliable die roll mechanic would allow them to automatically succeed at and they're going to succeed at tasks that the more reliable die roll mechanic would make impossible for them).

Now, take a character with skill 3d6+13: Their range of skill produces results from 16-31. This is still centered on the same range, but 67% of their results are going to fall into the 21-26 range produced by the 1d6+20 mechanic (whereas only 30% of the 1d20+13 results did).

You can very clearly see that the bell curve is delivering consistency akin to a smaller die range, but with nearly the full range of potential outcomes you see on the large die.

The other thing this approach helps to make clear is that modifiers have a consistent effect on the range of potential outcomes: If you're performing under favorable circumstances that grant a +2 bonus to your check, the ranges shift to 23-28, 16-35, and 18-33.

(And I would argue that focusing on one particular difficulty within that range and analyzing what happens to it under a certain modifier is deceptive unless you approach it with a proper mindset. Yes, it's true: A task that's sitting right in the middle of my "core range" is going to be more affected by favorable circumstances than tasks that are huge longshots. But when we put it like that, it makes sense: That core range represents the cusp between what I can routinely achieve and what I'm incredibly unlikely to achieve. It's going to be tasks sitting on that cusp (and not radically out of my normal skill level) that are going to be most affected by circumstance.)
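The percentages quoted in this post check out exactly; a quick sketch (the helper is mine):

```python
from fractions import Fraction
from itertools import product

# Sketch verifying the figures above: how often each mechanic lands inside
# the 21-26 "core range" produced by 1d6+20.

def chance_in_range(dice: int, sides: int, bonus: int, lo: int, hi: int) -> Fraction:
    totals = [sum(r) + bonus for r in product(range(1, sides + 1), repeat=dice)]
    return Fraction(sum(lo <= t <= hi for t in totals), len(totals))

print(chance_in_range(1, 6, 20, 21, 26))         # 1 -- 1d6+20 always in range
print(chance_in_range(1, 20, 13, 21, 26))        # 3/10 -- the 30% quoted above
print(float(chance_in_range(3, 6, 13, 21, 26)))  # ~0.676 -- the "67%"
```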
Note: this sig cut for personal slander and harassment by a lying tool who has been engaging in stalking me all over social media with filthy lies - RPGPundit

Bill

Quote from: 3rik;731257So it's basically lowest roll wins, with success levels added. Not my preference but I guess it works, though I think it may fail to take into account the difference in skill level at certain values. I prefer blackjack or margin of success because it automatically benefits the player with the higher skill level.

Low rolls win, with success levels 'overriding' that? Not sure how to best describe it.

The benefit under my system for the higher skilled person is an increased chance to 'override' the lower skilled person.

A person with a 180 skill would have override values of <90 and <18,
rolling against someone with a 40 skill who has override values of <20 and <4.

Bill

Quote from: Herr Arnulfe;731261Highest roll that's still below the target number wins (e.g. RQ6).

That looks mechanically sound but ughhhh at low roll good but high roll also kinda good.....:)

Bill

Quote from: Snowman0147;731293Thus the well-known and fabled truth is told. No, really, there is no cure for stupid.

This. Why would anyone do that? I actually saw a GM make people roll Climb percentages in CoC once, for stairs. Duh!