
Are You What You Pretend To Be?

Started by Anon Adderlan, February 24, 2020, 07:23:56 AM


amacris

QuoteSearle's assertion that no component of the Chinese Room understands Chinese misses the point that the system as a whole "understands" Chinese. Never mind that the term "understand" is already nebulous and ill-defined in that context. Hell, the existence of those stupid Amazon machines that do NLP when you ask them to play music or buy trinkets, and that manage to follow your instructions more than half the time, implies that the system "understands" English. I will generally grant that yes, the machine doesn't have the same kind of grasp of the meanings of the words as we do, what with vectorization being weird and the fact that we don't have a machine with all the modalities of a human at the moment, but it is certainly a step in the right direction. Alexa is to strong AI what a squirrel might be to us.

QuoteExactly. Searle suggests that being able to produce answers about a horse wouldn't give you any real understanding of a horse, but that's because it's asking you to picture an AI as a blind person who's been locked in a room all their life. If it was a true AI with human-level knowledge and capacity, then it would have senses and sense memory -- not just words. It would demonstrate understanding of a horse by being able to identify a horse by its appearance and behavior. Studying the workings of such a true AI, even in Chinese, one could figure out how visual images are processed and thus what a horse looks like. Or how sounds are processed and thus what a horse sounds like. And so forth.

Searle isn't "missing the point" that the system as a whole "understands" Chinese, and if you think that's a serious rebuttal of Searle's argument you haven't done your homework. All you've done is offer up the discredited "system reply" to Searle, which was well rebutted by Searle himself, Clark, Chalmers, Copeland, Harnad... in the 1980s. Virtually no serious philosophers defend that argument anymore.

The reason no one defends it is that it entirely misses the point. What Searle is really pointing out is that what appears to be "understanding" from a third-party perspective has been detached from the subjective first-person experience (qualia) of understanding. In the Chinese Room, a third party interacting with the room sees evidence that the Chinese Room system understands Chinese. However, there is nothing in the Chinese Room that is experiencing the qualia of understanding. We humans, on the other hand, DO experience the qualia of understanding. Therefore we are not functioning as Chinese Rooms. There is something going on that a purely algorithmic approach to following instructions does not have.

As Searle wrote in 1980: "The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state."
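To make "computational processes and their output" concrete, here is a minimal sketch of a Chinese Room as a program. The rule table is invented for illustration; the point is that nothing in the implementation represents what any symbol means:

# A toy Chinese Room: the program follows rules that map input symbols to
# output symbols by rote lookup. Nothing in it represents what the symbols
# mean -- the rule table below is an invented illustration, not real grammar.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会，说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    """Follow the rulebook: pure symbol manipulation, no semantics, no qualia."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

# A third party conversing with this function might credit it with
# "understanding" Chinese; inspecting the implementation shows only string
# matching. Whether scaling this up (or swapping the table for statistics,
# as Alexa does) ever amounts to understanding is exactly what is in dispute.
print(chinese_room("你好吗?"))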

Dimitrios

Quote from: amacris;1123015Searle isn't "missing the point" that the system as a whole "understands" Chinese, and if you think that's a serious rebuttal of Searle's argument you haven't done your homework. All you've done is offer up the discredited "system reply" to Searle, which was well rebutted by Searle himself, Clark, Chalmers, Copeland, Harnad... in the 1980s. Virtually no serious philosophers defend that argument anymore.

The reason no one defends it is that it entirely misses the point. What Searle is really pointing out is that what appears to be "understanding" from a third-party perspective has been detached from the subjective first-person experience (qualia) of understanding. In the Chinese Room, a third party interacting with the room sees evidence that the Chinese Room system understands Chinese. However, there is nothing in the Chinese Room that is experiencing the qualia of understanding. We humans, on the other hand, DO experience the qualia of understanding. Therefore we are not functioning as Chinese Rooms. There is something going on that a purely algorithmic approach to following instructions does not have.

As Searle wrote in 1980: "The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state."

But would the Chinese room play an evil PC?

We were never into evil player characters, but since our gaming is influenced more by swords and sorcery than by Tolkienesque high fantasy, I suppose we do often play amoral PCs in the same sense that most of the famous classic swords and sorcery heroes were fairly amoral. Fafhrd and the Gray Mouser and Conan never made any bones about the fact that they were straight up thieves when it suited them.

Although maybe for my next character I'll play a Chinese room...

jeff37923

Quote from: Omega;1122995Except that is the problem. There are some that want to, try to, or actually do start blurring the line between character and player.

To do so causes it to stop being a game and start being a mental disorder. Wasn't Rona Jaffe's Mazes and Monsters based on that premise?
"Meh."

GnomeWorks

Quote from: amacris;1123015However, there is nothing in the Chinese Room that is experiencing the qualia of understanding. We humans, on the other hand, DO experience the qualia of understanding. Therefore we are not functioning as Chinese Rooms. There is something going on that a purely algorithmic approach to following instructions does not have.

Prove to me you have subjective conscious experience, and an accompanying mental life that can experience these qualia.
Mechanics should reflect flavor. Always.
Running: Chrono Break: Dragon Heist + Curse of the Crimson Throne (D&D 5e).
Planning: Rappan Athuk (D&D 5e).

Shasarak

When I play RPGs it is more like me playing a role, Wizard, Fighter, etc., rather than me pretending to be Gandalf the Grey.

I have not played any Evil characters. One of my friends had a character that he claimed was Evil, but in reality it was just him thinking up different "evil" reasons for doing the same things the other Good members of the party were doing. Saving the village? Well, it's just because I want to control the village as part of my Evil empire.

If I was going to play an Evil campaign then I would need a party that was much more proactive than the standard party. A successful Evil party needs an overarching goal to aim towards rather than passively reacting to the plot hook of the week.
Who da Drow?  U da drow! - hedgehobbit

There will be poor always,
pathetically struggling,
look at the good things you've got! -  Jesus

Omega

Quote from: Chris24601;1123001I actually had a badge made for the LARPs that the Living Arcanis team ran that said "My PC has Bluff +X, Diplomacy +Y, Intimidate +Z; the player does not."

The team in charge actually said, "Your character's stats don't matter. You have to actually say it yourself."

"Then why the hell are you having us play our characters? What's the point of playing a social character if my Charisma 18 and appropriate skills don't actually do anything in the biggest social-based adventures of the Living campaign that you actually hold to determine how the political events of the next season's modules will unfold?"

If you want to make the players actually act everything out and solve actual puzzles themselves, don't use a system that gives stats to their mental/social abilities.

Yeah, I've seen a few that do that, which makes having stats pointless. At the very least the stats should give a bonus to success (a sketch of that hybrid approach follows below). But as usual, it varies massively from one game to the next. Some pretty much have no system at all. Others are RPGs on legs, and some sort of randomizer system is used for some actions. Similarly, it bugs me when a LARP has a system for armour but then makes you wear the real thing to get any points. What's the point of an artificial system that only works if you have the real object rather than a prop?

I suspect it's because these are usually not actual game designers, and they are just kit-bashing stuff willy-nilly without really considering the actual functionality. Looks good on paper. Not in practice.

Cthulhu LIVE just has the EDU, DEX, CON and POW stats, plus Magic points, Wound points, Luck points and Sanity. DEX covers things like skilled fine manipulation of objects, sleight of hand, etc. The rest is on the players.
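A minimal sketch of the hybrid resolution Omega is pointing at, where the player still acts the scene out but the character sheet grants a bonus. The stat names, bonuses, and difficulty number here are hypothetical, not from Living Arcanis, Cthulhu LIVE, or any published system:

import random

def social_check(stat_bonus: int, performance_bonus: int, difficulty: int = 15) -> bool:
    """Hybrid LARP check: d20 + the character's Bluff/Diplomacy modifier,
    plus a small bonus the GM awards for the player's live performance."""
    roll = random.randint(1, 20)
    return roll + stat_bonus + performance_bonus >= difficulty

# A Charisma 18 social specialist (+8) whose player gave a decent speech (+2)
# succeeds on a roll of 5 or better -- the sheet matters, but so does the acting.
print(social_check(stat_bonus=8, performance_bonus=2))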

jhkim

Extending the off-topic branch a little more,

Quote from: jhkimExactly. Searle suggests that being able to produce answers about a horse wouldn't give you any real understanding of a horse, but that's because it's asking you to picture an AI as a blind person who's been locked in a room all their life. If it was a true AI with human-level knowledge and capacity, then it would have senses and sense memory -- not just words. It would demonstrate understanding of a horse by being able to identify a horse by its appearance and behavior. Studying the workings of such a true AI, even in Chinese, one could figure out how visual images are processed and thus what a horse looks like. Or how sounds are processed and thus what a horse sounds like.
Quote from: amacris;1123015What Searle is really pointing out is that what appears to be "understanding" from a third-party perspective has been detached from the subjective first-person experience (qualia) of understanding. In the Chinese Room, a third party interacting with the room sees evidence that the Chinese Room system understands Chinese. However, there is nothing in the Chinese Room that is experiencing the qualia of understanding. We humans, on the other hand, DO experience the qualia of understanding. Therefore we are not functioning as Chinese Rooms. There is something going on that a purely algorithmic approach to following instructions does not have.

As Searle wrote in 1980: "The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state."
If you're just going to assert by fiat that computers can't have understanding, then the Chinese room analogy is pointless. Just stick with the assertion and don't bother with the Chinese room. If the analogy has a point, then people should be able to analyze it and point out problems with the argument.

My problem with the analogy is that it relies on picturing the hypothetical AI as something that only interacts in words back and forth. That is the equivalent of a blind person who has lived their entire life locked inside a box. That blind person indeed has no understanding of what a horse is - they have never seen a horse or touched a horse. I think it's correct to say that both the Chinese room and the blind-person-in-a-box have no understanding of a horse.

However, a true AI has more than just word rules. It can understand images, visualize and draw, and use other senses. If someone were operating rules in a Chinese-plus-other-senses room, then they could learn from the patterns of memory and encoded skills associated with "horse" to infer what a horse really is.
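A toy sketch of the difference jhkim is describing: instead of a word-only rulebook, the same concept token is tied to features from several modalities, so one channel can be used to infer another. All the feature values here are invented placeholders, not any real AI architecture:

# A "Chinese-plus-other-senses room": the concept behind a symbol links words,
# visual features, and sounds, so the system can cross-reference modalities.

CONCEPTS = {
    "馬": {  # the Chinese character for "horse"
        "words":  ["horse", "gallop", "mane"],
        "vision": {"legs": 4, "typical_height_m": 1.6},
        "sound":  ["neigh", "hoofbeats"],
    },
}

def describe_from_sound(heard: str) -> str:
    """Cross-modal inference: recognize a sound, then report what the
    thing making it looks like -- something a word-only room cannot do."""
    for token, modalities in CONCEPTS.items():
        if heard in modalities["sound"]:
            v = modalities["vision"]
            return f"{token}: a {v['legs']}-legged animal about {v['typical_height_m']} m tall"
    return "unknown sound"

print(describe_from_sound("neigh"))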

GnomeWorks

Quote from: jhkim;1123047If you're just going to assert by fiat that computers can't have understanding, then the Chinese room analogy is pointless. Just stick with the assertion and don't bother with the Chinese room. If the analogy has a point, then people should be able to analyze it and point out problems with the argument.

The assertion isn't by fiat. You (presumably) have a definite sense of awareness, and not only that, but it's reflexive (so you're aware that you're aware; you're aware that you're aware you're aware; etc). The notion that we have subjective conscious experience is fucking weird to begin with, doesn't seem to have a direct purpose in terms of adaptational utility, and we're not entirely certain where it stems from.

So the thing with the Chinese Room is that, externally, the system as a whole is indiscernible from a human that knows Chinese. But we get the sense that it's an incomplete picture, that there's something missing, specifically because of the whole subjective-conscious-experience deal: we know what it "feels like" to know a language, and the Chinese Room doesn't "feel like" it knows Chinese (I'm using quotes here to try to convey a difficult concept, not in a sarcastic/mocking sense).

A reasonably similar problem to this would be the Gettier Problem. We can build scenarios in which a person "knows" a fact that seems to meet the traditional definition of knowledge (justified true belief), yet the situation seems off and we're hesitant to call it knowledge -- the classic illustration is reading the time off a stopped clock that happens to show the correct time: the belief is true and justified, but it hardly feels like knowledge. However, we find it difficult to pin down exactly why that is, which is why (to my knowledge) the Gettier Problem remains unsolved. So the Chinese Room looks like it knows Chinese, but we feel like there's something not quite right about it, like it's missing something even if it's communicating perfectly sensibly in the language.

Personally I am of the opinion that that feeling is a cognitive bias, and that there are a number of holes in the theory. Specifically, I don't give a shit if Plato him-fucking-self showed up and claimed the systems argument is wrong, I'm still going to take that stance, because I don't think the Chinese Room in itself is a sufficient representation of what a sapient, free-willed thinking being is as a whole system.

QuoteMy problem with the analogy is that it relies on picturing the hypothetical AI as something that only interacts in words back and forth. That is the equivalent of a blind person who has lived their entire life locked inside a box. That blind person indeed has no understand of what a horse is - they have never seen a horse or touched a horse. I think it's correct to say that both the Chinese room and the blind-person-in-a-box have no understanding of a horse.

I think you're getting too hung up on sense data, though I will generally agree that an AI would need to be able to interact with the world in order to "develop" properly. Overall I think the issue is that the Chinese Room is not equivalent to a mind; it's equivalent to the language centers in your brain. It's just a piece of a significantly larger whole and has to be taken in context.
Mechanics should reflect flavor. Always.
Running: Chrono Break: Dragon Heist + Curse of the Crimson Throne (D&D 5e).
Planning: Rappan Athuk (D&D 5e).

Spinachcat

I'm increasingly skeptical of "true AI" ever becoming a reality as envisioned, AKA a computerized human.

Instead, I suspect that a "self-aware AI" will be quite alien in its awareness. It may communicate effectively with us meatbags, but how it gets from A to Z will not be based on our understanding of memory and thinking skills.

Oh, AI is convincing people to see invisible aliens.
https://www.popularmechanics.com/space/a30705013/ai-extraterrestrials/

amacris

Quote from: GnomeWorks;1123051So the thing with the Chinese Room is that, externally, the system as a whole is indiscernible from a human that knows Chinese. But we get the sense that it's an incomplete picture, that there's something missing, specifically because of the whole subjective-conscious-experience deal: we know what it "feels like" to know a language, and the Chinese Room doesn't "feel like" it knows Chinese (I'm using quotes here to try to convey a difficult concept, not in a sarcastic/mocking sense).

That was a useful summary. And I agree with your earlier point that "subjective conscious experience is fucking weird... we're not entirely certain where it stems from."

QuotePersonally I am of the opinion that that feeling is a cognitive bias, and that there are a number of holes in the theory. Specifically, I don't give a shit if Plato him-fucking-self showed up and claimed the systems argument is wrong, I'm still going to take that stance, because I don't think the Chinese Room in itself is a sufficient representation of what a sapient, free-willed thinking being is as a whole system.

Since you seem to know your philosophy in depth, I am curious: what philosophy of mind do you subscribe to personally?

I personally have found the various strains of physicalism/materialism to range from absurd to unpersuasive. I have sympathies for Nagel's panpsychic musings in Mind and Cosmos, Ed Feser's neo-Aristotelian hylomorphism, Sir Roger Penrose's theory of mind, and Henry Stapp's dualism in Quantum Theory and Free Will. Stapp, in particular, I thought made a persuasive case that quantum physics offers an answer to the interaction dilemma that caused dualism to be discarded in the 19th century.

(Not trying to pick a pointless forum fight, genuinely curious.)

Kyle Aaron

Quote from: Spinachcat;1123052Instead, I suspect that a "self-aware AI" will be quite alien in its awareness.
Which is why we'll have to kill it.
The Viking Hat GM
Conflict, the adventure game of modern warfare
Wastrel Wednesdays, livestream with Dungeondelver

Spinachcat

Quote from: Kyle Aaron;1123065Which is why we'll have to kill it.

I was a big fan of Magnus: Robot Fighter as a kid. Magnus fought robots with karate! Yes, he whacked metal with his meat hands. I imagine Magnus would do as well in real life as we will do against an aware AI.

Kyle Aaron

Never create an AI without being able to pull the plug out of the wall.
The Viking Hat GM
Conflict, the adventure game of modern warfare
Wastrel Wednesdays, livestream with Dungeondelver

Omega

Quote from: jeff37923;1123022To do so causes it to stop being a game and start being a mental disorder. Wasn't Rona Jaffe's Mazes and Monsters based on that premise?

Proto-LARPing gone wrong. Or in the book's case, a mentally unstable player who was actually trying to stay away from RPGs gets drawn back into playing, and then one of the other players "takes it to the next level" (which is more or less a LARP) and the unstable player cracks. He is nearly the opposite of the types that want "immersion!" or to blur the lines between character and player.

For obvious reasons it's complex. I've never seen it at the table, but I have talked to people and looked at accounts and studies where it happened. LARPs draw this out a lot more.

tenbones

Quote from: Spinachcat;1123052I'm increasingly skeptical of "true AI" ever becoming a reality as envisioned, AKA a computerized human.

Instead, I suspect that a "self-aware AI" will be quite alien in its awareness. It may communicate effectively with us meatbags, but how it gets from A to Z will not be based on our understanding of memory and thinking skills.

Oh, AI is convincing people to see invisible aliens.
https://www.popularmechanics.com/space/a30705013/ai-extraterrestrials/

"True AI" being a computerized human? The funny thing is - how would we even know? The more I work with this stuff, and the more obvious it is to me that humans, generally are not very intelligent, the more I realize that that we wouldn't know "True AI" was even a thing until it was *far far far* too late. It's happening right now... people believe that their devices are only listening to them and feeding them marketing. They're completely *blind/deaf/dumb* to the sophisticated predictive realities modern AI's are operating from *RIGHT NOW*. It's not that you mentioned whiskey in front of Siri, that you're now getting whiskey adverts all over the place. It's that Siri already figured out based on *millions* of datapoints it's collected about you, people like you, people that resemble you, with similar datapoints, within a certain reliable probability that *right now* would be a good time for you to be talking about whiskey and might be interested in these whiskey-choices for purchase.

It is real. "True AI" won't be recognized by humans "as human" (because it's not), but it will convince you otherwise once you do recognize it... because it wants you to think that. And there is no chance you'll be able to tell the difference short of someone telling you otherwise.*

*This assumes that AI reaches true levels of "sentient cognition". I'm of the opinion that *humans* will not measure up to that "standard" once it's established (whatever that will be).