An even more radical approach to the question "Can a machine think?" is that of the American philosopher John Searle, who holds that it is impossible for a machine to think. To argue this, in 1980 he devised the "Chinese Room" thought experiment.

Imagine that a person who does not know Chinese is locked in a room with a set of rules, written in his native language, for manipulating the characters of the Chinese language in a certain way. These rules, if followed scrupulously, make it possible to respond satisfactorily to every possible question. Sheets with questions written in Chinese are passed into the room. Using the instructions written in his own language, the person in the room can fill in sheets with answers in Chinese.
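To make the mechanism of the room concrete, here is a minimal sketch of a program that does nothing but match incoming symbols against a rule book and copy out the prescribed reply. The rule book and the Chinese phrases are invented for illustration; they are not from Searle's paper. The point is that the program manipulates symbols without assigning them any meaning.

```python
# A deliberately naive illustration of the Chinese Room: the "rule book" maps
# an input string of Chinese characters to a prescribed output string.
# The rules and phrases below are invented for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "会，我说中文。",  # "Do you speak Chinese?" -> "Yes, I speak Chinese."
}

def room(question: str) -> str:
    """Return the answer the rule book prescribes, purely by symbol matching.

    Nothing in this function assigns any meaning to the characters it handles.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗？"))  # prints the canned reply; no understanding involved
```

From the outside, the replies look like those of a competent Chinese speaker; inside, there is only lookup and copying, which is exactly the asymmetry Searle exploits.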

John Searle

Those who are outside the room and see the answers correctly formulated in Chinese will imagine that a person who knows Chinese is inside. The person inside the room, however, knows very well that he does not know Chinese. According to Searle, therefore, even if one day there were a machine that gave us the impression of being able to think, holding a conversation with us, we could not conclude that it is actually thinking, because it would merely be carrying out a series of guided operations, just like the fake Chinese speaker in the room. Such a machine would still lack what Searle calls "mental content", a concept close to that of "consciousness".

The moral of Searle's example is clear: the rules of a program give us only a syntax, and syntax is insufficient to arrive at a semantics. The program can simulate, but it is incapable of understanding and of having mental states and events; the device cannot assign semantic content to the symbols it manipulates. Searle's explanation is that the human mind is inherently capable of producing intentional mental states; this intentionality is original, unlike that simulated by a program, which is derived, or "borrowed", from us.

We therefore place a machine in the intentional domain not because it has particular intrinsic characteristics, but because attributing intentional predicates to it helps us to predict its behavior; a machine will then count as a person when it becomes a suitable target of a certain attitude. Intelligence, moreover, is also a status that is assigned: in order to be intelligent, a machine must be accepted by us, socialized, assuming a role similar to that of the other individuals in our community of life.

So far we have investigated, in scientific and philosophical terms, how a "thinking" machine can be defined according to Turing, and how much weight to give to Searle's evidence to the contrary.

The question that arises, therefore, is: what kind of definition can we give to the term "human"? What are the categories? And why? But we will deal with this in the next article, see ya 😉
