                                ###     ###
                                 ###   ###
              ###   ####          ### ###          ###   ####
              ###    ###           #####           ###    ###
              ###    ###            ###            ###    ###
              ###    ###           #####           ###    ###
              ##########          ### ###          ##########
                                 ###   ###
                                ###     ###


                         Underground eXperts United

                                Presents...

     #######  ##  ##       #######    # #    #######  #######  #######
     ##       ##  ##       ##        #####        ##  ##   ##       ##
     ####     ##  ##       ####       # #       ####  #######  #######
     ##       ##  ##       ##        #####        ##       ##  ##
     ##       ##  #######  #######    # #    #######  #######  #######

[ It Is Not Obvious That A Machine Can Think ] [ By The GNN ]

____________________________________________________________________
____________________________________________________________________

                IT IS NOT OBVIOUS THAT A MACHINE CAN THINK

                     by THE GNN/DualCrew-Shining/uXu

It is popular among AI researchers to claim that a mere computer program
can simulate human thinking. The argument is easy enough to understand:
since the mind is nothing more than the human brain, and the human brain
is nothing more than cells, and cells are nothing more... etc., we need
only copy the data of the smallest parts into a computer program and thus
gain a fully working model of the human brain.

If we conceptually connect 'intelligence' with 'thinking' there is no
problem: a pocket calculator 'thinks' within this definition. However, if
we connect 'thinking' with 'consciousness', we enter a different realm. A
pocket calculator has no consciousness; it has no concept of its own
thinking.

Can we gain the same sort of consciousness that we find in a human being
by merely copying the data of the brain into a computer program? Yes, some
say - and repeat the argument: since consciousness is built upon the
construction of the brain, we need only copy the smallest parts into the
program, and so on... and we will get something that is identical to the
human mind.

But unfortunately, this is not a fact. If we want a consciousness like
the one we find in a human being, a computer program will not do. The
reason the scientists believe otherwise is that there is an inherent
ambiguity in the very argument about identity that they use.

To see this, let us distinguish between two different kinds of identity:

A-identity:  The color yellow requires the chemical constitution ABC.
             I.e., if we mix A, B and C we get a yellow paint. However,
             we can obtain the same kind of yellow by mixing G, N and Q.
             ABC and GNQ are two different sorts of chemical
             constitutions, but they give us the same kind of yellow.
             They are A-identical.

B-identity:  Say that I enter a machine that scans my physical
             constitution and constructs a copy of me out of the same
             sorts of particles. The person I would meet after the
             copying would be exactly like me. We would be B-identical:
             our chemical particles would be of the same sort, but they
             would of course not be the same particles (I mean, I am
             using them myself!).

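Put schematically, in LaTeX notation of my own (a sketch, not the essay's
formalism): write $x \sim_A y$ for A-identity and $x \sim_B y$ for
B-identity, with $f(x)$ standing for the observable effect of $x$ and
$\mathrm{const}(x)$ for its physical make-up.

    \[
    x \sim_A y \iff f(x) = f(y), \qquad
    x \sim_B y \iff \mathrm{const}(x) = \mathrm{const}(y)
    \]
    \[
    x \sim_B y \;\Rightarrow\; x \sim_A y, \qquad
    x \sim_A y \;\not\Rightarrow\; x \sim_B y
    \]

B-identity implies A-identity but not conversely: ABC and GNQ agree on $f$
(both give the same yellow) while differing in $\mathrm{const}$.
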
It is obvious that A-identity and B-identity are not of the same kind.
It would be wrong to claim that ABC and GNQ are the same sort of chemical
constitution: that is a contradiction in terms.

But my copy and I are of the same chemical constitution. We _are_ of the
same kind, contrary to the two yellow colors, which merely _look_ as if
they were of the same kind.

When one copies 'the data of a human brain into a computer program', one
gains A-identity, not B-identity, for the simple fact that a computer
program is not like the human brain in its physical constitution. However,
the AI scientists seem to believe that A-identity and B-identity are
virtually the same thing. They mix up their way of gaining A-identity with
the way one gains B-identity.

All properties that belong to me as a person are in the copy of me if
and only if we are B-identical. If we are merely A-identical, this is not
the case. Now, some might say that the whole problem is trivial: of course
the computer program will not be exactly like a human mind. It is, after
all, not a brain but a program!

But does the program have a special sort of consciousness that need not
be identical to the human mind, only similar to it? This is not obviously
so (but then, it is not impossible either). Just because we have A-identity
between x and y, where x has the property p, it does not automatically (or
conceptually) follow that y must have this property too. Therefore, it is
not correct to suppose that a mere computer program that simulates the
human brain has consciousness.
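
The inference the essay rejects can be put in the same sketch notation
(the predicate letter $P$ is again mine, not the author's): B-identity
licenses the transfer of a property from original to copy; A-identity
does not.

    \[
    x \sim_B y \,\wedge\, P(x) \;\Rightarrow\; P(y)
    \]
    \[
    x \sim_A y \,\wedge\, P(x) \;\not\Rightarrow\; P(y)
    \]

Reading $P$ as 'has consciousness', $x$ as the brain and $y$ as the
simulating program, the copying gives at most $x \sim_A y$, so $P(y)$
does not follow.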

Something that would follow quite naturally from the above is a clear
definition of what 'consciousness' is - but that I do not have. But on
the other hand, that is a completely different question.


---------------------------------------------------------------------------
 uXu #392              Underground eXperts United 1997             uXu #392
                   Call THE YOUNG GODS -> +351-1XX-XXXXX
---------------------------------------------------------------------------