                  [ cDc "communications" ASCII-art logo ]

   ...presents...       Can There Be Artificial Intelligence?
                                                         by Tequila Willy

                   >>> a cDc publication.......1994 <<<
                     -cDc- CULT OF THE DEAD COW -cDc-
  ____       _     ____       _     ____       _     ____       _     ____
 |____digital_media____digital_culture____digital_media____digital_culture____|

     Since the dawn of history, men have dreamed of other forms of
intelligent life.  There is something within the nature of mankind to reach
out, to become like gods.  Today, in our technologically advanced society,
the potential is right around the corner.  Even in this modern world of
technological marvels, there are many hurdles to overcome.  If the
technological boundaries are overcome, there are still those who believe
that no man-made device will think like a human being.  The doubts of the
unbelievers should fade from the memories of the human race as the first
machines begin to think.

     Philosophers and scientists alike have long been questing for
artificial intelligence.  It is only in this century that the goal has
become even feasible.  When looking for artificial intelligence, a
researcher must look inward before starting anything else.  The human mind
and soul are two things of which we have very little knowledge.  How our
brains work, and why we are able to think, are among the most important
questions in our society.  When we are born, we are self-aware, and it is a
general belief that self-awareness is only possible in humans because of
the way we are born.  In fact, most people never pay any attention to what
happens during the gestation period.  It has been shown that once the brain
is developed, it immediately starts to process information.  It cannot be
proven either way that there is self-awareness at that stage.  It could
turn out, in the end, that computers designed for thought will have to go
through a gestation process of their own and learn just like a child.  The
opposition to the theory of artificial intelligence, and the arguments
against AI, have helped to move research ahead by pointing out flaws in AI
theories.

     There is a lot of technical material presented below, so some terms
should be explained before continuing.  The first term is the serial
computer, a computer distinguished by its capability to handle only one
operation at a time.  The second term is parallel processing, a method of
computing in which more than one operation can be handled at a time
(Churchland and Churchland 35).  For example, give a task to two computers,
one parallel and one serial: the serial computer attacks the problem one
step at a time, taking a large amount of time, while the parallel computer
breaks the task down into simpler operations and executes them concurrently
on the separate nodes of the processor.  The result is that the parallel
processor, able to do many things at once, finishes the task in a fraction
of the time.  The third term, used often, is a computing architecture known
as the neural network.  A neural network is a system of processors, or
nodes of a processor, linked to other nodes in the way neurons in the brain
are connected (35).  In a neural network, the strengths of the connections
from the input of the network to the output give it a more humanlike
ability to operate on a more-than-binary basis (35).  To clarify, a human
neuron is capable of firing its electrical charge at many different levels,
with each level signifying a different thing (36).  This allows far greater
variety in the information the computer can pass along.  The last term used
in this paper is classical artificial intelligence (or AI for short):
the school of AI which held that, given a powerful enough computer and
properly crafted programs, you could get a machine that would be able to
think (34).

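The serial/parallel contrast above can be sketched in a few lines of modern
code.  Everything here (the prime-counting task, the worker count, the
function names) is my own illustration, not anything from the original
article:

```python
# Illustrative sketch only: the task and names are invented.
from concurrent.futures import ThreadPoolExecutor

def count_primes(lo, hi):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def serial(limit):
    # A serial computer: one stream of operations, one step at a time.
    return count_primes(0, limit)

def parallel(limit, workers=4):
    # A parallel machine: break the task into simpler pieces and hand
    # each piece to a separate node for concurrent execution.
    step = limit // workers
    pieces = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda p: count_primes(*p), pieces))

print(serial(1000), parallel(1000))   # both count the same 168 primes
```

In this sketch Python threads merely stand in for the nodes of a parallel
processor; the point is how the task decomposes into concurrent pieces, not
any real speedup from threads themselves.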
     John R. Searle, in his essay in the January 1990 _Scientific
American_, writes that machines, no matter what their power or internal
architecture, will not be able to think.  His main argument is what he
calls the Chinese room experiment (Searle 26).  The experiment goes like
this: first you lock a person in a room whose door has two mail slots, one
in and one out.  Now and then a pile of Chinese symbols comes in through
the slot.  The person inside the room also has a rule book that explains,
in a language he understands, what he should do with the symbols coming
into the room; having used the rule book to manipulate the symbols, he
drops the rearranged symbols down through the out slot (26).  Searle's
point is that if the man does not come to understand Chinese by running
what amounts to a computer program for understanding Chinese, then no
computer would either.  This means that simply manipulating symbols is not
enough to create cognition, or thinking, and therefore, according to him,
it is impossible for a computer to think (26).

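The rule book in the Chinese room can be caricatured as a lookup table.
This toy program is my illustration (Searle gives no code, and the symbol
pairings are invented); it shows how a purely syntactic matcher can produce
sensible-looking replies with no understanding at all:

```python
# Invented symbol pairings; the program only matches shapes.
RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会思考吗": "当然会",    # "Can you think?" -> "Of course"
}

def chinese_room(symbols_in):
    # Look up the incoming squiggle, emit the prescribed squiggle.
    # Nothing in this function knows what any symbol means.
    return RULE_BOOK.get(symbols_in, "请再说一遍")  # fallback: "Say it again"

print(chinese_room("你好吗"))   # a fluent-looking reply, zero understanding
```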
     He then breaks his argument down into axioms from which he draws his
conclusions.  The first axiom is this: "Computer programs are formal
(syntactic)" (27).  Syntactic means purely formal.  He explains the axiom
further with an example: "A computer processes information by first
encoding it into the symbolism that the computer uses and then manipulating
the symbols through a set of precisely stated rules.  These rules
constitute the program" (27).  Before introducing his second axiom he
points out that symbols and computer programs are abstract entities; in
computers the symbols can stand for anything the programmer wants.  So,
according to Searle, the program has syntax, yet it has no semantics.  This
leads to his next axiom: "Human minds have mental contents (semantics)"
(27).  His third axiom is this: "Syntax by itself is neither constitutive
of nor sufficient for minds" (27).  His explanation of that axiom is quite
simple: merely manipulating symbols is not enough to guarantee knowledge of
what they mean.  Later in his paper he poses one more axiom, "Brains cause
minds" (29).  In other words, thought is dependent on the biological
processes of the human brain.

     The first conclusion that he draws from his axioms is "Programs are
neither constitutive of nor sufficient for minds" (27).  This conclusion is
clear enough: programs alone cannot give computers minds.  The second
conclusion is: "Any other system capable of causing minds would have to
have causal powers equivalent to those of brains" (29).  His illustration
of this conclusion states that for an electrical engine to drive a car as
fast as a gas engine does, the electrical engine must produce an energy
output at least as high as the gas engine's (29).  His third conclusion is
that "Any artifact that produced mental phenomena, any artificial brain,
would have to be able to duplicate the specific causal powers of brains,
and it could not do that by simply running a program" (29).  The fourth
conclusion that he draws from his axioms is this: "The way that human
brains actually produce mental phenomena cannot be solely by virtue of
running a computer program" (29).

     The argument presented by John R. Searle is quite formidable, with his
Chinese room example and the further arguments he goes on to present.  Some
of his conclusions and axioms, however, although they look sound at first,
are deceptively untrue.  An analysis of the arguments will show where they
are faulty.

     First, Searle's Chinese room example applies only to
symbol-manipulating computers.  For S-M machines the prospect of ever being
able to think is highly doubtful, if only because their architecture is
incomparable to the structure of the human brain, and the human brain is
the only thing we know to definitely possess intelligence.  The problem
with Searle's Chinese room example, at least in reference to parallel
processing and neural-networked machines, is that they do not work the way
S-M machines work.  They use a method of processing called vector
processing (Churchland and Churchland 36).  The way it works is that when
you send a combination of neural activations into one level of the net, it
passes through the network along certain vectors determined by the
activation pattern and then comes out as another, unique pattern (36).
This process is much like the way the human brain is believed to work.  In
this type of processing, symbols are never manipulated in the fashion
presented in the Chinese room argument; symbol manipulation may or may not
turn out to be one of the cognitive skills such a system displays as a
characteristic (36).  Therefore, the Chinese room is inapplicable to the
argument.  Searle argues against parallel processing by presenting what he
calls a Chinese gymnasium (Searle 28).  The gist of the example is that
instead of one man in a room, the room is full of men in a parallel
architecture.  He explains that none of them understands Chinese, and the
only thing accomplished by the extra men is faster output, without any
comprehension (28).  The problem with this argument is that the individual
men do not need to know Chinese: a single neuron doesn't know any language
either, but the brain as a whole probably does (Churchland and Churchland
37).  For his Chinese gymnasium example to be fair, the gym would have to
hold the entire populations of 10,000 Earths (37).  There is no way to
prove there is no comprehension of Chinese in a network of that magnitude.
Essentially, what you would have in a room that size, with that many
people, is a gigantic, slow brain.  Mr. Searle argues against this view by
saying that it really doesn't matter: if nobody in the system understands
Chinese, neither will the entire system (Searle 29).  The answer to that
objection is that it is possible, with the right architecture, to teach a
computer Chinese.  If the computer's structure were brainlike, the computer
would be no different from a Chinese child learning to communicate.

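The vector processing the Churchlands describe can be sketched as follows.
The weights and activation patterns here are invented for illustration; the
point is that a pattern flows through connection strengths into a new
pattern, with no symbol lookup anywhere along the way:

```python
import math

def squash(x):
    # A graded, more-than-binary response, like a neuron firing at many
    # different strengths rather than simply on or off.
    return 1.0 / (1.0 + math.exp(-x))

def forward(vector, weights):
    """One layer: each output node sums its weighted inputs and fires
    at a level set by that sum."""
    return [squash(sum(w * v for w, v in zip(row, vector)))
            for row in weights]

W = [[ 0.9, -0.4, 0.2],    # invented weights: 3 input nodes feeding
     [-0.3,  0.8, 0.5]]    # 2 output nodes

pattern_in  = [1.0, 0.0, 1.0]          # activation pattern on one level
pattern_out = forward(pattern_in, W)   # a new, unique pattern emerges
print(pattern_out)
```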
     Searle's arguments for not believing that computers are capable of
human thought rest on several simple axioms that he believes hold for all
types of computers.  The axioms he presents are sound, all except the last:
"Brains cause minds" (29).  In that axiom he declares that minds are only
capable of existing in brains, because brains are biological organs, with
neurotransmitters and so on (29).  This premise is not necessarily true.
The Churchlands present an example of how that axiom can fail.  Carver A.
Mead, a researcher at the California Institute of Technology, and his
colleagues used analog VLSI (Very Large Scale Integration) techniques to
build an artificial retina (Churchland and Churchland 37).  The machine is
not a computer simulation of a retina, but an actual real-time
information-processing unit that responds to light (37).  The circuitry is
based on the actual organ in a cat, and the output is remarkably similar to
the output of the cat's retina (37).  The process works entirely without
neurochemicals, so there really is no need for them; hence the supposition
that a mind can exist only in a biological brain is absurd.

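The kind of computation such a retina performs can be caricatured
digitally.  Mead's chip works in analog silicon; this one-dimensional
center-surround sketch is only my illustration of the idea (each cell
reports how its own light level differs from its neighbors', so edges stand
out and uniform light cancels away), not his design:

```python
# Center-surround sketch: invented, digital stand-in for an analog retina.
def center_surround(light):
    out = []
    for i in range(1, len(light) - 1):
        surround = (light[i - 1] + light[i + 1]) / 2.0  # neighbor average
        out.append(light[i] - surround)                 # cell's report
    return out

uniform = [5.0] * 6
edge    = [1.0, 1.0, 1.0, 9.0, 9.0, 9.0]
print(center_surround(uniform))  # zero everywhere: nothing to report
print(center_surround(edge))     # strong responses only at the edge
```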
     The conclusions that he draws from those axioms are not without flaws.
His first conclusion is that "Programs are neither constitutive of nor
sufficient for minds" (Searle 29).  In a narrow sense it is probably
correct, at least for classical AI.  The new artificial intelligence,
however, is a merging of hardware and software in a synergistic
relationship: programs will not solely handle the challenge of
intelligence, but the software will play a significant part in it.  If you
look at the rest of his conclusions, you will find that they really apply
only to formal programs alone, not to software/hardware synergies, so they
are irrelevant to the argument.  With his second conclusion, he essentially
concedes that there is a very real possibility of an artificial
intelligence, as long as its causal powers are at least those of the brain.
Modeling computers after the human brain makes it probable that this can be
done.

     It is improbable that there will be any thinking machines for many
years.  The future holds many of the keys to this process: a greater
understanding of the mechanics of thought and memory is necessary before
this end is possible.  Classical artificial intelligence is obviously not
going to work, for the reasons stated earlier in the paper.  The answer
lies instead in the realm of parallel processing and neural networks.  It
has been shown that very complicated and fast matrices of electronics can
replicate biological functioning, as in the example of the artificial
retina (Churchland and Churchland 37).  The possibility lies in combining
the processing abilities of complex computer architectures with the
increasingly sophisticated software needed to harness that power.

     We may find a solution within the psychology of childhood development.
When a child is born it is a blank slate; it does not yet have any real
formed concepts, such as those of syntax and semantics.  This is the way we
should view a newly made computer of the kind meant to model the human.
Everything must start from scratch, and therefore it is necessary to teach
the computer as you would a baby.  This process is harder than teaching a
newborn child, since children are born with cognizance, but with time, and
with knowledge of what a computer needs to learn to become self-aware, it
is possible.  There are currently experiments going on in which a doctor
and an army of assistants are building a base of language and entering it,
with referents to what the words mean, into the computer.  They are
essentially teaching the computer manually what is normally experienced by
a child.  A single word can have an immense number of referents, such as
what it is, what it can be compared to, and what connotations are generally
associated with it.  A word like "duck," for example, could take weeks of
compiling information, since you have to put together not only the concept
of "duck" but also that of a bird, of colors, of feathers, the basics of
anatomy, and the popular notions associated with the word "duck."  With
each layer of explanation you encounter, you find a whole new level of
terms to define.  It is well known that even the least intelligent human
being carries around a simply astonishing amount of information.  The
hardest things to define are on the simplest level of understanding; the
general hope of researchers is that, given enough of the complex composite
concepts, the computer will be able to use the whole of its knowledge to
puzzle out the simple pieces.  This idea seems entirely logical, since it
is something that human beings try to do every single day.  Humans are the
same in that respect: if we knew these simple truths, philosophers and
other scientists would be unnecessary, as we would already know all those
things.  To date, the scientists trying this experiment have succeeded in
inputting almost all the knowledge that an average three-year-old child
has.  The strange thing is that in a system like this, the computer seems
to have a curious nature.  This would lead one to think that the machine is
cognizant, although in reality that is most probably not the case: the
programs that compose this machine are simply calling for more input to
make it run more efficiently.  Although this is not real thought yet, one
would suppose it will become possible once the computer's electronic
architecture is sufficient for it to begin changing its own programs.  That
would mean it was working enough like a brain to revise its beliefs, since
beliefs are nothing less than knowledge itself.

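The referent-entry process described above can be sketched as a tiny
hand-built knowledge base.  The structure, link names, and entries are my
invention, not the actual experiment's:

```python
# Invented sketch of hand-entering words with their referents.
knowledge = {}

def teach(word, is_a=(), has=(), connotes=()):
    # Store a concept with links to what it is, what it resembles or
    # possesses, and what it connotes.
    knowledge[word] = {"is_a": list(is_a), "has": list(has),
                       "connotes": list(connotes)}

teach("duck", is_a=["bird"], has=["feathers", "bill"],
      connotes=["pond", "quack"])
teach("bird", is_a=["animal"], has=["feathers", "wings"])

def undefined_referents():
    # Every referent that has no entry of its own yet: the "whole new
    # level of terms to define" that the text mentions.
    mentioned = {r for entry in knowledge.values()
                 for links in entry.values() for r in links}
    return sorted(mentioned - knowledge.keys())

print(undefined_referents())
# -> ['animal', 'bill', 'feathers', 'pond', 'quack', 'wings']
```

Each new word pulls in further undefined referents, which is exactly the
layer-upon-layer problem the paragraph describes.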
     The brain is a gigantic information-processing machine, which is to
say a biological form of computer.  The implication calls for a rational
person to assume that if it is possible for a biological machine to think,
it follows that a machine of a non-biological (i.e. electronic) nature
should also be able to think, at least if the electronic brain is built to
the equivalent of a human brain.

     Technology has advanced exponentially in the last thirty years, but we
are still many years away from the first truly cognizant machines.  Because
of the arguments brought up here, it is impossible to prove there will be
cognizant machines, at least in a deductive sense.  In an inductive sense,
it can be said there is a strong probability that there will one day be an
intelligent machine.  The answer almost certainly does not lie in the realm
of computer programs in the manner of classical artificial intelligence,
since the computer architecture necessary for thought is simply not there
in the traditional symbol-manipulating machine.  That part of the argument
is not in doubt; it is when you come into the hardware/software synergy
arena that the battle becomes heated.  Mr. Searle presents some very strong
arguments against the possibility, but they are not sufficient to destroy
the possibility of computer thought.  In predicting the future there can be
no definite proof, but if science and technology can rise to the challenge
of replicating the function of a human brain, there will eventually be a
computer that can think.

Works Cited:

Churchland, Paul M., and Patricia Smith Churchland.  "Could a Machine
     Think?"  _Scientific American_ Jan. 1990: 32-37.

Searle, John R.  "Is the Brain's Mind a Computer Program?"  _Scientific
     American_ Jan. 1990: 26-31.

 _______  __________________________________________________________________
/ _   _  \|Demon Roach Undrgrnd.806/794-4362|Kingdom of Shit.....806/794-1842|
 ((___))  |Cool Beans!..........415/648-PUNK|Polka AE {PW:KILL}..806/794-4362|
 [ x x ]  |Metalland Southwest..713/579-2276|ATDT East...........617/350-STIF|
  \   /   |The Works............617/861-8976|Ripco ][............312/528-5020|
  (' ')   |            Save yourself!  Go outside!  DO SOMETHING!            |
   (U)    |==================================================================|
  .ooM    |Copyright (c) 1994 cDc communications and Tequila Willy.          |
\_______/ |All Rights Reserved.                               11/01/1994-#289|