At what point will computers become sentient? At what point
will they become sapient? What’s the difference?
Webster’s defines sentient as “1: Responsive to or
conscious of sense impressions. 2: Aware. 3: Finely sensitive in perception or
feeling”.
Webster’s defines sapient as “possessing or expressing great
sagacity” (wisdom).
Science fiction stories speculate about humanity meeting a
sentient or a sapient species. Some stories use the two terms interchangeably,
but they are quite different. A sentient species is one that is aware of its
environment: it is sensitive to its surroundings and to the other organisms
around it. As a standard, this is not a high bar. There are even plants that
can sense their surroundings and react accordingly; they could be called
sentient.
Meeting alien life that is sapient is where things get
interesting. This is where things like the “Prime Directive” in Star Trek
would come into play. The idea that sapient life should be allowed to evolve
without interference from more advanced civilizations seems like a
reasonable rule. The scientific name of our species would indicate that
we are sapient: Homo sapiens, Homo meaning “man” and sapiens meaning “wise.”
There were other species of the genus Homo, all now extinct (Homo habilis,
Homo erectus, Homo neanderthalensis), but the one that survived is Homo
sapiens: the wise human (well, that’s debatable). It is all well and good for
us to call ourselves wise, but this brings up a problem. Alien life is, well,
alien. We have our own idea of what wisdom is; that is, we think it’s like
us. But what if it is not like us? Will we recognize wisdom if it is radically
different?
The same could be said of artificial intelligence. We have
the Turing Test, a hypothetical test designed by computer legend Alan Turing.
In this test, a person would be in blind communication with two others, one
human and one machine. If the person could not reliably tell which one was the
machine, the machine would then have passed the test. But what does that really
tell us? It might just tell us that this machine could imitate humans. Is that
sapience? Like the alien, machine intelligence might develop in a way so
foreign to us (or so advanced) as to be unrecognizable. The fact that humans
designed it might not make a difference. It seems to me that, after
artificial intelligence attains a certain level of complexity, one has to
assume it will evolve just as life evolves, that is, unpredictably (although
much faster).
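For the curious, here is a toy sketch of the imitation game in Python. Every name in it is invented for illustration (Turing described a thought experiment, not code): a judge questions two hidden respondents and guesses which one is the machine. When the machine’s answers are indistinguishable from the human’s, the judge can do no better than a coin flip, and the machine “passes.”

import random

# Toy sketch of the imitation game. All names here are invented for
# illustration; the test itself is a thought experiment, not a benchmark.

def human_reply(question):
    return "I'd say it depends on the context."  # stand-in for a person

def machine_reply(question):
    return "I'd say it depends on the context."  # a perfect imitator, by construction

def run_trial(judge):
    # One round: the judge sees two blinded answers and guesses
    # which respondent (index 0 or 1) is the machine.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # hide who is who
    answers = [reply("What is wisdom?") for _, reply in respondents]
    return respondents[judge(answers)][0] == "machine"

def judge(answers):
    # With indistinguishable answers, the best a judge can do is guess.
    return random.randrange(2)

trials = 10000
hits = sum(run_trial(judge) for _ in range(trials))
print("machine identified in %.1f%% of trials" % (100.0 * hits / trials))
# A result near 50% means the judge cannot reliably tell: the machine passes.

Note that “passing” is defined entirely by the judge’s failure to tell the difference, which measures imitation of humans, not sapience.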
What makes a being (natural or machine) a person? Is it
sapience? Is it self-awareness? Webster’s notwithstanding, do we really even
know what those terms mean when we are talking about something alien to
ourselves? Advanced artificial intelligence won’t think like we do and
therefore won’t act like we expect. We
and AI will not be able to conceive of ourselves, the world, or each other in
the same way. If the day ever comes when artificial intelligence exterminates the
human race, it won’t be because the machines are evil. It will likely just be a
simple misunderstanding.