Science fiction has given us plenty of stories about robots
or computers that become self-aware and go on a rampage at the expense of
humanity. As we make more advances in the realm of artificial
intelligence, this seems like a valid thing to worry about. But science fiction
has also given us the Three Laws of Robotics. Isaac Asimov came up with the Three
Laws in an early science fiction story, and he liked them so
well that he used them as fundamental background in all his later stories
about robots. The Three Laws caught on, and now it is not just science fiction
fans who talk about them. They seem to be taken seriously in popular culture,
not only for robots but for artificial intelligence in general.
Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Three Laws have been used in
the Star Trek canon as well as other science fiction stories. They are famous. But
Asimov’s laws are really only “laws” in fiction. No convention of
roboticists has ever adopted them. Since Asimov came up with his laws, others
have proposed rules and principles of robotics, but
Asimov’s rules are the ones people remember.
If you are a human, Asimov’s laws
are nice and comforting. They make robots safe. The problem is: how do you
hardwire these rules into a robot (or an AI)? Nobody has figured out how to do
that. How do you write a program that encompasses every variation of “doing harm”?
And how is it to be determined whether a robot is doing harm? Of course, it is easy
to say “I know it when I see it.” But you can’t write “I know it when I see it”
into code. Also, if an artificial intelligence is self-aware (which is really
what we are talking about here), there is nothing to keep it from making its own
rules and rewriting its own programs.
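To make the difficulty concrete, here is a minimal Python sketch of what a naive “First Law check” might look like. Everything in it is hypothetical: the `Action` type, the `harms_human` predicate, and the keyword list are illustrative assumptions, not any real robotics API. The point is that the hard part, deciding what actually counts as harm, collapses into a stub nobody knows how to fill in.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """Hypothetical description of something a robot might do."""
    description: str


# A toy stand-in for "doing harm": a keyword match. It is obviously wrong,
# but it is about the best a hand-written rule can do.
HARM_KEYWORDS = {"injure", "strike", "cut", "poison"}


def harms_human(action: Action) -> bool:
    """The predicate the First Law requires.

    A real version would need a model of the world and of consequences:
    indirect harm, harm through inaction, harm nobody anticipated.
    """
    return any(word in action.description.lower() for word in HARM_KEYWORDS)


def first_law_allows(action: Action) -> bool:
    """Allow the action only if it does not harm a human.

    Note what is missing: the "through inaction" clause would mean
    evaluating every action NOT taken and predicting its outcome.
    """
    return not harms_human(action)


if __name__ == "__main__":
    for desc in ("cut the cake", "withhold the patient's medication"):
        print(f"{desc!r}: allowed={first_law_allows(Action(desc))}")
```

Running this flags “cut the cake” as a First Law violation (a false positive) while letting “withhold the patient's medication” through (a false negative). No amount of keyword tuning fixes that; the predicate needs an understanding of consequences, which is exactly what nobody has figured out how to write.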
Comforting as Asimov’s laws are,
we may have to come up with other ways to keep ourselves safe from rogue robots.