ISAAC ASIMOV
ROBOT VISIONS
ILLUSTRATIONS BY
RALPH McQUARRIE
To Gardner Dozois and Stan Schmidt, colleagues and friends
CONTENTS
Introduction: The Robot Chronicles
STORIES
Robot Visions
Too Bad!
Robbie
Reason
Liar!
Runaround
Evidence
Little Lost Robot
The Evitable Conflict
Feminine Intuition
The Bicentennial Man
Someday
Think!
Segregationist
Mirror Image
Lenny
Galley Slave
Christmas Without Rodney
ESSAYS
Robots I Have Known
The New Teachers
Whatever You Wish
The Friends We Make
Our Intelligent Tools
The Laws Of Robotics
Future Fantastic
The Machine And The Robot
The New Profession
The Robot As Enemy?
Intelligences Together
My Robots
The Laws Of Humanics
Cybernetic Organism
The Sense Of Humor
Robots In Combination
Introduction:
The Robot Chronicles
What is a robot? We might define it most briefly and comprehensively as “an
artificial object that resembles a human being.”
When we think of resemblance, we think of it, first, in terms of appearance. A
robot looks like a human being.
It could, for instance, be covered with a soft material that resembles human
skin. It could have hair, and eyes, and a voice, and all the features and
appurtenances of a human being, so that it would, as far as outward appearance
is concerned, be indistinguishable from a human being.
This, however, is not really essential. In fact, the robot, as it appears in
science fiction, is almost always constructed of metal, and has only a
stylized resemblance to a human being.
Suppose, then, we forget about appearance and consider only what it can do. We
think of robots as capable of performing tasks more rapidly or more
efficiently than human beings. But in that case any machine is a robot. A
sewing machine can sew faster than a human being, a pneumatic drill can
penetrate a hard surface faster than an unaided human being can, a television
set can detect and organize radio waves as we cannot, and so on.
We must apply the term robot, then, to a machine that is more specialized than
an ordinary device. A robot is a computerized machine that is capable of
performing tasks of a kind that are too complex for any living mind other than
that of a man, and of a kind that no non-computerized machine is capable of
performing.
In other words, to put it as briefly as possible:
robot = machine + computer
Clearly, then, a true robot was impossible before the invention of the
computer in the 1940s, and was not practical (in the sense of being compact
enough and cheap enough to be put to everyday use) until the invention of the
microchip in the 1970s.
Nevertheless, the concept of the robot—an artificial device that mimics the
actions and, possibly, the appearance of a human being—is old, probably as old
as the human imagination.
The ancients, lacking computers, had to think of some other way of instilling
quasi-human abilities into artificial objects, and they made use of vague
supernatural forces and depended on god-like abilities beyond the reach of
mere men.
Thus, in the eighteenth book of Homer’s Iliad, Hephaistos, the Greek god of
the forge, is described as having for helpers, “a couple of maids...made of
gold exactly like living girls; they have sense in their heads, they can speak
and use their muscles, they can spin and weave and do their work....” Surely,
these are robots.
Again, the island of Crete, at the time of its greatest power, was supposed to
possess a bronze giant named Talos that ceaselessly patrolled its shores to
fight off the approach of any enemy.
Throughout ancient and medieval times, learned men were supposed to have
created artificially living things through the secret arts they had learned or
uncovered—arts by which they made use of the powers of the divine or the
demonic.
The medieval robot-story that is most familiar to us today is that of Rabbi
Loew of sixteenth-century Prague. He is supposed to have formed an artificial
human being—a robot—out of clay, just as God had formed Adam out of clay. A
clay object, however much it might resemble a human being, is “an unformed
substance” (the Hebrew word for it is “golem”), since it lacks the attributes
of life. Rabbi Loew, however, gave his golem the attributes of life by making
use of the sacred name of God, and set the robot to work protecting the lives
of Jews against their persecutors.
There was, however, always a certain nervousness about human beings involving
themselves with knowledge that properly belongs to gods or demons. There was
the feeling that this was dangerous, that the forces might escape human
control. This attitude is most familiar to us in the legend of the “sorcerer’s
apprentice,” the young fellow who knew enough magic to start a process going
but not enough to stop it when it had outlived its usefulness.
The ancients were intelligent enough to see this possibility and be frightened
by it. In the Hebrew myth of Adam and Eve, the sin they commit is that of
gaining knowledge (eating of the fruit of the tree of knowledge of good and
evil; i.e., knowledge of everything) and for that they were ejected from Eden
and, according to Christian theologians, infected all of humanity with that
“original sin.”
In the Greek myths, it was the Titan, Prometheus, who supplied fire (and
therefore technology) to human beings and for that he was dreadfully punished
by the infuriated Zeus, who was the chief god.
In early modern times, mechanical clocks were perfected, and the small
mechanisms that ran them (“clockwork”)—the springs, gears, escapements,
ratchets, and so on—could also be used to run other devices.
The 1700s was the golden age of “automatons.” These were devices that could,
given a source of power such as a wound spring or compressed air, carry out a
complicated series of activities. Toy soldiers were built that would march;
toy ducks that would quack, bathe, drink water, eat grain and void it; toy
boys that could dip a pen into ink and write a letter (always the same letter,
of course). Such automata were put on display and proved extremely popular
(and, sometimes, profitable to the owners).
It was a dead-end sort of thing, of course, but it kept alive the thought of
mechanical devices that might do more than clockwork tricks, that might be
more nearly alive.
What’s more, science was advancing rapidly, and in 1798, the Italian
anatomist, Luigi Galvani, found that under the influence of an electric spark,
dead muscles could be made to twitch and contract as though they were alive.
Was it possible that electricity was the secret of life?
The thought naturally arose that artificial life could be brought into being
by strictly scientific principles rather than by reliance on gods or demons.
This thought led to a book that some people consider the first piece of modern
science fiction—Frankenstein by Mary Shelley, published in 1818.
In this book, Victor Frankenstein, an anatomist, collects fragments of freshly
dead bodies and, by the use of new scientific discoveries (not specified in
the book), brings the whole to life, creating something that is referred to
only as the “Monster” in the book. (In the movie, the life principle was
electricity.)
However, the switch from the supernatural to science did not eliminate the
fear of the danger inherent in knowledge. In the medieval legend of Rabbi
Loew’s golem, that monster went out of control and the rabbi had to withdraw
the divine name and destroy him. In the modern tale of Frankenstein, the hero
was not so lucky. He abandoned the Monster in fear, and the Monster, with an
anger that the book all but justifies, in revenge killed those Frankenstein
loved and, eventually, Frankenstein himself.
This proved a central theme in the science fiction stories that have appeared
since Frankenstein. The creation of robots was looked upon as the prime
example of the overweening arrogance of humanity, of its attempt to take on,
through misdirected science, the mantle of the divine. The creation of human
life, with a soul, was the sole prerogative of God. For a human being to
attempt such a creation was to produce a soulless travesty that inevitably
became as dangerous as the golem and as the Monster. The fashioning of a robot
was, therefore, its own eventual punishment, and the lesson, “there are some
things that humanity is not meant to know,” was preached over and over again.
No one used the word “robot,” however, until 1920 (the year, coincidentally,
in which I was born). In that year, a Czech playwright, Karel Capek, wrote the
play R.U.R., about an Englishman, Rossum, who manufactured artificial human
beings in quantity. These were intended to do the arduous labor of the world
so that real human beings could live lives of leisure and comfort.
Capek called these artificial human beings “robots,” which is a Czech word for
“forced workers,” or “slaves.” In fact, the title of the play stands for
“Rossum’s Universal Robots,” the name of the hero’s firm.
In this play, however, what I call “the Frankenstein complex” was made several
notches more intense. Where Mary Shelley’s Monster destroyed only Frankenstein
and his family, Capek’s robots were presented as gaining emotion and then,
resenting their slavery, wiping out the human species.
The play was produced in 1921 and was sufficiently popular (though when I read
it, my purely personal opinion was that it was dreadful) to force the word
“robot” into universal use. The name for an artificial human being is now
“robot” in every language, as far as I know.
Through the 1920s and 1930s, R.U.R. helped reinforce the Frankenstein complex,
and (with some notable exceptions such as Lester del Rey’s “Helen O’Loy” and
Eando Binder’s “Adam Link” series) the hordes of clanking, murderous robots
continued to be reproduced in story after story.
I was an ardent science fiction reader in the 1930s and I became tired of the
ever-repeated robot plot. I didn’t see robots that way. I saw them as
machines—advanced machines —but machines. They might be dangerous but surely
safety factors would be built in. The safety factors might be faulty or
inadequate or might fail under unexpected types of stresses, but such failures
could always yield experience that could be used to improve the models.
After all, all devices have their dangers. The discovery of speech introduced
communication—and lies. The discovery of fire introduced cooking—and arson.
The discovery of the compass improved navigation—and destroyed civilizations
in Mexico and Peru. The automobile is marvelously useful—and kills Americans
by the tens of thousands each year. Medical advances have saved lives by the
millions—and intensified the population explosion.
In every case, the dangers and misuses could be used to demonstrate that
“there are some things humanity was not meant to know,” but surely we cannot
be expected to divest ourselves of all knowledge and return to the status of
the australopithecines. Even from the theological standpoint, one might argue
that God would never have given human beings brains to reason with if He
hadn’t intended those brains to be used to devise new things, to make wise use
of them, to install safety factors to prevent unwise use—and to do the best we
can within the limitations of our imperfections.
So, in 1939, at the age of nineteen, I determined to write a robot story about
a robot that was wisely used, that was not dangerous, and that did the job it
was supposed to do. Since I needed a power source I introduced the “positronic
brain.” This was just gobbledygook but it represented some unknown power
source that was useful, versatile, speedy, and compact—like the as-yet
uninvented computer.
The story was eventually named “Robbie,” and it did not appear immediately,
but I proceeded to write other stories along the same line—in consultation
with my editor, John W. Campbell, Jr., who was much taken with this idea of
mine—and eventually, they were all printed.
Campbell urged me to make my ideas as to the robot safeguards explicit rather
than implicit, and I did this in my fourth robot story, “Runaround,” which
appeared in the March 1942 issue of Astounding Science Fiction. In that issue,
on page 100, in the first column, about one-third of the way down (I just
happen to remember) one of my characters says to another, “Now, look, let’s
start with the Three Fundamental Rules of Robotics.”
This, as it turned out, was the very first known use of the word “robotics” in
print, a word that is the now-accepted and widely used term for the science
and technology of the construction, maintenance, and use of robots. The Oxford
English Dictionary, in the 3rd Supplementary Volume, gives me credit for the
invention of the word.
I did not know I was inventing the word, of course. In my youthful innocence,
I thought that was the word and hadn’t the faintest notion it had never been
used before.
“The Three Fundamental Rules of Robotics” mentioned at this point eventually
became known as “Asimov’s Three Laws of Robotics,” and here they are:
1. A robot may not injure a human being, or, through inaction, allow a human
being to come to harm.
2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.
Those laws, as it turned out (and as I could not possibly have foreseen),
proved to be the most famous, the most frequently quoted, and the most
influential sentences I ever wrote. (And I did it when I was twenty-one, which
makes me wonder if I’ve done anything since to continue to justify my
existence.)
My robot stories turned out to have a great effect on science fiction. I dealt
with robots unemotionally—they were produced by engineers, they presented
engineering problems that required solutions, and the solutions were found.
The stories were rather convincing portrayals of a future technology and were
not moral lessons. The robots were machines and not metaphors.
As a result, the old-fashioned robot story was virtually killed in all science
fiction stories above the comic-strip level. Robots began to be viewed as
machines rather than metaphors by other writers, too. They grew to be commonly
seen as benevolent and useful except when something went wrong, and then as
capable of correction and improvement. Other writers did not quote the Three
Laws—they tended to be reserved for me—but they assumed them, and so did the
readers.
Astonishingly enough, my robot stories also had an important effect on the
world outside.
It is well known that the early rocket-experimenters were strongly influenced
by the science fiction stories of H. G. Wells. In the same way, early robot-
experimenters were strongly influenced by my robot stories, nine of which were
collected in 1950 to make up a book called I, Robot. It was my second
published book and it has remained in print in the four decades since.
Joseph F. Engelberger, studying at Columbia University in the 1950s, came
across I, Robot and was sufficiently attracted by what he read to determine
that he was going to devote his life to robots. About that time, he met George
C. Devol, Jr., at a cocktail party. Devol was an inventor who was also
interested in robots.
Together, they founded the firm of Unimation and set about working out schemes
for making robots work. They patented many devices, and by the mid-1970s, they
had worked out all kinds of practical robots. The trouble was that they needed
computers that were compact and cheap—but once the microchip came in, they had
it. From that moment on, Unimation became the foremost robot firm in the world
and Engelberger grew rich beyond anything he could have dreamed of.
He has always been kind enough to give me much of the credit. I have met other