This section contains 3,866 words
(approx. 13 pages at 300 words per page)
Critical Essay by Jean Fiedler and Jim Mele
SOURCE: "A New Kind of Machine: The Robot Stories," in Isaac Asimov, Frederick Ungar, 1982, pp. 27-39.
Fiedler is an educator and author of children's and young adult books. Mele is a poet, editor, and journalist. In the following essay, they examine the development of robots and robotics in I, Robot, and explore some of the ethical consequences of Asimov's Three Laws of Robotics.
There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. Mankind is no longer alone.
Of all his creations, Asimov himself says, "If in future years, I am to be remembered at all, it will be for (the) three laws of robotics."
These three laws, deceptively simple at first glance, have led to a body of work—twenty-two short stories, two novels, one novella—that has permanently changed the nature of robots in science fiction. Far from confining Asimov, these laws sparked his imagination, provoking inventive speculation on a future technology and its effect on humanity.
As a science fiction reader in the thirties, Asimov says he resented the Frankenstein concept, then rampant in science fiction, of the mechanical man that ultimately destroys its master. Annoyed with what he perceived as a purely Faustian interpretation of science, early in his career he decided to try his hand at writing stories about a new kind of robot, "machines designed by engineers, not pseudo men created by blasphemers."
"Robbie," his first robot story, published in 1940 unveils a machine with a "rational brain," a machine created solely for the use of mankind and equipped with three immutable laws which it cannot violate without destroying itself.
These laws, essential to Asimov's conception of the new robot, he dubbed the Three Laws of Robotics: First Law—A robot may not injure a human being or through inaction allow a human being to come to harm; Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; Third Law—A robot must protect its own existence if such protection does not conflict with the First and Second Laws.
Despite their apparent simplicity these laws are among Asimov's most significant contributions to a new kind of science fiction. Using the Three Laws as the premise for all robotic action, he proceeded to write a series of stories and later two novels that presented the relationship of technology and humanity in a new light.
When "Robbie" first appeared in Super Science Stories, it is unlikely that any reader would have been able to discern the truly revolutionary nature of this elementary robot. "Robbie" is an uncomplicated, even naive story of a nonvocal robot who was built to be a nursemaid. From the beginning, Asimov wages his own war on the Frankenstein image of the new robot. Gloria, the child, loves Robbie as a companion and playmate. Her mother, Grace Weston, dislikes and distrusts the robot, whereas her father, George Weston, acknowledges the Three Laws of Robotics and sees the robot as a useful tool that can never harm his child.
In spite of wooden characters and a predictable plot, this early robot story is the first step in Asimov's investigation of the potential inherent in the Three Laws and the, as yet unforeseen, ramifications of his new robotic premise.
In the stories that followed "Robbie," it seems clear that Asimov's scientific background suggested a technique that he could use to investigate and exploit this new character, the non-Frankenstein robot. Like a scientist working in the controlled environment of a laboratory, Asimov took the Three Laws as an inviolate constant and logically manipulated them to produce unforeseen results, expanding his robotic characters and his own fiction-making ability along the way.
In a sense the Three Laws are the plot in Asimov's early robot stories. By allowing the actions of the various robots seemingly to contradict one of the laws, Asimov creates tension which he then releases by letting his human characters discover a logical explanation, that is, one that works within the framework of the robotic laws.
This is the real difference between the Robot stories and the Foundation series that he was working on at the same time. In the latter he writes as a historian paralleling Gibbon's Decline and Fall of the Roman Empire. The stories are sequential, each new story building on its predecessors to present an historical context. He was able to develop the Robot stories in a very different manner, free to add new elements without regard for temporal continuity.
Using his formula, Asimov followed "Robbie" with eleven more robot stories, all published in various science fiction pulp magazines, the best of which were collected under the title I, Robot and published by Gnome Press in 1950.
An Excerpt from I, Robot
THE THREE LAWS OF ROBOTICS
1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
HANDBOOK OF ROBOTICS,
56TH EDITION, 2058 A.D.
Isaac Asimov, in his I, Robot, Doubleday, 1963
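The strict precedence built into the laws, each yielding to the ones above it, can be sketched as a small decision routine. This is purely an illustration, not anything from Asimov or the essay; the names (`Action`, `permitted`, and their fields) are hypothetical:

```python
# Illustrative sketch of the Three Laws as a strict priority ordering.
# All names and structure here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False  # a human has ordered this action
    endangers_robot: bool = False   # would damage or destroy the robot itself

def permitted(action: Action) -> bool:
    """Decide whether a robot may take an action under the Three Laws."""
    # First Law: overrides everything, including direct orders.
    if action.harms_human:
        return False
    # Second Law: orders are obeyed unless the First Law forbids them,
    # even at the cost of the robot's own existence.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only where Laws 1 and 2 are silent.
    if action.endangers_robot:
        return False
    return True
```

The ordering is what drives the plots: a robot will destroy itself to carry out an order, but no order can make it harm a human.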
In the I, Robot stories, Asimov introduces three central human characters to link the stories together, along with a number of concepts that quickly become central to this expanding robotic world. Susan Calvin, a robot psychologist or roboticist, is the main character in some stories. She has an intuitive, almost uncanny understanding of the thought processes of Asimov's peculiar robots. When the stories leave the Earth's surface, two new characters take over—Gregory Powell and Mike Donovan, troubleshooters who field-test new robots. Susan Calvin remains behind to record their exploits for curious reporters and historians. All three are employees of U.S. Robots and Mechanical Men, the sole manufacturers of Asimovian robots.
By the second story in I, Robot, "Runaround," Asimov has invented a name for the phenomenon that sets his robots apart from all their predecessors—the positronic brain, a "brain of platinum-iridium sponge … (with) the 'brain paths' … marked out by the production and destruction of positrons." While Asimov has readily admitted, "I don't now how its done," one fact quickly becomes clear—his positronic brain gives all of his robots a uniquely human cast.
In "Runaround" Powell and Donovan have been sent to Mercury to report on the advisability of reopening the Sunside Mining Station wit robots. Trouble develops when Speedy (SPD-13), who has been designed specifically for Mercury's environs is sent on a simple mission essential both to the success of the expedition and to their own survival.
Instead of heading straight for the designated target, a pool of selenium, Speedy begins to circle the pool, spouting lines from Gilbert and Sullivan, and challenging Powell and Donovan to a game of catch.
At first glance it seems that Speedy is drunk. However, never doubting that the Three Laws continue to govern the robot's behavior, as bizarre as it is, the two men proceed to test one hypothesis after another until they ultimately hit upon a theory that explains Speedy's ludicrous antics and "saves the day."
"Reason" presents the two engineers with an unexpectedly complex robot, the first one who has ever displayed curiosity about its own existence. Cutie (QT-1) has been built to replace human executives on a distant space station which beams solar energy back to Earth. A skeptic, Cutie cannot accept Powell's explanation of the space station's purpose. Instead, he develops his own "logical" conception of a universe that does not include Earth, human creatures, or anything beyond the space station.
Beginning with the assumption, "I, myself, exist because I think," Cutie deduces that the generator of the space station is "The Master," that he, QT-1, is his prophet, and that Donovan and Powell are inferior stopgap creations that preceded him.
He tells the two that their arguments have no basis while his are founded on Truth:
Because I, a reasoning being, am capable of deducing Truth from a priori Causes. You, being intelligent, but unreasoning, need an explanation of existence supplied to you, and this the Master did. That he supplied you with these laughable ideas of far-off worlds and peoples is, no doubt, for the best. Your minds are probably too coarsely grained for absolute Truth.
Although in the end Asimov still uses the Laws to explain Cutie's behavior, for the first time the robot is no longer merely a device to illustrate the workings of his Three Laws. It seems apparent that Asimov in his manipulation went a step further in the characterization of this robot. Cutie is not a simple tool; he is curious, intuitive, considerate of his "inferiors," Donovan and Powell, humoring their "misplaced notions," and ultimately but unconsciously fulfilling the requirements of the First Robotic Law—to protect human life.
When Asimov first began to write about robots, he knew what he did not want to perpetuate. Now with Cutie's creation, he began to see the real ramifications of robots who must obey the Three Laws. This new technology—robotics—is softened by human moral and ethical qualities.
A robot unintentionally endowed with the ability to read minds is the hero of "Liar." Of course this ability has profound effects on the robot's interpretation of the Three Laws, an interpretation so logical, so simple that it is overlooked by everyone, including the famed robot psychologist, Susan Calvin. Herbie (RB-34) not only reads minds, but he must consider human psychic well-being in all his actions.
One interesting sidelight to "Liar" is an unusual aspect of Herbie's reading habits. Perhaps revealing Asimov-the scientist's own interest in that logically suspect form, fiction, Herbie turns his nose up at scientific texts:
"Your science is just a mass of collected data plastered together by make-shift theory—and all so incredibly simple, that it's hardly worth bothering about.
"It's your fiction that interests me. Your studies of the interplay of human motives and emotions …
"I see into minds, you see," the robot continued, "and you have no idea how complicated they are. I can't begin to understand everything because my own mind has so little in common with them—but I try, and your novels help."
This cavalier attitude towards the icons of science fiction is common in Asimov's early robot stories, giving them a refreshing humorous character. The vision of Speedy declaiming Gilbert and Sullivan, Cutie teaching subservient robots to "salaam," or Herbie reading romantic prose is an endearing touch that banishes all Frankenstein overtones.
Working within self-imposed limits often gives rise to the temptation to transgress these limits even if briefly. In "Little Lost Robot" Asimov succumbs to the temptation to tamper with the First Law. With his background in biblical studies, he inevitably finds that such a transgression of absolute law can only lead to disaster. He creates a robot who, while still forbidden to harm a human being, has no compulsion to prevent through inaction a human from coming to harm. This modification is performed only because of dire need and over the strenuous objections of the roboticists. His forbidden apple tasted, Asimov is content to return to the invariable perimeter of his Three Laws in the rest of the stories.
By the time he gets to "Escape," Asimov has realized that the emotional characteristics of the robotic personality by the injunctions of the Three Laws have become in unexpected ways the robot's greatest strength.
In "Escape," the largest positronic brain ever built (so large that it is housed in a room rather than in a humanoid body) is asked to solve a problem that has already destroyed a purely functional computer. Susan Calvin and the others realize that the problem of developing a hyperspace engine must involve some kind of dilemma that the purely rational computer cannot overcome.
Endowed with the flexibility of a personality, even an elementary personality, the Brain ultimately does solve the problem but not without a curiously human-like reaction.
The nub of the problem is that hyperspace travel demands that human life be suspended for a brief period, an unthinkable act expressly forbidden by the First Law. The Brain, although able to see beyond the temporary nature of the death, is unbalanced by the conflict. Whereas a human might go on a drunken binge, the Brain escapes the pressure of his dilemma by seeking refuge in humor and becoming a practical joker. He sends Powell and Donovan off in a spaceship without internal controls, stocked only with milk and beans. He also arranges an interesting diversion for the period of their temporary death—he sends them on an hallucinatory trip to the gates of Hell.
"Evidence" presents a situation in which Stephen Byerley, an unknown, is running for public office, opposed by political forces that accuse him of being a robot, a humanoid robot. The story unfolds logically with the Three Laws brought into play apparently to substantiate the opposition's claim. Waiting for the proper dramatic moment, Byerley disproves the charges by disobeying the First Law, And ultimately with a climax worthy of O. Henry, Susan Calvin confronts Byerley, leaving the reader to wonder, "Is he, or isn't he?"
In a sense this is the most sophisticated story in I, Robot. As a scientist accustomed to the sane and ordered world of the laboratory, Asimov's tendency until now has been to tie together all the loose strands. In "Evidence" he leaves his reader guessing, and this looser, more subtle technique makes the story especially memorable.
The final story in the I, Robot collection, "The Evitable Conflict," takes place in a world divided into Planetary Regions and controlled by machines. In this story the interpretation of the First Law takes on a dimension so broad that it can in effect be considered almost a nullification of the edict that a machine may not harm a human being. When Susan Calvin is called in by the World Coordinator, the same Stephen Byerley we have met in "Evidence," to help determine why errors were occurring throughout the regions in the world's economy, the indications were that the machines, the result of complicated calculations involving the most complex positronic brain yet, were working imperfectly. All four machines, one handling each of the Planetary Regions, were yielding imperfect results, and Byerley saw that the end of humanity was a frightening consequence. Although these errors have led to only minor economic difficulties, Byerley fears, "such small unbalances in the perfection of our system of supply and demand … may be the first step towards the final war."
Calvin, with her intimate knowledge of robot psychology, discerns that the seeming difficulty is due to yet another interpretation of the First Law. In this world of the future, the machines work not for any single human being but for all mankind, so the First Law becomes, "No machine may harm humanity or through inaction allow humanity to come to harm."
Because economic dislocations would harm humanity and because destruction of the machines would cause economic dislocations, it is up to the machines to preserve themselves for the ultimate good of humanity even if a few individual malcontents are harmed.
Asimov seems to be saying through Susan Calvin that mankind has never really controlled its future: "It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate and the fortunes of war. Now the machines understand them; and no one can stop them, since the machines will deal with them as they are dealing with the society—having as they do the greatest of weapons at their disposal, the absolute control of the economy."
In our time we have heard the phrase, "The greatest good for the greatest number," and seen sociopolitical systems that supposedly practice it. But men, not machines, have been in control. As Susan Calvin says in the year 2052, "For all time, all conflicts are finally evitable. Only the machines from now on are inevitable."
Perhaps Asimov realized that he had, following his ever logical extensions of the Three Laws, gone the full robotic circle and returned his "new" robots to the Faustian mold. Although benign rulers, these machines were finally beyond their creators' control, a situation just as chilling as Frankenstein destroying its creator and just as certain to strengthen antitechnology arguments.
Having foreseen the awesome possibility, Asimov leaves this machine-controlled world, to return to it only one more time in 1974.
The I, Robot collection, one of two books published by Asimov in 1950, was an auspicious debut for a writer whose name would become one of the most widely recognized in contemporary science fiction. As well as reaching a new audience, I, Robot quickly came to be considered a classic, a standard against which all other robot tales are measured.
After I, Robot, Asimov wrote only one more short robot story—"Satisfaction Guaranteed"—before his first robot novel in 1953. The novel, The Caves of Steel, was followed by five more short stories and in 1956 by the final, at least to date, robot novel, The Naked Sun.
Including the six short stories and the two novels, as well as two early stories which predate the Three Laws, the collection The Rest of the Robots was issued by Doubleday in 1964. Although not truly "the rest" (Asimov has written at least five later stories), together with I, Robot, it forms the major body of Asimov's robot fiction.
While the two novels in The Rest of the Robots represent the height of Asimov's robot creations, the quality of the short stories is quite uneven and most seem to have been included only for the sake of historical interest. Three stories, however, "Satisfaction Guaranteed," "Risk," and "Galley Slave" do stand out.
Although not one of Asimov's most elegant stories, "Satisfaction Guaranteed" presents still another unexpected interpretation of the robotic laws.
Tony (TN-3) is a humanoid robot placed as an experiment in the home of Claire Belmont, an insecure, timid woman who feels that she is hindering her husband's career. Hoping to ease the prevalent fear of robots, U.S. Robots has designed Tony as a housekeeper. They hope that if the experiment is successful in the Belmont household, it will lead to the acceptance of robots as household tools.
While Larry Belmont, Claire's husband, is in Washington to arrange for legal government-supervised tests (a simple device on Asimov's part to leave Claire and the robot sequestered together) Claire experiences a variety of emotions ranging from fear to admiration and finally to something akin to love.
In the course of his household duties, Tony recognizes that Claire is suffering psychological harm through her own sense of inadequacy. Broadening the provision of the First Law to include emotional harm, he makes love to her in a situation he contrives to strengthen her self-image.
Despite its lack of subtlety and polish, "Satisfaction Guaranteed" presents a loving, even tender robot that paves the way for Daneel Olivaw, the humanoid robot investigator in the novels.
In "Risk" an experimental spaceship with a robot at the controls is for some unknown reason not functioning as it was designed to do; a disaster of unknown proportions is imminent. While assembled scientists agree that someone or something must board the ship, find out what has gone wrong, and deactivate the ship's hyperdrive, Susan Calvin refuses to send one of her positronic robots and suggests instead a human engineer, Gerald Black, a man who dislikes robots.
Not because of great physical danger but because there is a frightening possibility of brain damage, Black angrily refuses. Despite the danger that Black could return "no more than a hunk of meat who could make [only] crawling motions," Calvin contends that her million-dollar robots are too valuable to risk.
Threatened with court-martial and imprisonment on Mercury, Black finally boards the ship and discovers what went wrong. Returning a hero, Black is enraged that a human could be risked instead of a robot and vows to destroy Calvin and her robots by exposing to the universe the true story of Calvin's machinations.
With a neat twist displaying that Calvin's understanding of humans is as penetrating as her vision of robots, she reveals that she has manipulated Black as adroitly as she does her mechanical men. She chose him for the mission precisely because he disliked robots and "would, therefore, be under no illusion concerning them." He was led to believe that he was expendable because Calvin felt that his anger would override his fear.
Perhaps Asimov was beginning to fear that his readers had grown to accept robots as totally superior to humans, a condition that could only lead to a predictable and constricting science fiction world. Superior robots would, without exception, be expected to solve every problem in every story for their inferior creators. In "Risk," through Susan Calvin he reminds Black and all other myopic humans of the limits of robot intelligence when compared to the boundless capacity of the human mind:
Robots have no ingenuity. Their minds are finite and can be calculated to the last decimal. That, in fact, is my job.
Now if a robot is given an order, a precise order, he can follow it. If the order is not precise, he cannot correct his own mistake without further orders…. "Find out what's wrong" is not an order you can give to a robot; only to a man. The human brain, so far at least, is beyond calculation.
"Galley Slave," the last short story in The Rest of the Robots, marks yet another change in Asimov's attitude towards robot technology.
Easy (EZ-27), a robot designed to perform the mental drudgery that writers and scholars must endure when preparing manuscripts for the printer, is rented by a university to free professors from proofreading galleys and page proofs.
Easy performs his duties perfectly until he makes a number of subtle changes in a sociology text which, strangely enough, was written by the one faculty member opposed to robots.
The changes, undetected until the text has been printed and distributed, destroy the author's career, and the result is a $750,000 suit against U.S. Robots. Susan Calvin, as always, is certain that the errors are the result of human meddling and not robotic malfunction.
In every other case Asimov has chided shortsighted people for refusing to allow robots to free them from menial work. Now, as a writer with technology encroaching on his own domain, Asimov characterizes the antirobot argument much more sympathetically than ever before.
Explaining his motives to Susan Calvin, the person responsible for Easy's misuse says,
For two hundred and fifty years, the machine has been replacing Man and destroying the handcraftsman…. A book should take shape in the hands of the writer. One must actually see the chapters grow and develop. One must work and re-work and watch the changes take place beyond the original concept even. There is taking the galleys in hand and seeing how the sentences look in print and molding them again. There are a hundred contacts between a man and his work at every stage of the game—and the contact itself is pleasurable and repays a man for the work he puts into his creation more than anything else could. Your robot would take all that away.
Foreshadowing the two novels, "Galley Slave" reveals an Asimov now wary of overreliance on robotic labor.