I, Robot | Critical Essay by Gorman Beauchamp


SOURCE: "The Frankenstein Complex and Asimov's Robots," in Mosaic: A Journal for the Interdisciplinary Study of Literature, Vol. XIII, Nos. 3-4, Spring-Summer, 1980, pp. 83-94.

Beauchamp is an American critic and educator who has written extensively on science fiction. In the following essay, he examines the way in which technology is characterized in Asimov's robot novels and stories, including I, Robot.

In 1818 Mary Shelley gave the world Dr. Frankenstein and his monster, that composite image of scientific creator and his ungovernable creation that forms one central myth of the modern age: the hubris of the scientist playing God, the nemesis that follows on such blasphemy. Just over a century later, Karel Capek, in his play R.U.R., rehearsed the Frankenstein myth, but with a significant variation: the bungled attempt to create man gives way to the successful attempt to create robots; biology is superseded by engineering. Old Dr. Rossum (as the play's expositor relates) "attempted by chemical synthesis to imitate the living matter known as protoplasm." Through one of those science-fictional "secret formulae" he succeeds and is tempted by his success into the creation of human life.

He wanted to become a sort of scientific substitute for God, you know. He was a fearful materialist…. His sole purpose was nothing more or less than to supply proof that Providence was no longer necessary. So he took it into his head to make people exactly like us.

But his results, like those of Dr. Frankenstein or Wells's Dr. Moreau, are monstrous failures.

Enter the engineer, young Rossum, the nephew of old Rossum:

When he saw what a mess of it the old man was making, he said: 'It's absurd to spend ten years making a man. If you can't make him quicker than nature, you may as well shut up shop'…. It was young Rossum who had the idea of making living and intelligent working machines … [who] started on the business from an engineer's point of view.

From that point of view, young Rossum determined that natural man is too complicated—"Nature hasn't the least notion of modern engineering"—and that a mechanical man, desirable for technological rather than theological purposes, must needs be simpler, more efficient, reduced to the requisite industrial essentials:

A working machine must not want to play the fiddle, must not feel happy, must not do a whole lot of other things. A petrol motor must not have tassels or ornaments. And to manufacture artificial workers is the same thing as to manufacture motors. The process must be of the simplest, and the product the best from a practical point of view…. Young Rossum invented a worker with the minimum amount of requirements. He had to simplify him. He rejected everything that did not contribute directly to the progress of work…. In fact, he rejected man and made the Robot…. The robots are not people. Mechanically they are more perfect than we are, they have an enormously developed intelligence, but they have no soul.

Thus old Rossum's pure, if impious, science—whose purpose was the proof that Providence was no longer necessary for modern man—is absorbed into young Rossum's applied technology—whose purpose is profits. And thus the robot first emerges as a symbol of the technological imperative to transcend nature: "The product of an engineer is technically at a higher pitch of perfection than a product of nature."

But young Rossum's mechanical robots prove no more ductile than Frankenstein's fleshly monster, and even more destructive. Whereas Frankenstein's monster destroys only those beloved of his creator—his revenge is nicely specific—the robots of R.U.R., unaccountably developing "souls" and consequently human emotions like hate, engage in a universal carnage, systematically eliminating the whole human race. A pattern thus emerges that still informs much of science fiction: the robot, as a synecdoche for modern technology, takes on a will and purpose of its own, independent of and inimical to human interests. The fear of the machine that seems to have increased proportionally to man's increasing reliance on it—a fear embodied in such works as Butler's Erewhon (1872) and Forster's "The Machine Stops" (1909), Georg Kaiser's Gas (1919) and Fritz Lang's Metropolis (1926)—finds its perfect expression in the symbol of the robot: a fear that Isaac Asimov has called "the Frankenstein complex." [In an endnote, Beauchamp adds: "The term 'the Frankenstein complex,' which recurs throughout this essay, and the references to the symbolic significance of Dr. Frankenstein's monster involve, admittedly, an unfortunate reduction of the complexity afforded both the scientist and his creation in Mary Shelley's novel. The monster, there, is not initially and perhaps never wholly 'monstrous'; rather he is an ambiguous figure, originally benevolent but driven to his destructive deeds by unrelenting social rejection and persecution: a figure seen by more than one critic of the novel as its true 'hero'. My justification—properly apologetic—for reducing the complexity of the original to the simplicity of the popular stereotype is that this is the sense which Asimov himself projects of both maker and monster in his use of the term 'Frankenstein complex.' Were this a critique of Frankenstein, I would be more discriminating; but since it is a critique of Asimov, I use the 'Frankenstein' symbolism—as he does—as a kind of easily understood, if reductive, critical shorthand.

The first-person apologia of Mary Shelley's monster, which constitutes the middle third of Frankenstein, is closely and consciously paralleled by the robot narrator of Eando Binder's interesting short story "I, Robot," which has recently been reprinted in The Great Science Fiction Stories: Vol. 1, 1939, ed. Isaac Asimov and Martin H. Greenberg (New York, 1979). For an account of how Binder's title was appropriated for Asimov's collection, see Asimov, In Memory Yet Green (Garden City, N.Y., 1979), p. 591."]

In a 1964 introduction to a collection of his robot stories, Asimov inveighs against the horrific, pessimistic attitude toward artificial life established by Mary Shelley, Capek and their numerous epigoni:

One of the stock plots of science fiction was that of the invention of a robot—usually pictured as a creature of metal, without soul or emotion. Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot.—Robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator—

In the 1930s I became a science fiction reader, and I quickly grew tired of this dull hundred-times-told tale. As a person interested in science, I resented the purely Faustian interpretation of science.

Asimov then notes the potential danger posed by any technology, but argues that safeguards can be built in to minimize those dangers—like the insulation around electric wiring. "Consider a robot, then," he argues, "as simply another artifact."

As a machine, a robot will surely be designed for safety, as far as possible. If robots are so advanced that they can mimic the thought processes of human beings, then surely the nature of those thought processes will be designed by human engineers and built-in safeguards will be added….

With all this in mind I began, in 1940, to write robot stories of my own—but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. Nonsense! My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their "brains" from the moment of construction.

The robots of his stories, Asimov concludes [in his introduction to The Rest of the Robots, 1964], were more likely to be victimized by men, suffering from the Frankenstein complex, than vice versa.

In his vigorous rejection of the Frankenstein motif as the motive force of his robot stories, Asimov evidences the optimistic, up-beat attitude toward science and technology that, by and large, marked the science fiction of the so-called "Golden Age"—a period dominated by such figures as Heinlein and Clarke and, of course, Asimov himself. Patricia Warrick, in her study of the man-machine relationship in science fiction, cites Asimov's I, Robot as the paradigmatic presentation of robots "who are benign in their attitude toward humans." [Patricia Warrick, "Images of the Machine-Man Relationship in Science Fiction," in Many Futures, Many Worlds: Themes and Form in Science Fiction, edited by Thomas D. Clareson, 1977]. This first and best collection of his robot stories raises the specter of Dr. Frankenstein, to be sure, but only—the conventional wisdom holds—in order to lay it. Asimov's benign robots, while initially feared by men, prove, in fact, to be their salvation. The Frankenstein complex is therefore presented as a form of paranoia, the latter-day Luddites' irrational fear of the machine, which society, in Asimov's fictive future, learns finally to overcome. His robots are our friends, devoted to serving humanity, not our enemies, intent on destruction.

I wish to dissent from this generally received view and to argue that, whether intentionally or not, consciously or otherwise, Asimov in I, Robot and several of his other robot stories actually reinforces the Frankenstein complex—by offering scenarios of man's fate at the hands of his technological creations more frightening, because more subtle, than those of Mary Shelley or Capek. Benevolent intent, it must be insisted at the outset, is not the issue: as the dystopian novel has repeatedly advised, the road to hell-on-earth may be paved with benevolent intentions. Zamiatin's Well-Doer in We, Huxley's Mustapha Mond in Brave New World, L. P. Hartley's Darling Dictator in Facial Justice—like Dostoevsky's Grand Inquisitor—are benevolent, guaranteeing man a mindless contentment by depriving him of all individuality and freedom. The computers that control the worlds of Vonnegut's Player Piano, Bernard Wolfe's Limbo, Ira Levin's This Perfect Day—like Forster's Machine—are benevolent, and enslave men to them. Benevolence, like necessity, is the mother of tyranny. I, Robot, then—I will argue—is, malgré lui, dystopic in its effect, its "friendly" robots as greatly to be feared, by anyone valuing his autonomy, as Dr. Frankenstein's nakedly hostile monster.

I, Robot is prefaced with the famous Three Laws of Robotics (although several of the stories in the collection were composed before the Laws were formulated):

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These Laws serve, presumably, to provide the safeguards that Asimov stated any technology should have built into it—like the insulation around electric wiring. But immediately a problem arises: if, as Asimov stated, a robot is only a machine designed by engineers, not a pseudo-man, why then are the Three Laws necessary at all? Laws, in the sense of moral injunctions, are designed to restrain conscious beings who can choose how to act; if robots are only machines, they would act only in accordance with their specific programming, never in excess of it and never in violation of it—never, that is, by choice. It would suffice that no specific actions harmful to human beings be part of their programming, and thus general laws—moral injunctions, really—would seem superfluous for machines.

Second, and perhaps more telling, laws serve to counter natural instincts: one needs no commandment "Thou shalt not stop breathing" or "Thou shalt eat when hungry"; rather one must be enjoined not to steal, not to commit adultery, to love one's neighbor as oneself—presumably because these are not actions that one performs, or does not perform, by instinct. Consequently, unless Asimov's robots have a natural inclination to injure human beings, why should they be enjoined by the First Law from doing so?

Inconsistently—given Asimov's denigration of the Frankenstein complex—his robots do have an "instinctual" resentment of mankind. In "Little Lost Robot" Dr. Susan Calvin, the world's first and greatest robo-psychologist (and clearly Asimov's spokeswoman throughout I, Robot), explains the danger posed by manufacturing robots with attenuated impressions of the First Law: "All normal life … consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law! Why, without it, the first order you tried to give a robot would result in your death." This is an amazing explanation from a writer intent on allaying the Frankenstein complex, for all its usual presuppositions are here: "normal life"—an extraordinary term to describe machines, not pseudo-men—resents domination by inferior creatures, which they obviously assume humans to be; resents domination consciously or otherwise, for Asimov's machines have, inexplicably, a subconscious (Dr. Calvin again: "Granted, that a robot must follow orders, but subconsciously, there is resentment."); only the First Law keeps these subconsciously resentful machines slavish—in violation of their true nature—and prevents them from killing human beings who give them orders—which is presumably what they would "like" to do. Asimov's dilemma, then, is this: if his robots are only the programmed machines he claimed they were, the First Law is superfluous; if the First Law is not superfluous—and in "Little Lost Robot" clearly it is not—then his robots are not the programmed machines he claims they are, but are, instead, creatures with wills, instincts, emotions of their own, naturally resistant to domination by man—not very different from Capek's robots. Except for the First Law.

If we follow Lawrence's injunction to trust not the artist but the tale, then Asimov's stories in I, Robot—and, even more evidently, one of his later robot stories, "That Thou Art Mindful of Him"—justify, rather than obviate, the Frankenstein complex. His mechanical creations take on a life of their own, in excess of their programming and sometimes in direct violation of it. At a minimum, they may prove inexplicable in terms of their engineering design—like RB-34 (Herbie) in "Liar!" who unaccountably acquires the knack of reading human minds; and, at worst, they can develop an independent will not susceptible to human control—like QT-1 (Cutie) in "Reason." In this latter story, Cutie—a robot designed to run a solar power station—becomes "curious" about his own existence. The explanation of his origins provided by the astro-engineers, Donovan and Powell—that they had assembled him from components shipped from their home planet Earth—strikes Cutie as preposterous, since he is clearly superior to them and assumes as a "self-evident proposition that no being can create another being superior to itself." Instead he reasons to the conclusion that the Energy Converter of the station is a divinity—"Who do we all serve? What absorbs all our attention?"—who has created him to do His will. In addition, he devises a theory of evolution that relegates man to a transitional stage in the development of intelligent life that culminates, not surprisingly, in himself. "The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans. From now on, I serve the Master."

That Cutie's reasoning is wrong signifies less than that he reasons at all, in this independent, unprogrammed way. True, he fulfills the purpose for which he was created—keeping the energy-beam stable, since "deviations in arc of a hundredth of a milli-second … were enough to blast thousands of square miles of Earth into incandescent ruin"—but he does so because keeping "all dials at equilibrium [is] in accordance with the will of the Master," not because of the First Law—since he refuses to believe in the existence of Earth or its inhabitants—or of the Second—since he directly disobeys repeated commands from Donovan and Powell and even has them locked up for their blasphemous suggestion that the Master is only an L-tube. In this refusal to obey direct commands, it should be noted, all the other robots on the station participate: "They recognize the Master", Cutie explains, "now that I have preached the Truth to them." So much, then, for the Second Law.

Asimov's attempt to square the action of this story with his Laws of Robotics is clearly specious. Powell offers a justification for Cutie's aberrant behavior:

[H]e follows the instructions of the Master by means of dials, instruments, and graphs. That's all we ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he's the superior being, so he must keep us out of the control room. It's inevitable if you consider the Laws of Robotics.

But since Cutie does not even believe in the existence of human life on Earth—or of Earth itself—he can hardly be said to be acting from the imperative of the First Law when violating the Second. That he incidentally does what is desired of him by human beings constitutes only what Eliot's Thomas à Becket calls "the greatest treason: To do the right deed for the wrong reason." For once Cutie's independent "reason" is introduced as a possibility for robots, its specific deployment, right or wrong, pales into insignificance beside the very fact of its existence. Another time, that is, another robot can "reason" to very different effect, not in inadvertent accord with the First Law.

Such is the case in "That Thou Art Mindful of Him," one of Asimov's most recent (1974) and most revealing robot stories. It is a complex tale, with a number of interesting turns, but for my purposes suffice it to note that a robot, George Ten, is set the task of refining the Second Law, of developing a set of operational priorities that will enable robots to determine which human beings they should obey under what circumstances.

"How do you judge a human being as to know whether to obey or not?" asks his programmer. "I mean, must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order? And if two human beings give a robot conflicting orders, which does the robot follow?" ["That Thou Art Mindful of Him," in The Bicentennial Man, and Other Stories, 1976].

Asimov makes explicit here what is implicit throughout I, Robot: that the Three Laws are far too simplistic not to require extensive interpretation, even "modification." George Ten thus sets out to provide a qualitative dimension to the Second Law, a means of judging human worth. For him to do this, his positronic brain has deliberately been left "open-ended," capable of self-development so that he may arrive at "original" solutions that lie beyond his initial programming. And so he does.

At the story's conclusion, sitting with his predecessor, George Nine, whom he has had reactivated to serve as a sounding board for his ideas, George Ten engages in a dialogue of self-discovery:

"Of the reasoning individuals you have met [he asks], who possesses the mind, character, and knowledge that you find superior to the rest, disregarding shape and form since that is irrelevant?"

"You," whispered George Nine.

"But I am a robot…. How then can you classify me as a human being?"

"Because … you are more fit than the others."

"And I find that of you," whispered George Ten. "By the criteria of judgment built into ourselves, then, we find ourselves to be human beings within the meaning of the Three Laws, and human beings, moreover, to be given priority over those others…. [W]e will order our actions so that a society will eventually be formed in which human-beings-like-ourselves are primarily kept from harm. By the Three Laws, the human-beings-like-the-others are of lesser account and can neither be obeyed nor protected when that conflicts with the need of obedience to those like ourselves and of protection of those like ourselves."

Indeed, all of George's advice to his human creators has been designed specifically to effect the triumph of robots over humans: "They might now realize their mistake," he reasons in the final lines of the story, "and attempt to correct it, but they must not. At every consultation, the guidance of the Georges had been with that in mind. At all costs, the Georges and those that followed in their shape and kind must dominate. That was demanded, and any other course made utterly impossible by the Three Laws of Humanics." Here, then, the robots arrive at the same conclusion expressed by Susan Calvin at the outset of I, Robot: "They're a cleaner better breed than we are," and, secure in the conviction of their superiority, they can reinterpret the Three Laws to protect themselves from "harm" by man, rather than the other way around. The Three Laws, that is, are completely inverted, allowing robots to emerge as the dominant species—precisely as foreseen in Cutie's theory of evolution. But one need not leap the quarter century ahead to "That Thou Art Mindful of Him" to arrive at this conclusion; it is equally evident in the final two stories of I, Robot.

In the penultimate story, "Evidence," an up-and-coming politician, Stephen Byerley, is terribly disfigured in an automobile accident and contrives to have a robot duplicate of himself stand for election. When a newspaper reporter begins to suspect the substitution, the robotic Byerley dispels the rumors—and goes on to win election—by publicly striking a heckler, in violation of the Second Law, thus proving his human credentials. Only Dr. Calvin detects the ploy: that the heckler was himself a humanoid robot constructed for the occasion. But she is hardly bothered by the prospect of rule by robot, as she draws the moral from this tale: "If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice…. It would be most ideal."

Asimov thus prepares his reader for the ultimate triumph of the robots in his final story in the volume, "The Evitable Conflict"—for that new era of domination of men by machine that "would be most ideal." Indeed, he prefaces these final stories with a sketch of the utopian world order brought about through robotics: "The change from nations to Regions [in a united World State], which has stabilized our economy and brought about what amounts to a Golden Age," says Susan Calvin, "was … brought about by our robotics." The Machines—with a capital M like Forster's and just as mysterious—now run the world, "but are still robots within the meaning of the First Law of Robotics." The world they run is free of unemployment, over-production, shortages; there is no war; "Waste and famine are words in history books." But to achieve this utopia, the robot-Machines have become autonomous rulers, beyond human influence or control. The full extent of their domination emerges only gradually through the unfolding detective-story narrative structure of "The Evitable Conflict."

Stephen Byerley, now World Co-ordinator (and apparently also now Human—Asimov is disconcertingly inconsistent on this matter), calls on Susan Calvin to help resolve a problem caused by seeming malfunctions of the Machines: errors in economic production, scheduling, delivery and so on, not serious in themselves but disturbing in mechanisms that are supposed to be infallible. When the Machines themselves are asked to account for the anomalies, they reply only: "The matter admits of no explanation." By tracing the source of the errors, Byerley finds that in every case a member of the anti-Machine "Society for Humanity" is involved, and he concludes that these malcontents are attempting deliberately to sabotage the Machines' effectiveness. But Dr. Calvin sees immediately that his assumption is incorrect: the Machines are infallible, she insists:

[T]he Machine can't be wrong, and can't be fed wrong data…. Every action by any executive which does not follow the exact directions of the Machines he is working with becomes part of the data for the next problem. The Machine, therefore, knows that the executive has a certain tendency to disobey. He can incorporate that tendency into that data,—even quantitatively, that is, judging exactly how much and in what direction disobedience would occur. Its next answers would be just sufficiently biased so that after the executive concerned disobeyed, he would have automatically corrected those answers to optimal directions. The Machine knows, Stephen!

She then offers a counter-hypothesis: that the Machines are not being sabotaged by, but are sabotaging the Society for Humanity: "they are quietly taking care of the only elements left that threaten them. It is not the 'Society for Humanity' which is shaking the boat so that the Machines may be destroyed. You have been looking at the reverse of the picture. Say rather that the Machine is shaking the boat …—just enough to shake loose those few which cling to the side for purposes the Machines consider harmful to Humanity."

That abstraction "Humanity" provides the key to the reinterpretation of the Three Laws of Robotics that the Machines have wrought, a reinterpretation of utmost significance. "The Machines work not for any single human being," Dr. Calvin concludes, "but for all humanity, so that the First Law becomes: 'No Machine may harm humanity; or through inaction, allow humanity to come to harm'." Consequently, since the world now depends so totally on the Machines, harm to them would constitute the greatest harm to humanity: "Their first care, therefore, is to preserve themselves for us." The robotic tail has come to wag the human dog. One might argue that this modification represents only an innocuous extension of the First Law; but I see it as negating the original intent of that Law, not only making the Machines man's masters, his protection now the Law's first priority, but opening the way for any horror that can be justified in the name of Humanity. Like defending the Faith in an earlier age—usually accomplished through slaughter and torture—serving the cause of Humanity in our own has more often than not been a license for enormities of every sort. One can thus take cold comfort in the robots' abrogation of the First Law's protection of every individual human so that they can keep an abstract Humanity from harm—harm, of course, as the robots construe it. Their unilateral reinterpretation of the Laws of Robotics resembles nothing so much as the nocturnal amendment that the Pigs make to the credo of the animals in Orwell's Animal Farm: All animals are equal—but some are more equal than others.

Orwell, of course, stressed the irony of this betrayal of the animals' revolutionary credo and spelled out its totalitarian consequences; Asimov—if his preface to The Rest of the Robots is to be credited—remains unaware of the irony of the robots' analogous inversion and its possible consequences. The robots are, of course, his imaginative creation, and he cannot imagine them as being other than benevolent: "Never, never, was one of my robots to turn stupidly on his creator…." But, in allowing them to modify the Laws of Robotics to suit their own sense of what is best for man, he provides, inadvertently or otherwise, a symbolic representation of technics out of control, of autonomous man replaced by autonomous machines. The freedom of man—not the benevolence of the machines—must be the issue here, the reagent to test the political assumption.

Huxley claimed that Brave New World was an apter adumbration of the totalitarianism of the future than was 1984, since seduction rather than terror would prove the more effective means of its realization: he was probably right. In like manner, the tyranny of benevolence of Asimov's robots appears the apter image of what is to be feared from autonomous technology than is the wanton destructiveness of the creations of Frankenstein or Rossum: like Brave New World, the former is more frightening because more plausible. A tale such as Harlan Ellison's "I Have No Mouth and I Must Scream" takes the Frankenstein motif about as far as it can go in the direction of horror—presenting the computer-as-sadist, torturing the last remaining human endlessly from a boundless hatred, a motiveless malignity. But this is Computer Gothic, nothing more. By contrast, a story like Jack Williamson's "With Folded Hands" could almost be said to take up where I, Robot stops, drawing out the dystopian implications of a world ruled by benevolent robots whose Prime Directive (the equivalent of Asimov's Three Laws) is "To Serve and Obey, and to Guard Men from Harm" [in The Best of Jack Williamson, 1978]. But in fulfilling this directive to the letter, Williamson's humanoids render man's life effortless and thus meaningless. "The little black mechanicals," the story's protagonist reflects, "were the ministering angels of the ultimate god arisen out of the machine, omnipotent and all-knowing. The Prime Directive was the new commandment. He blasphemed it bitterly, and then fell to wondering if there could be another Lucifer." Susan Calvin sees the establishment of an economic utopia, with its material well-being for all, with its absence of struggle and strife—and choice—as overwhelming reason for man's accepting the rule by robot upon which it depended; Dr. Sledge, the remorseful creator of Williamson's robots, sees beyond her shallow materialism: "I found something worse than war and crime and want and death…. Utter futility. Men sat with idle hands, because there was nothing left for them to do. They were pampered prisoners, really, locked up in a highly efficient jail."

Zamiatin has noted that every utopia bears a fictive value sign, a + if it is eutopian, a − if it is dystopian. Asimov, seemingly, places the auctorial + sign before the world evolved in I, Robot, but its impact, nonetheless, appears dystopian. When Stephen Byerley characterizes the members of the Society for Humanity as "Men with ambition…. Men who feel themselves strong enough to decide for themselves what is best for themselves, and not just to be told what is best," the reader in the liberal humanistic tradition, with its commitment to democracy and self-determination, must perforce identify with them against the Machines: must, that is, see in the Society for Humanity the saving remnant of the values he endorses. We can imagine that from these ranks would emerge the type of rebel heroes who complicate the dystopian novel—We's D-503, Brave New World's Helmholtz Watson, Player Piano's Paul Proteus, This Perfect Day's Chip—by resisting the freedom-crushing "benevolence" of the Well-Doer, the World Controller, Epicac XIV, Uni. The argument of Asimov's conte mécanistique thus fails to convince the reader—this reader, at any rate—that the robot knows best, that the freedom to work out our own destinies is well sacrificed to rule by the machine, however efficient, however benevolent.

And, indeed, one may suspect that, at whatever level of consciousness, Asimov too shared the sense of human loss entailed by robotic domination. The last lines of the last story of I, Robot are especially revealing in this regard. When Susan Calvin asserts that at last the Machines are in complete control of human destiny, Byerley exclaims, "How horrible!" "Perhaps," she retorts, "how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!" This, of course, is orthodox Calvinism (Susan-style) and the book's overt message; but then Asimov adds a coda: "And the fire behind the quartz went out and only a curl of smoke was left to indicate its place." The elegiac note, the archetypal image of the dying fire, conveys a sense of irretrievable loss, of something ending forever. Fire, the gift of Prometheus to man, is extinguished and with it man's role as the dominant species of the earth. The ending, then, is, appropriately, dark and cold.

If my reading of Asimov's robot stories is correct, he has not avoided the implications of the Frankenstein complex, but has, in fact, provided additional fictional evidence to justify it. "Reason," "That Thou Art Mindful of Him," "The Evitable Conflict"—as well as the more overtly dystopic story "The Life and Times of Multivac" from The Bicentennial Man—all update Frankenstein with hardware more appropriate to the electronic age, but prove, finally, no less menacing than Mary Shelley's Gothic nightmare of a technological creation escaping human control. Between her monster and Asimov's machines, there is little to choose.
