
Frankenstein: Giving Voice to the Monster

July 10, 2017

Frankenstein’s monster: misshapen creature, prone to confusion and violent outbursts

Frankenstein’s little boy: full of promise, abandoned and left to fend for himself

The possibility that artificial creatures, products of human hands, might achieve sentience and take on an active role in society is an age-old conception in world cultures, the subject of myths, stories, moral fables, and philosophical speculation.

In Greek mythology one finds the tale of Pygmalion, who carves a statue named Galatea with whom he falls in love and who eventually comes to life.  In Jewish folklore there are stories of the Golem, an artificial creature animated with surprising results.  Norse legends include reports of clay giants able to move of their own accord.  An ancient Chinese text describes the work of Yan Shi, who in the 10th century B.C. crafted a humanoid figure with lifelike qualities.

Both Plato and Aristotle draw upon the myth of the statues of Daedalus, mythical creations that could move, perform certain kinds of work and would wander off on their own unless tied down by a rope.  In the Politics Aristotle uses the metaphor in his defense of slavery:  “… if every tool could perform its own work when ordered, or by seeing what to do in advance, like the statues of Daedalus in the story…master-craftsmen would have no need of assistants and masters no need of slaves.”

World literature, not to mention modern science fiction, contains a great many stories of this kind, ones that are often used to shed light upon basic questions about what it means to be alive, what it means to be conscious, what it means to be human, and what membership in society entails.

Within this tradition of thought Mary Shelley’s Frankenstein plays a pivotal role.  Within popular culture, of course, its story has spawned an astonishing range of novels, stories, movies, television programs, advertisements, toys, and costumes, most of which center upon images of monstrosity, horror and the mad scientist.  Beyond these familiar manifestations, however, the novel offers a collection of deeply unsettling reflections upon the human condition, ones brought into focus by modern dreams of creating sentient, artificial, humanoid beings.

In direct, provocative ways the book asks:  What is the relationship between the creator and the thing created?  What are the larger responsibilities of those who seek power through scientific knowledge and technological accomplishment?  What happens when those responsibilities are not recognized or otherwise left unattended?

Questions of this kind concern particular projects that involve attempts to create artificial devices that exhibit features and abilities similar to or even superior to ones associated with human beings.  In a larger sense, however, the problems posed by the novel point to situations in which scientific technologies introduced into nature and society seem to run out of control, to achieve a certain autonomy, taking on a life of their own beyond the plans and intentions of the persons involved in their creation.

As she addresses issues of this kind, the genius of Mary Shelley is to give voice not only to Victor Frankenstein, his family, friends and acquaintances, but to the creature that sprang from his work and after a time learns to speak, read and form his own thoughts, eager to speak his mind about his situation.  I do not know whether this is the first time in world literature that one finds a serious dialogue between an artificial creation and its creator.  But first instance or not, it is a literary device that Shelley uses with stunning effectiveness.

At their climactic meeting high in the Alps, the creature’s observations and arguments painfully articulate the perils of unfinished, imperfect, carelessly prepared artifice, suddenly released into the world, emphasizing the obligations of the creator as well as the consequences of insensitivity and neglect.

“I am thy creature, and I will be even mild and docile to my natural lord and king if thou wilt also perform thy part, that which thou owest me.”

“You propose to kill me.  How dare you sport thus with life?  Do your duty towards me, and I will do mine toward you and the rest of mankind.  If you will comply with my conditions I will leave them and you at peace; but if you refuse, I will glut the maw of death, until it be satiated with the blood of your remaining friends.”

The creature goes on to explain that his greatest desire is to be made part of the human community, something that has been strongly, even brutally, denied him to that point.  His stern admonition to Victor is to recognize that the invention of something powerful, ingenious, even marvelous cannot be the end of the work at hand.  Thoughtful care must be given to its place in the sphere of human relationships.

At first Victor recoils and bitterly denounces the creature’s demands that he recognize, affirm and fulfill his obligations.  But as the threat of violent revenge becomes clear, Victor finally yields to the validity of the argument.  “For the first time,” he admits, “I felt what the duties of a creator towards his creature were, and that I ought to render him happy before I complained of his wickedness.”

Following that flash of recognition the story careens toward a disastrous conclusion.  Within the wreckage that envelops both Victor and his creature, the book reveals a crucial insight, one ahead of its time and with profound implications for similar projects in the future.  It can be stated succinctly as follows:  The quest for power through scientific technology often tends to override and obscure the recognition of the profound responsibilities that the possession of such power entails.

Put even more simply:  The impulse to power and control typically comes first, while the recognition of personal and collective moral obligation arrives later, if ever at all.  Within that unfortunate gap – between aspirations to power through science and belated recognitions of responsibility – arise generations of monstrosity.

Mary Shelley’s insights on these matters were prescient, well ahead of their time, and foreshadow some of the most ominous hazards and most ghastly calamities found along the path to modernity from the early 19th century up to the present day.

There are many examples in this vein one could cite.  An important contemporary case involves the revelations of Edward Snowden about the runaway computing power within America’s national security state.  Drawing upon his direct observations and investigations on the job, Snowden went public with what he had learned about the ongoing destruction of privacy and spiraling methods of oppression within networks of digital computing and communication.  As he released these previously well-guarded secrets, he fled the USA, saying that he would probably never be able to live in his home country again.  To use the familiar metaphor, Snowden was giving voice to the creation of a high-technology Frankenstein, a boundless horror in whose construction he had participated, one already running wild, ravaging the cherished foundations of the American Republic.

As former US Attorney General Eric Holder commented, Snowden “actually performed a public service by raising the debate that we engaged in and by the changes that we made…”  Nevertheless, Holder insisted, the benevolent whistleblower must return home and stand trial for committing a long list of serious federal crimes.  So it goes for “public service” in the era of complex, menacing information systems.

Another example of Frankenstein’s problem, one from an earlier period, exhibits the mentality that entices creative people to pursue power from scientific technology right away and to ponder the larger implications only later.  Several months after the dropping of atomic bombs on Hiroshima and Nagasaki, physicist Robert Oppenheimer, leader of the Manhattan Project, gave an address to the American Philosophical Society in Philadelphia.  His remarks focused upon the tremendous enduring power of the forces that nuclear science and technology had unleashed:

“… we have made a thing, a most terrible weapon, that has altered abruptly and profoundly the nature of the world. We have made a thing that by all standards of the world we grew up in, is an evil thing. By so doing, by our participation in making it possible to make these things, we have raised again the question of whether science is good for man,… of whether it is good to learn about the world, to try to understand it, to try to control it, to help gift to the world of men increased insight, increased power.”

He continued, saying,

“Because we are scientists, we must say an unalterable yes to these questions. It is our faith and our commitment, seldom made explicit, even more seldom challenged that knowledge is a good in itself. Knowledge and such power as must come with it.”

In later years, Oppenheimer would agonize about the grave problems that nuclear arms pose for world societies.  But his position in Philadelphia amounted to a defense of what, roughly speaking, Mary Shelley describes in Victor Frankenstein’s proud transition from the exotic but unfulfilled dreams of alchemy to mastery of the intellectual and practical powers of natural science.

A stinging response to Oppenheimer’s attempt at self-justification appeared several months later in Lewis Mumford’s essay in The Saturday Review of Literature, “Gentlemen: You Are Mad!”  In a message he attributes to “the awakened ones,” Mumford exclaims:

“The madmen are planning the end of the world. What they call continued progress in atomic warfare means universal extermination, and what they call national security is organized suicide…”

His advice?

“Stop making the bomb.  Abandon the bomb completely. Dismantle every existing bomb. …Treat the bomb for what it actually is: the visible insanity of a civilization that has ceased to worship life …”

Of course, in predicaments like those of Oppenheimer and Snowden the material products created through scientific knowledge cannot (yet) speak or reflect upon the moral dilemmas their invention entails.  Thus, it often falls to social critics like Mumford as well as to persons labeled “whistleblowers” to give voice to the issues involved.  Within this category one could also include Reverend John Antal, a Unitarian Universalist minister who resigned as an Army chaplain in 2016 to protest the United States’ use of killer drones, increasingly “autonomous” devices that have slaughtered more than 7,000 persons in the Middle East, most of whom the Pentagon conveniently labels “terrorists.”  Reverend Antal’s letter to President Obama warned that “The Executive Branch continues to claim the right to kill anyone, anywhere on earth, at any time, for secret reasons, based on secret evidence, in a secret process, undertaken by unidentified officials.”  Antal saw it as his duty to break the terrible silence that others have carefully guarded.

One could offer a great many historical and contemporary illustrations of what I would call “Frankenstein’s problem.”  An appropriate, highly practical, obviously troubling set of developments at present is found within a particular domain of scientific inquiry and application, a zone of works not all that dissimilar from the one the fictional Victor Frankenstein explored – today’s realm of advanced computerization, smart algorithms, artificial intelligence and robotics.  During the past several years there has been a steady stream of books, scholarly papers and reports on technology and the human prospect that contain some astonishing projections.  A short list of publications from computer scientists, business school academics and journalists includes:

Race Against the Machine, by Erik Brynjolfsson and Andrew McAfee;

Rise of the Robots, by Martin Ford;

Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat;

Smarter Than Us, by Stuart Armstrong;

The Glass Cage: Automation and Us, by Nicholas Carr;

The Technological Singularity, by Murray Shanahan.

Writings of this kind offer bold historical, technological and philosophical projections of a future in which artificial devices will meet or exceed human abilities and take over many of the activities and productive social roles that ordinary human beings have handled until now.  To investigate the unfolding prospects here, there is now a steady stream of rigorous studies produced by social scientists and business consultants in close touch with relevant fields of science, engineering and industry.

One widely cited study of this kind is “The Future of Employment: How Susceptible Are Jobs to Computerisation?” by Carl Benedikt Frey and Michael A. Osborne at the Engineering Sciences Department and Oxford Programme on the Impacts of Future Technology.  The researchers studied expected impacts of computerization on 702 detailed occupations, using plausible criteria to predict which kinds of gainful human activity were at risk from automation.

“According to our estimates around 47 percent of total US employment is in the high risk category. We refer to these as jobs at risk – i.e. jobs we expect could be automated relatively soon, perhaps over the next decade or two.

“Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are at risk.  These findings are consistent with recent technological developments documented in the literature. More surprisingly, we find that a substantial share of employment in service occupations, where most US job growth has occurred over the past decades are highly susceptible to computerization.”

Among those vulnerable to diminishing demand are high-level professionals in health care diagnosis and legal and financial services.  For example, much of the work that entry-level lawyers and paralegals used to do, reading through documents to find relevant pieces of language and evidence, can now be scanned, sorted and analyzed by computers.

In much the same vein, a widely cited report, “AI, Robotics, and the Future of Jobs” by Pew Research, polled several hundred people involved in the computer industry.  The study found that

“Half of these experts … , envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

“The other half of the experts who responded to this survey (52%) expect that technology will not displace more jobs than it creates by 2025. To be sure, this group anticipates that many jobs currently performed by humans will be substantially taken over by robots or digital agents by 2025. But they have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution.”

The logic of such arguments is puzzling.  If even high-level jobs in, say, law and medicine will be absorbed by smart systems, what’s to say that the new jobs envisioned would not be immediately threatened as well?  As often happens in such reports, the Pew study concludes on an obligatory note of optimism, a “point of agreement” among the experts polled:  “Technology is not destiny … we control the future we will inhabit. … Although technological advancement often seems to take on a mind of its own, humans are in control of the political, social, and economic systems that will ultimately determine whether the coming wave of technological change has a positive or negative impact on jobs and employment.”

Along with the stream of visionary books and research reports comes a steady stream of excited pronouncements that can be called the panic of the techno-cognoscenti.  During the past several years notable scientists, engineers and luminaries in the technology business sector have stepped forward to express distress at what they see as dire risks that research in A.I. presents to the human species overall.

In a BBC interview last year, Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race. …humans, limited by slow biological evolution, couldn’t compete and would be superseded by AI.”  “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

In a live exchange on the internet, Microsoft co-founder Bill Gates offered similar views.   “I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent….A few decades after that though the intelligence is strong enough to be a concern.”

In the same vein, British inventor Clive Sinclair recently told the BBC, “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.”  “It’s just an inevitability.”

Studies of and speculation about issues of this kind have inspired the creation of a collection of new research centers at leading universities.  Among them are the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute at MIT.  Taken together, the shelf of books on AI and robots, the systematic studies of the future of automation and employment, and the excited warnings about artificial devices superseding human beings as the key actors on the stage of world history are, in my view, a contemporary realization of the prescient concerns and warnings at the heart of Mary Shelley’s book – concerns and warnings about the headlong flight from responsibility.

The emerging literature on these topics, with its speculations about both philosophical and practical dimensions, is notable for the underlying disposition it reveals.  It’s worth noting that among those calling attention to the hideous “existential risks” – “summoning the demon,” as tech wizard Elon Musk calls it – vanishingly few are suggesting a halt to or moratorium upon such pathways of technological development.

Looking at the wisdom of the luminaries, there seems to be little if any willingness to open concerns about the robot apocalypse to a wider public debate beyond the techno-cognoscenti in their well-funded think tanks, no desire to include the very workers whose roles, abilities and livelihoods are predicted to become superfluous just around the corner.  Indeed, it seems that within the community of people talking about these matters and appearing to sound the alarm, the preferred way to confront the prospects posed by A.I. and advanced robotics is to fund more A.I. and advanced robotics.

A while back the Future of Life Institute circulated an Open Letter entitled “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of computer scientists and technology professionals.  It reads like an RFP that Victor Frankenstein and his colleagues at the University of Ingolstadt might have sent around to their friends in the emerging field of life and electricity.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.  We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial.”

This all-too-common attitude, characteristic of our “advanced” technological era, is light years behind the position taken by another member of the Geneva summer gathering of 1816 – Lord Byron.  It seems that Byron had followed closely the Luddite uprisings of the period and was strongly opposed to the new legislation and the dispatch of army troops to suppress the resistance of local spinners and weavers and to execute them if need be.  Byron’s speech to the House of Lords in 1812 shows a compassion and practical sensibility largely missing in today’s debates about the coming robot apocalypse.

“Had proper meetings been held in the earlier stages of these riots,— had the grievances of these men and their masters (for they also have had their grievances) been fairly weighed and justly examined, I do think that means might have been devised to restore these workmen to their avocations, and tranquillity to the country.”

The intelligentsia within prominent fields of science and technology today are involved in the varieties of wooden-headed moral abandonment that Mary Shelley’s book so painstakingly describes.  Let the new powers be realized as quickly as possible and released into the world.  As results in the environment and society become evident, we can assemble teams of mop-and-bucket ethicists and policy makers to clean up the mess.

It’s worth noting that in most early dramatizations of Frankenstein within 19th century stage plays in England and America, the creature usually did not speak, much less articulate the kinds of erudite arguments Mary Shelley carefully offers at the novel’s climax.  Along with the violent attacks on intended victims and unfortunate bystanders, the beast’s monstrosity in the theatre was signaled by hideous groans, growls and screams.  The same convention carried over into 20th century motion pictures, most notably James Whale’s Frankenstein (1931), with Boris Karloff as a being constructed from graveyard parts and pieces, equipped with a defective brain, unable to speak and easily provoked into fits of howling rage.  Thus continued a decades-long sequence of Frankenstein movies in which the creature is simply a menacing presence lurking in the shadows.

Fortunately, a number of recent depictions of Frankenstein and Frankenstein-like beings have moved beyond shock and horror, returning to Mary Shelley’s central concerns.  In today’s science fiction books, Hollywood films, and television series we often find humanoid creatures equipped with sentience, language, and insight into their condition.  By the same token, the human beings who begin as prideful creators and masters in such screenplays are sometimes able (or forced) to transcend those benighted roles.  For example, as he appears in early episodes of the TV program “Penny Dreadful,” Victor Frankenstein is a sensitive soul who assembles a body from parts of exhumed corpses and animates it with electric shock.  From its first signs of life, he treats the thing tenderly, as a father embracing a beloved son.  As they walk the streets of the city they eventually begin to converse about the world they share.  Only later, alas, do things go horribly wrong.

In other recent television series we see artificial, conscious humanoid beings as participants in social life, often as de facto family members, with strong albeit often troublesome relationships to Homo sapiens, including key roles within social, cultural and political institutions.  What such plot lines suggest is the domestication of Frankenstein’s predicament and a surprising proliferation of its problems.

A television series of this kind is the British-American program “Humans” and its inspiration, the Swedish series “Real Humans” (Äkta människor).  In both versions the artificial beings – called “hubots” in the Swedish version and “synths” in the British-American adaptation – are included as workers in organizations and act as service providers in everyday families.  Marketed as commercial products on display in sprawling showrooms, the androids exhibit different levels of awareness, different varieties of feeling, and different degrees of free will.  In both series, small groups of bots manage to escape their captivity and assigned roles, organizing within rebel bands to seek freedom and chart their own destiny.

Today’s stories of human/android relationships no longer feature the simple, two-dimensional roles and conflicts of traditional sci-fi robot and monster movies, but rather ones that are complicated, messy and even subtle.  Thus, in “Humans” the young teenage boy in the family feels a strong sexual attraction to the beautiful female robot employed as a domestic servant.  Episodes of this kind pose the question: Which kinds of actions, relationships and responsibilities are appropriate, and on what grounds?  Would deeds that are improper, immoral or illegal among “real” humans be permissible if the object of the activity were an artificial creature owned as property by a person or family?  It is not too great a leap to recognize that questions of this kind echo concerns that eventually brought the abolition of chattel slavery.

A particularly clever fictional television depiction of artificially intelligent, synthetic humans is part of the edgy, unsettling British television series Black Mirror.  In a segment entitled “Be Right Back” viewers are confronted with a classic theme: a love affair gone wrong.  As the story unfolds we get to know a young unmarried couple, Martha and her boyfriend Ash.  Early on Ash is killed in a car crash.  As she deals with the tragedy, Martha learns that she is pregnant by him.  Shortly after the funeral, a friend tells Martha about a special internet service that enables comprehensive access to everyone’s smartphone calls and online activity, including everything that Ash, an inveterate Net presence, said and did during his final years.  For a nominal monthly fee, the firm’s artificial intelligence programs can construct a realistic voice simulacrum of Ash, one that can conduct conversations, ask and answer questions, and share the deepest feelings of both parties involved.  Although reluctant at first, Martha eventually goes online and begins talking to the artificial “Ash,” who seems to be exactly the intelligent, soft-spoken, caring young man she loved.  Gradually he helps her overcome her grief at his loss.

The next step is, of course, the inevitable upgrade.  Martha learns that she can do more than enjoy Ash’s voice and personality over the Net: for a reasonable price she can purchase a full-sized, smoothly robotic, fleshy, lifelike Ash to bring into her home.  Why, it would be almost like having the real person there!  She spends the money and a truck delivers the new manufactured Ash in a box, ready to boot up.

Without revealing the details of the story, its climax clearly mirrors Mary Shelley’s concerns.  At first artificial Ash is just about everything Martha could possibly want.  But over time certain features in his mannerisms fall short of a convincingly human presence.  Martha finds herself sinking into what robot theorists call the “uncanny valley.”  Finally there is a troubled encounter between the two on a windblown cliff overlooking the sea.  What happens?

What we see next is a flash-forward scene several years later in the house where Martha lives.  She and her daughter, now perhaps six years old, are celebrating the girl’s birthday.  Although it is not the time of the week for their regular meeting, the girl asks if she can visit Ash and give him a piece of birthday cake.  Martha pulls down a ladder that leads up to the attic.  Sure enough, there is artificial Ash standing quietly in the corner.  He greets the girl pleasantly and they begin a father/daughter-like conversation.  It’s clear that Martha has stashed him up there by himself as a kind of throwaway consumer appliance that she can’t quite bear to send to the landfill.

In this new dramatization of the Frankenstein narrative it appears that ordinary people could well become involved with lifelike artifacts in ways that mirror the kinds of careless disregard and moral abandonment Mary Shelley’s book so painstakingly describes.  The delicately sly script of “Be Right Back” underscores the cruelty of the situation as we realize that Martha’s daughter’s wish to visit her artificial dad (who never eats anything) is really a ruse to get a second piece of birthday cake.

In sum, along with the generally cavalier attitudes found among today’s A.I. and robotics intelligentsia, new echoes of the Frankenstein tale in fictional portrayals demonstrate why Frankenstein is not only a fascinating and entertaining novel, but ultimately a searing work of prophecy.

 

* * * * * * * * * *

An earlier version of this essay was presented at the “Frankenstein’s Shadow” conference at the Brocher Foundation, Geneva, June 2016, on the occasion of the 200th anniversary of Mary Shelley’s writing of Frankenstein; or, The Modern Prometheus.

Feature image: Public domain https://commons.wikimedia.org/wiki/File:Frankenstein1910.jpg

Langdon Winner

Langdon Winner is a political theorist who focuses upon social and political issues that surround modern technological change. He is the author of Autonomous Technology, a study of the idea of "technology-out-of-control" in modern social thought, The Whale and The Reactor: A Search for Limits in an Age of High Technology, and editor of Democracy in a Technological Society. Praised by The Wall Street Journal as "The leading academic on the politics of technology", Mr. Winner was born and raised in San Luis Obispo, California. He received his B.A., M.A. and Ph.D. in political science from the University of California at Berkeley with a primary focus upon political theory. Langdon is Thomas Phelan Chair of Humanities and Social Sciences in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute in Troy, New York.

Tags: cultural stories, human evolution, robots