This article is an introductory piece for the book "Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society." The book is a dialogue between computer scientist and social critic Joseph Weizenbaum and Gunna Wendt.
The juxtaposition between the potential of technology and its actual performance can be quite discordant. Tools that promise to simplify tasks are used to automate people's work, devices that boast of their connectivity leave users feeling alienated, and machines that propel humanity into space are closely related to missile systems that could bring about human destruction. Joseph Weizenbaum eloquently captured this disconnection in his writings on the "contradictory role" of technology, noting: "Our venture into science and technology has brought us to the brink of self-destruction... while also providing many of us with unprecedented comfort and even self-fulfillment. Some of us have begun to think that this is not a fair trade after all."
As a computer scientist and professor at the Massachusetts Institute of Technology (MIT), Joseph Weizenbaum earned a well-deserved place in computer history for his program ELIZA and his role in the development of the programming language SLIP. However, what set Weizenbaum apart from his peers and colleagues was not his success as a computer scientist but his awareness of the impact of advancements in his field on the broader society. When he discovered that those around him were more interested in technology than in people, Weizenbaum embraced his role as an iconoclastic, even heretical figure — a counterpoint to the ideological embrace of technology.
For Weizenbaum, computers could not be separated from the social environment in which they exist. Thus, he rejected the title of computer critic, preferring to cast himself as a social critic. Like a magician revealing the secrets of his tricks, Weizenbaum warned computer users not to be deceived by mechanical magic, assuring them that they could understand how these machines operate in their lives. Weizenbaum insisted that computer programmers take responsibility for their creations, positioning himself against those he mocked as "compulsive programmers," the "artificial intelligentsia," and all those who refuse to think about the meaning and application of their work.
Over the course of his life, from 1923 to 2008, Joseph Weizenbaum witnessed significant social, political, and technological transformations. These experiences left an indelible mark on his worldview — particularly because he was not a passive observer but an active participant in these changes, above all where technological progress was concerned. Not only computers themselves but also the internet and the range of purposes to which computers are put came under Weizenbaum's impassioned analysis. The critiques Weizenbaum offered as a social critic and computer scientist have lost little of their power over time. This book showcases the richness of Weizenbaum's thought and his belief that issues involving computers are too important to be left solely to computer scientists.
After all, computers and technology still play a contradictory role in society.
From Berlin to Michigan to Massachusetts
Joseph Weizenbaum was born on January 8, 1923, in Berlin. Although his father had an Orthodox Jewish background, Weizenbaum's upbringing was not particularly religious, even though he and his brother received religious education. While the Weizenbaum family managed to escape Germany before the worst of the Nazi atrocities — leaving for the United States in 1936 — the experiences of growing up amid the rise of fascism left an impression on Weizenbaum that would follow him throughout his life, no matter where he went.
Because his family left Germany when he was only 13 years old — in fact, they left on his 13th birthday — Weizenbaum was only beginning to grasp the changes taking place in his birthplace. At the time the family left Germany, the main group being persecuted by the Nazis, at least from his perspective, was political opponents rather than those of Jewish descent. However, the rise of the Nazis and their anti-Semitic policies did have a direct impact on Weizenbaum's life, as Nazi laws forced him to leave the public school he had been attending and enter a Jewish boys' school. Berlin, and the world around Weizenbaum, increasingly became an unsafe place. The policeman on the corner was transformed from someone a child could turn to for help into someone a Jewish child had to avoid. Berlin became a city whose homes were frequently visited by stormtroopers, where terrible things happened in back rooms, and where members of the Hitler Youth lurked on the streets, waiting to attack Weizenbaum on his way home from school — yet to the young boy these events were merely evidence that "we were just living in a cruel society." Although Weizenbaum was not clear on the specific reasons for his family's departure at the time they emigrated, he was still aware that his family "had just escaped something evil." He realized with growing unease that many of his former friends and classmates had remained in Berlin, even as he went first to England and then across the Atlantic.
Upon arriving in the United States, Weizenbaum quickly became aware of the differences between himself and his peers. Immigration was not something he had prepared for — it was sudden and forced — and thus he arrived in America without speaking English. After being educated in German public schools and then a Jewish school, Weizenbaum found that he had to quickly close the gap between himself and his new peers. He not only had to learn how to live in a new country but also the history of that country. For Weizenbaum, however, being different became a source of strength as he adapted to life in Detroit, Michigan. He still struggled with English, but his interest in mathematics grew rapidly, for it was a subject he could understand: mathematics is a universal language. It was Weizenbaum's love for mathematics that ultimately led him to computers.
After graduating from high school, Weizenbaum attended Wayne State University in Detroit to study mathematics, earning both a bachelor's and a master's degree — and although his studies were interrupted by service as a meteorologist in the Army Air Corps during World War II, he returned to them after the war ended. Weizenbaum came to know computers while the field was still in its infancy, as he had the opportunity to "assist in the construction of computers" while at Wayne State University. The personal computers of the twenty-first century bear only a slight resemblance to the machines Weizenbaum helped build there. In fact, the computer he helped construct "filled an entire auditorium" and was nicknamed "Whirlwind," while the next computer was named "Typhoon." After graduating from Wayne State University, Weizenbaum worked briefly in the private sector, helping Bank of America develop an automated bookkeeping and check-processing system called ERMA (Electronic Recording Machine-Accounting) while employed by General Electric. In 1962, he left the corporate world when MIT offered him a visiting professorship.
It was at MIT that Weizenbaum created ELIZA, and it was there that he grew increasingly concerned with the impact of computers on society.
ELIZA
While at MIT, Joseph Weizenbaum developed the computer program that secured his place in computer history: ELIZA. The name was chosen for this language analysis program "because, like the Eliza of Pygmalion fame, it could be taught to 'speak' increasingly well." The program allowed a person to communicate with a computer in natural, conversational language, and the computer's responses could give the impression that it understood what was being said and was genuinely replying. A person "conversing" with ELIZA would type their input in natural language on a typewriter connected to the computer running the program. After they entered their text, the computer would generate a response and print it on the same machine.
The early scripts for ELIZA operated in a way similar to the methods used by Rogerian therapists — that is, ELIZA would typically respond to the user's input by reflecting the words of that input back as a question. Indeed, for this incarnation of ELIZA — sometimes referred to as DOCTOR — human users were instructed to interact with the program as if they were speaking to a psychiatrist. The reason for this framing was that it made ELIZA appear to be genuinely participating in the conversation, since "psychiatric interviews are one of the few examples of dyadic natural language communication in which one participant can freely adopt the pose of almost complete ignorance of the real world." The result was that ELIZA appeared more aware than it was — an effect produced by human conversation partners projecting their sense of being understood onto the program. When ELIZA responded with "Tell me more about your family," the question did not seem to arise from ignorance of family matters, but rather the opposite.
The ELIZA program was able to simulate participation in a real discussion by executing "transformation rules" that were applied when the program detected certain keywords in the text. If ELIZA received a message containing one of these keywords, the program would decompose the text string containing that keyword and reassemble it in a way that prompted further responses. Following these rules, "any sentence in the form of 'I am BLAH' could be transformed into 'How long have you been BLAH,' regardless of the meaning of BLAH." Users would input comments and statements containing keywords, and ELIZA would take these sentences, follow the rules of the script — for instance replacing first-person pronouns with second-person pronouns — and produce an appropriate response. When ELIZA detected no suitable keyword, the script was designed to reply with "a content-free remark or, under certain conditions, an earlier transformation."
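To make these mechanics concrete, here is a minimal sketch in Python of the kind of keyword detection, pronoun reflection, and transformation rules described above. It is an illustration only, not Weizenbaum's original implementation (which was written in MAD-SLIP); the particular keyword patterns, reflection table, and canned fallback remarks are assumptions made for the example.

```python
import random
import re

# First/second-person reflections applied to the captured fragment
# (an illustrative subset, not Weizenbaum's full table).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Keyword-triggered transformation rules: each regex captures the "BLAH"
# part of the input, and the templates reassemble it into a question.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why do you say your {0}?"]),
]

# Content-free remarks used when no keyword is detected.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence: str) -> str:
    """Apply the first matching transformation rule; otherwise fall back."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(respond("I am unhappy about my job"))  # e.g. "How long have you been unhappy about your job?"
print(respond("It rained all day today"))    # no keyword: a content-free remark
```

Even a toy script like this shows why the effect works: the program never interprets "unhappy" or "job," it only decomposes and reassembles strings, yet the resulting question reads as if it did.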
In the first paragraph of Weizenbaum's article about ELIZA, he acknowledges that the computer can seem to perform magic; however, "once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible." Following this remark, Weizenbaum explains step by step how ELIZA works — showing that its performance is the result not of magic, nor of any true understanding on the program's part, but of clever programming. Of course, to make conversation partners believe that it understands them, ELIZA relies largely on things outside its script: in conversing with ELIZA, "as previously mentioned, the speaker drapes a veneer of seeming understanding over ELIZA's responses." Even when humans know they are merely exchanging text with a computer, even when they know the script and program that produce specific answers, the magic of ELIZA demonstrates "how easy it is to create and maintain the illusion of understanding."
Weizenbaum emphasized that ELIZA does not actually understand the information it receives — even though the responses generated by its script give the opposite impression. Once the "conversation" begins, a primary goal of the ELIZA program is to keep the discussion going. It does this by covering up "any misunderstandings of its own" and relying on the good faith of its human interlocutors, so that they do not walk away prematurely when confronted with hiccups revealing that the program does not truly understand what they have typed. Genuine understanding can be difficult even between people, who come from different backgrounds and may therefore have very different frames of reference. For Weizenbaum, the key is that humans "understand each other within acceptable limits," whereas programs like ELIZA can only "symbolically process these ideas" — which is not proof of understanding but merely evidence of a successfully executed script.
Even though Weizenbaum was skeptical about the extent to which two people can truly understand each other, as ELIZA's creator he was confident that he fully understood the program — and was thus quite surprised by the ways others seemed to misunderstand it. As Weizenbaum wrote, those "who knew very well that they were conversing with a machine soon forgot that fact," with some even "asking to be allowed to converse with the system in private, and, after conversing with it for a while, insisting, no matter how I explained, that the machine really understood them." Furthermore, the degree to which ELIZA could successfully simulate the work of a Rogerian psychiatrist left many psychiatrists impressed, with some even suggesting that the program could be used with real patients. Weizenbaum was puzzled by the reactions to ELIZA, and he found himself disturbed by certain trends emerging elsewhere in computer science, such as the tendency to describe humans as if they were computers and to characterize the human brain as "merely a meat machine."
What was clear to Weizenbaum was that computers had become powerful tools in people's lives, and that "we have permitted technological metaphors... to so thoroughly pervade our thought processes that we have finally abdicated to technology the very duty to formulate questions." The rise of this technological metaphor is partly due to computer scientists evading responsibility for what they create, while the proliferation of computers has allowed the metaphor to spread widely among a public that does not fully understand how computers work.
After the success of ELIZA, Weizenbaum turned to addressing these challenges — work that was more an exercise in "social criticism" than in "computer criticism." Weizenbaum gradually became a prominent critic of the technological metaphor, even though his own professional home was one of the very places from which that metaphor was disseminated.
About "Computer Power and Human Reason"#
Joseph Weizenbaum's book "Computer Power and Human Reason: From Judgment to Calculation" contains a wealth of content: an introductory lecture on the basic workings of computers, a popular presentation of mathematical principles in computer science, attempts to unveil the mysteries of computers, ethical challenges for those working in the field of computing, and a firm critique that just because computers can do something does not mean they should do it. Although the book addresses mathematics and science, even warning unfamiliar readers that they may find these sections difficult, it is a book that does not seek to engage solely with an academic audience. In any case, "Computer Power" is not the first book to denounce the impact of technology on society. The historian and prominent technological critic Lewis Mumford clearly influenced Weizenbaum's thinking; however, a key factor that made Weizenbaum a technological critic is — unlike Mumford — that Weizenbaum was actually a computer scientist.
At the beginning of "Computer Power and Human Reason," Weizenbaum explains his technical credentials and expresses how his experiences dealing with computers prompted him to write a book so critical of these machines. He begins the book with ELIZA, but unlike the extensive technical details he explained to a scientific audience in his article, in "Computer Power," Weizenbaum tells the story of ELIZA in a way that highlights his own surprise at the reactions it elicited. Weizenbaum notes that he was astonished that practicing psychiatrists genuinely believed the program had therapeutic potential, acknowledging that the ease with which people invested emotions in communicating with computers shocked him, and emphasizing his surprise at how many in his field seemed to believe that ELIZA represented a program capable of truly understanding the prompts it received in natural language. However, Weizenbaum did not simply dismiss these odd reactions. Rather, as he stated, these experiences "gradually led me to believe that my experience with ELIZA was a symptom of deeper issues."
Weizenbaum emphasized that computers themselves are not the problem; rather, they embody a long-standing and dangerous social tendency to view humans in increasingly mechanized ways. In Weizenbaum's view, a debate was erupting in which "on one side are those who simply say that computers can, should, and will do everything, and on the other side are those like myself who believe that the things computers should do are limited." The word "should" is particularly important to Weizenbaum's argument, as it shifts the focus of the discussion from what functions computers can have to whether they ought to be built to perform those functions in the first place. For Weizenbaum, this is a question of "the appropriate place of computers in the social order," which also implies that computers can have an inappropriate place. Though Weizenbaum was a respected professor at MIT, he did not hesitate to point out that "science can also be seen as an addictive drug," as "with increasing dosages, science has gradually been converted into a slow-acting poison."
What distinguishes computers from other tools people use is the degree of autonomy these machines possess: once they are switched on, they can operate without further human control. Clocks (an example Weizenbaum borrows from Mumford) are an important early instance of such autonomous machines; computers possess autonomy too, but their functions extend far beyond keeping time. The significance of these machines lies in their operation according to models of certain aspects of the real world — for example, dividing a day into 24 hours, each hour into 60 minutes, and each minute into 60 seconds. Gradually, while mimicking certain aspects of reality, these automatic machines instill their model of reality in the humans who originally built them. The model comes to replace what it simulates. Thus, with the support of technology, "experiences of reality must be expressed in numerical form to appear reasonable in the eyes of common sense."
Computers emerged during and immediately after World War II, and in the years following the war the military, industrial, and commercial sectors came to view them as necessary tools for solving a range of problems that would be impossible for humanity to address without significant technological assistance. As miniaturization allowed smaller computers to fit into everything from offices to airplanes, a steady transition took place in which computers came to be seen as an indispensable part of emerging modern society. The ultimate result of this trend is that returning to previous ways of doing things became almost unimaginable. However, just because computers are considered indispensable does not mean they truly are. Rather, what happened is that computers became "a necessary condition for society's survival, in a form that the computer itself played a role in shaping."
While computers may have appeared poised to fundamentally change society because of their close ties to military needs, in Weizenbaum's estimation computers "are used to protect America's social and political institutions. This at least temporarily helps them withstand the pressures of significant change." In fact, in Weizenbaum's view, computers enabled the conservation of the social, political, and economic status quo, even as their widespread dispersal allowed a mechanistic worldview to take root in more areas — while computers were also used to help fuel the explosion of post-war consumerism. Whether or not computers were necessarily the right solution, they were welcomed in a range of fields "for reasons of fashion or prestige" — if one's competitor, whether a business rival or a rival superpower, had a computer, one could not afford to fall behind. While computers, as their name suggests, excel at computational tasks, there remain social challenges that lie beyond computational capacity — "the effectiveness of technology is a question involving technology and its subjects." The worship of computers, however, is merely an adulation of "technology" that tends to overlook "its subjects." Yet even in a book published in 1976, Weizenbaum recognized that computers had become intricately woven into society, and thus it is important to acknowledge that "the new ways of acting created by society often eliminate the possibility of acting in old ways." Computers rely on specific types of information, excel at specific types of tasks, and are guided by specific social, political, and economic forces — although computers may be portrayed as having opened many doors, Weizenbaum emphasized that they have also closed many.
One particular issue posed by computers is that, unlike simple tools, many of their users know very little about how the machines actually work. One reassuring aspect of computers is their regularity, the way they adhere to routine — but "if we rely on such a machine, we become servants of a law we do not know, and hence of a capricious law. This is the root of our distress." To alleviate this distress, in "Computer Power and Human Reason" Weizenbaum delves into how computers work, illustrating complex computational procedures through relatively simple games and emphasizing that computers strictly adhere to the rules of the game. While Weizenbaum's discussions of "where the power of computers comes from" and "how computers work" may not be enough to turn a novice into a programmer, these chapters help clarify what actually goes on inside a computer. Weizenbaum explains the Turing machine in dense prose that tends toward the technical and elaborates on how computers can stack programs on top of one another, noting: "The alphabet of the machine language of all modern computers consists of a set of two symbols, '0' and '1.' However, their vocabularies and transformation rules vary greatly... computers are superb symbol manipulators." Yet for all their efficiency, computers must still follow their programmed rules and rely on their particular languages; a program's ability to execute its script successfully does not mean it has any real understanding of the world. In fact, as Weizenbaum states, "one real reason programming is difficult is that, in most cases, computers know nothing about those aspects of the real world that their programs are supposed to handle."
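As a rough illustration of the rule-bound symbol manipulation described here, the following Python sketch runs a tiny Turing-machine-style rule table over a tape of '0's and '1's. The particular machine, one that appends a '1' to a block of '1's (incrementing a number written in unary), is an assumption made for the example, not one taken from the book.

```python
# A tiny rule-following symbol manipulator in the spirit of the Turing machine
# Weizenbaum describes: a finite table of rules over the two-symbol alphabet
# '0' and '1'. The machine knows nothing about numbers; it only reads, writes,
# and moves according to its table.

# Rule table: (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # move right across the existing 1s
    ("scan", "0"): ("1", 0, "halt"),    # first blank cell: write a 1 and stop
}

def run(tape: list, state: str = "scan", head: int = 0) -> list:
    """Blindly apply the rule table until the machine reaches the halt state."""
    while state != "halt":
        if head >= len(tape):
            tape.append("0")            # extend the tape with blank cells as needed
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

print(run(list("1110")))  # ['1', '1', '1', '1']: three 1s become four
```

The machine "adds one" only in the eyes of the human who interprets the tape; internally there is nothing but symbols and rules, which is exactly Weizenbaum's point about the gap between executing a script and understanding the world.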
Computers and the programs that run on them do not occur organically in nature. Rather, computers and their programs are physical manifestations of a series of choices made by humans. As a computer scientist and professor at MIT, Weizenbaum was very familiar with those responsible for the decisions that produced the computer systems the broader public would ultimately use. Although Weizenbaum was himself a programmer, he did not shy away from criticizing his peers. For Weizenbaum, there is a distinction between "professionals" like himself and what he called "compulsive programmers": the former "view programming as a means to an end, rather than an end in itself," and Weizenbaum likened them to professional gamblers. He described "compulsive programmers" as those who see interaction with computers as an end in itself — such individuals may work on many projects, but their primary goal is simply to keep working with computers, to "hack." Weizenbaum's portrait of these "computer bums" is unflattering, for they "exist only through computers, only for computers," and it sketches a stereotype of the computer programmer that has persisted to this day. What makes Weizenbaum's description particularly biting is that he was not imagining this type of person but describing a type he frequently encountered during his years as a computer scientist and professor at MIT.
In Weizenbaum's estimation, the computer's allure for the "compulsive programmer" stems from a fascination with, and adoration of, the power on display inside computer systems. While "the pursuit of control is inherent in all technology," computers provide a space in which a skilled programmer can joyfully seize control. For the "compulsive programmer... life is merely a program running on a giant computer," and thus "every aspect of life can ultimately be explained in programming terms." For Weizenbaum, the danger is that a growing share of those engaged in computer work are such "compulsive programmers," whose loyalty to computers has come to outweigh any other value. However, Weizenbaum attributes this not to any evil but to a kind of vacuous irresponsibility among certain programmers — as well as other modern scientists and technical experts — who mistake their scientific and technical means for ends in themselves. Although "compulsive programmers" may not lack skill, this "skill... is aimless, even nihilistic. It is simply disconnected from anything other than the tools it happens to be exercised upon." Perhaps the key detail in Weizenbaum's description of the "compulsive programmer," however, is that this is a figure who believes that all the complexity of the world can be simplified to the point of being captured by a computer program.
In some ways, the computer is a stage on which programmers can write scripts and then have them performed. Computers excel at executing scripts meticulously, following precise rules, and thus by examining the code they follow one can understand what computers are doing. The human situation is much trickier. Admittedly, humans process and respond to vast amounts of information, and people's behavior follows certain "laws that science can discover and formalize within certain scientific frameworks." Nevertheless, Weizenbaum expresses dissatisfaction with the idea that all human wisdom and understanding can be reduced to rules fitting such frameworks — even though belief in the existence of such rules has underwritten the confidence of some artificial intelligence (AI) researchers. After all, as Weizenbaum asks: what kind of equipment would a machine need to "think about human issues like the disappointments of adolescent love"? Yet because the machine exists, the theories held by computer programmers have a quality that makes them more than mere text: "a theory written in the form of a computer program is both a theory and, when placed on a computer and run, a model to which the theory applies" — the computer provides a stage on which the script is performed. However impressive the results a computer presents, Weizenbaum warns that "a model is always a simplification, an idealization of what it is meant to simulate." Unfortunately, the complexity of computers and the inherent simplification of models often combine to produce widespread misunderstanding of what computers can do and have done.
"Computers have become a source of a truly powerful and often useful metaphor," yet "the public's embrace of the computer metaphor is based merely on the most vague understanding of a difficult and complex scientific concept." For Weizenbaum, the prevalence of this metaphor represents a dangerous trend, as those who do not fully understand how computers work gradually come to believe that everything in the world can be turned into a computer model. The computer metaphor allows the ideology of "compulsive programmers" to transcend those without programming experience, making them susceptible to the advice of those celebrating the latest technological achievements of computers. The result is that "the metaphor of computers becomes another lighthouse, under whose light people will seek answers to pressing questions, but only in its light." Of course, certain questions are indeed well-suited to be solved using computational methods — some of which even seem to demonstrate some achievement of human intelligence, such as chess skills — but Weizenbaum emphasizes that this victory is related to the computer's ability to perform calculations rapidly and execute logical programs. For some computer scientists, the types of problems computers excel at solving are almost viewed as synonymous with the types of problems humans attempt to solve, but for Weizenbaum, "it is precisely this unreasonable demand for universality that has lowered their use of computers, computing systems, programs, etc., from the status of scientific theory to that of metaphor."
Weizenbaum's personal experience with the misunderstandings surrounding the computer metaphor is bound up chiefly with his experiences with ELIZA. To get a computer to do something, you must tell it what to do. This does not necessarily mean the computer understands what it is being told; it simply means its script allows it to execute a specific command. Humans excel at understanding "communication expressed in natural language," whereas computers require "the precision and clarity of ordinary programming languages." Although Weizenbaum's ELIZA can respond to prompts in natural language, the program itself does not understand what is being said to it — it is merely following a script — and it is among those unfamiliar with how computers work that this illusion of understanding proves most powerful.
ELIZA has many characteristics, but intelligence is not one of them. For Weizenbaum, ELIZA demonstrates how eager people are to attribute intelligence to machines that do not warrant it — an inclination he observed with particular force among some of his colleagues in the field of artificial intelligence, whom he referred to as "the artificial intelligentsia." While Weizenbaum acknowledges that AI scientists have created programs capable of performing many tasks, he views as arrogant their tendency to treat any shortcoming of artificial intelligence as merely something "found in the programmatic limitations of specific systems." Those unfamiliar with computers have some excuse for being captivated by ELIZA's performance, but what is the excuse of the "artificial intelligentsia"? Throughout his life, Weizenbaum witnessed tremendous leaps in computer capability and recognized that what is impossible for computers today may be possible tomorrow. But the question has increasingly shifted from what computers can do to what they should do — whether there are properly human purposes that are simply unsuitable for machines.
"Humans are not machines... computers and humans are not of the same genus," is Weizenbaum's sharp rebuttal to what he perceives as beliefs that have allowed "the grand fantasies of artificial intelligence to develop." For Weizenbaum, intelligence is a complex and difficult concept that cannot be simply simplified. Therefore, attempts to ultimately define intelligence, such as IQ tests, are doomed to capture at most only a glimpse of intelligence. It is clear that computers can succeed in feats that seem to indicate intelligence, but this also oversimplifies the matter. A computer may win at chess, but that does not mean it can change a baby's diaper. For Weizenbaum, these are "incommensurable" questions of vastly different intelligences. Computers excel at tasks involving quantification, but for Weizenbaum, humans possess many things that are fundamentally unquantifiable. Weizenbaum realizes that members of "artificial intelligence" may view this criticism as a scientific challenge, and they may attempt to meet this challenge by creating more complex machines. However, for Weizenbaum, this is not a matter of academic dueling — it is a moral issue — because "the question is not whether such things can be done, but whether it is appropriate to delegate this human function to machines."
Much about the human condition remains unknown — many aspects of being human can only be learned through the honing of human experience — and these aspects are not easily quantified or programmed. If computers or robots ever do reach such maturity that they exhibit an intelligence truly comparable to that of humans, their intelligence will nonetheless be quite "alien," for it will have formed under different social conditions and out of a very different range of experiences. Beyond intelligence there are the matters of emotion and the unconscious, which do not submit easily to calculative and cold logic — thus Weizenbaum warns against hubris, for "some things are beyond the full understanding of science." This sentiment serves both as a warning to his scientific colleagues and as a reminder to the broader public to remain vigilant against claims of omnipotence. For Weizenbaum, it is fundamentally linked to the conviction that some tasks are simply unsuitable for computers: "since we do not now have any way of making computers wise, we ought not now to give computers tasks that demand wisdom."
Weizenbaum's warnings came at a time when many tasks unsuitable for computers were already being delegated to them. When Weizenbaum was writing "Computer Power and Human Reason," computers had already become a common feature of businesses and large organizations, even if they had not yet become fixtures in every household. Although these machines were initially sold on the promise that they would be helpful, the situation had become one in which they "are both beyond the understanding of users and have become indispensable to them." What was supposed to provide assistance had turned into something that renders people helpless. This predicament is particularly dangerous for those unfamiliar with the inner workings of computers: led to believe that computers are infallible, they feel their agency and responsibility usurped by machines. However, as Weizenbaum fiercely reminds his readers, "the myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!" The world envisioned by the celebrants of the computer metaphor is depicted as inevitable, and people who feel powerless before the machines begin to doubt whether voicing their opposition is meaningful at all. Rather than play the part of King Canute commanding the tide, they let themselves be swept along with it.
However, Weizenbaum refuses to accept the comfort of complacency. The rhetoric of science and of the "technological intelligentsia" presents itself as logically sound, but for Weizenbaum it is "instrumental reason rather than authentic human rationality." In the worldview concocted with the help of the computer metaphor, the immense complexity of the world is transformed into something "programmed": people speak of "inputs" and "outputs," feedback loops, variables, parameters, processes, and so on, until every connection to a concrete context has been abstracted away. This is a recipe for passivity, indifference, and impotence — for human life becomes merely a data problem to be fed into machines and analyzed. Overreliance on "instrumental rationality," and on the technologies that best suit such reasoning, has become a "superstition surrounded by black magic. And only the magician has the right to enter." As a successful computer scientist, Weizenbaum himself is a figure who could join these "magicians," but he is not taken in by instrumental rationality. Indeed, Weizenbaum knows that his dissenting views will be regarded by technical experts as "anti-technology, anti-science, and ultimately anti-intellectual," but he is emphatic: "I argue for rationality. But I argue that rationality may not be separated from intuition and feeling. I argue for the rational use of science and technology, not for its mystification, let alone its abandonment. I urge the introduction of ethical thought into science planning. I combat the imperialism of instrumental reason, not reason itself."
Weizenbaum's call for a renewed ethics sounds not unlike the cry of Cassandra, nor does he forget that such cries are often ignored. Science and technology are not a new force in human civilization, but the scale of their impact has grown enormously — so much so that the power they confer has exceeded what can be safely controlled and managed, and is steadily eroding the ability to choose different paths. Many people enjoy at least the superficial benefits of computer-driven technological advancement and hesitate to give these devices up, which is a serious problem, but Weizenbaum holds that "ethics, fundamentally, is about relinquishment." If the powers of technology and science merely turn humans into entertainment robots, then what is the point? In a world increasingly saturated with computers, what reason is there to believe that serious social and political problems persist merely for lack of computing power? If Weizenbaum calls for relinquishing computers in certain cases, it is because embracing computers in every case has meant relinquishing humanity. As he reiterates in his writings, "there are some human functions that should not be replaced by computers. This has nothing to do with what computers can or cannot do. Respect, understanding, and love are not technical issues."
A keen and powerful sense of responsibility drove Weizenbaum to publish provocative writings about computers and about those enchanted by these machines. Joseph Weizenbaum was a computer scientist and a teacher of computer scientists, and he hoped that the message of "Computer Power and Human Reason" would resonate with his peers and colleagues. In Weizenbaum's view, "scientists and technical experts bear a particularly heavy responsibility because of their capabilities, a responsibility that cannot be cloaked in slogans like technological necessity." It is therefore crucial, Weizenbaum believed, for scientists and technicians to think about the consequences of their actions, to rediscover their "inner voice," and, most importantly, to "learn to say 'no'!"
The Need for Responsibility
Weizenbaum's social criticism is haunted by memories of his childhood escape from the Nazis. In his work, the figures associated with fascism appear not as terrifying monsters or inexplicable evil giants, but as the all-too-real "good Germans" — those who professed ignorance of the horrific events occurring around them. Adolf Hitler and the Nazi leadership receive little attention in Weizenbaum's writing; his focus is on the figure of the "good German," who feigned ignorance of the terror unfolding nearby. The excuse that the Nazi regime's careful organization kept the "good Germans" in ignorance does not convince Weizenbaum; rather, he holds that "the real reason the good Germans did not know is that they never felt responsible to ask what had happened to their Jewish neighbors, whose apartments suddenly stood empty." After all, when Weizenbaum's family fled Berlin, their own apartment suddenly stood empty; and what became of all the apartments left empty when their residents disappeared for far more terrible reasons? When Weizenbaum first returned to Germany in the 1950s, he found himself looking at the Germans who had lived through the Nazi era and "wondering, sometimes more, sometimes less: what did you do? Who were you? Did you remain silent? Or did you resist? Or did you participate enthusiastically?"
In addition to the "good Germans," Weizenbaum's professional connections led him to particularly reflect on the lives of scientists who had worked for the Nazi government. In Weizenbaum's estimation, many German scientists during the Nazi regime donned the halo of "good Germans," exhibiting a stance of "we are scientists; politics has nothing to do with us; the Führer decides." This was a convenient position for many German scientists to hide behind after the war, as figures like Wernher von Braun played important roles during the Cold War, ideologically justifying their past affiliations. The Nazism that swept across the continent and ended millions of lives has become something that seems to lack accountability — a dark chapter raised as a warning but never read. The paradox Weizenbaum observed in post-war Germany was that suddenly it seemed no one was responsible for what had happened — citizens claimed ignorance, while scientists behaved as if the uses to which their inventions would be put were none of their concern.
The phrase "never again" is often quoted regarding the brutal rule of the Nazis, but in Weizenbaum's work, this "never again" takes on another quality, as if to say: "we scientists can no longer evade responsibility." As Weizenbaum himself stated, "to take responsibility is a moral issue. Most importantly, it requires recognizing and accepting one's limitations and the limitations of one's tools. Unfortunately, the temptation to do the exact opposite is very great." From the computers crucial to launching the Vietnam War to the advancements in developing more destructive atomic weapons, for Weizenbaum, the work of scientists and technical experts is vital in supporting and enabling violence in the world. The establishment of these means of ending life relies on the tacit approval of science, and for this, Weizenbaum warns his colleagues to remember: "Without us, it cannot continue! Without us, the arms race, especially the atomic race, cannot continue."
Indeed, the scientific community finds it difficult to extricate itself from military ties, since so much research depends heavily on the generous funding of various agencies of the Department of Defense. Weizenbaum recognizes what this means for his own work and that of his colleagues: as a professor at MIT, he is acutely aware that "at MIT, we invented weapons and weapon systems for the Vietnam War... MIT is closely tied to the Pentagon." This challenge facing the scientific community extends far beyond the university where Weizenbaum teaches, and he states candidly that scientists must understand the purposes for which their projects will be used: "Today, we can know almost with certainty that every scientific and technological achievement will, wherever possible, be used for military systems." Even seemingly harmless programs may ultimately serve violent ends — for Weizenbaum, scientists must recognize this and confront its implications. Computers and science can serve humanitarian purposes, but that does not dispel concern about their inhumane impacts.
Weizenbaum is committed to never allowing himself to become like the German scientists who acted as if they bore no responsibility for the impact of their work — even as he observes that many of his colleagues are reluctant to heed his calls for strict accountability. Weizenbaum thus becomes a vocal advocate of scientists' obligations and an opponent of militarization, refusing to sit quietly in his office in times of injustice — realizing, in the process, that he has become a "kind of fig leaf" for MIT, though he remains committed to upholding moral principles. Weizenbaum is willing to call out the "compulsive programmers," the "technological intelligentsia," and the "artificial intelligentsia," and he is willing to challenge his scientific peers. While other prominent social critics deeply concerned about technology, such as Lewis Mumford and Hans Jonas, have written about the need for scientists to take responsibility, this criticism carries a particular vitality when voiced by someone like Weizenbaum from within the scientific community. When Weizenbaum urges scientists to take responsibility, he offers himself as an example of what that might look like.
It is this commitment that leads Weizenbaum into debates across various fields of science and technology, not limited to computers and weapon systems but extending to challenges to "artificial intelligence." Weizenbaum reacts strongly to the tendency within the AI community to proclaim that humans are machines; the central claim is that the whole person can be understood solely in scientific terms. For Weizenbaum, this line of thought reveals not only a dangerous tendency toward arrogance but also "a profound contempt for life." Marvin Minsky, also a professor at MIT, made a statement that particularly angered Weizenbaum: "the brain is merely a meat machine," a phrase Weizenbaum returns to repeatedly in his writings. "Meat machine" seemed to him a concise encapsulation of the views held by the artificial intelligentsia and a distillation of the technological metaphor. The notion that something as complex as human thought could be reduced to easily quantifiable packets of information struck Weizenbaum as absurd, even as he realized, to his dismay, that this view was prevalent among "the artificial intelligentsia, the AI community, and significant sectors of scientists, engineers, and ordinary people." Intelligence, whether human or artificial, is difficult to define and even harder to quantify — yet pronouncements about "artificial intelligence" often give the impression that a conclusive consensus on the meaning of intelligence has been reached. Through ELIZA, Weizenbaum observed at first hand the kinds of misunderstanding that can arise about computer intelligence, and, having witnessed the views advanced by prominent AI scientists such as Minsky, he recognized that this technologically optimistic, if somewhat cynical, "spirit of artificial intelligence permeates many currents in the field of computer practice."
Weizenbaum does not see himself as a critic of computers and technology but as a critic of society; his social criticism, however, focuses primarily on the impact of computers and technology on society. Thus Weizenbaum, together with other thinkers, "has for years expressed serious concern over the unrestrained development of science and technology." Although in his youth Weizenbaum enjoyed the thrill of early computers and played an important role in their development, his experience of working alongside the "technological intelligentsia" and of observing technology's impact on society tempered that early enthusiasm. At a conference held in Cambridge, Massachusetts, in 1979, Weizenbaum put it pointedly: "I believe our culture has a value system so weak, so rarely oriented toward the collective good, that it is catastrophically vulnerable in the face of technology." In such a context, a worldview that overvalues science and technology easily prevails, especially when it also offers a comforting explanation for the powerlessness so many people experience. The gap between the potential of technology and its realization is a persistent contradiction: "on the one hand, computers make it possible to live in a world of plenty for everyone, while on the other hand, we are using them to create a world of suffering and chaos." In his work, Weizenbaum frequently refers to the bargain with technology as a "Faustian bargain" — which still leaves the question of who is to be held accountable for signing the contract with Mephistopheles.
At the Cambridge conference, Weizenbaum focused in particular on how words like "we" and "us" are used in discussions of technology and science. A machine can arise from the decisions of a few individuals in a laboratory and go on to have enormous global effects, yet the costs are framed in terms of what "it will do for us" — and all this talk of "us" carries a strange sense of collective guilt. Weizenbaum has long criticized the "good Germans" who used ignorance as an excuse, but his criticism rests on the conviction that their professed ignorance was an illusion, a deliberate self-deception. When it comes to the technological transformations under way in corporations and universities, however, most people's ignorance of what is happening is not feigned. As a child, Weizenbaum witnessed the barbarity inherent in the early Nazi regime; the dangers posed by technology, by contrast, are not a matter of open debate or a vote. When Albert Hirschman responded at the conference to Weizenbaum's questioning of this "we," he remarked: "Every country has the technology it deserves." Weizenbaum countered: "I do not believe that nations deserve things any more than people do, especially things imposed upon them by others."
If Weizenbaum urges caution in identifying the "we" and "us," he is even clearer about whom he considers "the others": the members of the "technological intelligentsia." As the scientists behind the machines and the wielders of the technological metaphor, it is their decisions that are ultimately "imposed" on the broader public. The condition Weizenbaum insists upon is that these "others" accept responsibility: "scientists and technologists can no longer evade responsibility for what they do by appealing to the infinite power of society to transform itself in response to new realities and to heal the wounds they inflict on it. Certain limits have been reached. The transformations that new technologies may call for may not be achievable, and the failure to achieve them may mean the annihilation of all life. No one has the right to impose such a choice on humanity." Science and technology present themselves as a panacea while transforming the world in ways that render other remedies ineffective. And so Weizenbaum attempts to awaken the ethical imagination of his peers.
Weizenbaum seeks to encourage his colleagues to think about how technology and science might be used to ensure that "everyone has access to all the material wealth necessary for a dignified life," and to those who dismiss this goal as unrealistic he replies scornfully: "The impossible goals I mention here are possible, just as we are capable of destroying humanity."
When Joseph Weizenbaum passed away in 2008 at the age of 85, he had secured a place in the history of computer science, both as an important scientist and as a major critic of the role of computers in society. Throughout his life, Weizenbaum stood at the forefront of significant transformations in computing, from mechanical behemoths that required entire rooms to personal computers, and then to the early incarnations of smartphones. As he witnessed computers becoming smaller, more powerful, and increasingly linked to activities in daily life, his critiques remained steadfast. For him, the military origins of computers could not simply be forgotten as an inconvenient historical detail, and although he did not deny the impressive potential of computers, he remained aware that this potential often goes awry.
Weizenbaum was not a prolific writer, but his articles and his book "Computer Power and Human Reason" continue to have a significant impact on scholars writing about the influence of computers on society. In "The Closed World: Computers and the Politics of Discourse in Cold War America," Paul N. Edwards argues that "tools and metaphors are linked through discourse," drawing on Weizenbaum's treatment of technological metaphors to discuss "how tools and their use shape a component of human discourse and, through discourse, not only directly shape material reality but also shape the psychological models, concepts, and theories that guide this shaping." Weizenbaum's rather harsh description of "compulsive programmers" has also proven to be more than a stereotype that can be easily dismissed. Sherry Turkle argues that Weizenbaum helped bring the figure of the "hacker" into view, since "many first became aware of the existence of hackers with the publication of Joseph Weizenbaum's 'Computer Power and Human Reason' in 1976." Weizenbaum's portrayal of hackers and compulsive programmers has resonated over the years, even as these figures have migrated from the dark corners of university computer labs to the boardrooms of large corporations. Wendy Hui Kyong Chun writes: "Weizenbaum argues that programming creates a new mental disorder: compulsive programming." Although she pushes back against Weizenbaum's claim that compulsive programmers derive no pleasure from their pursuits, she does point to certain individuals, notably Richard Stallman, "who [fits] Weizenbaum's description of hackers." Clearly, Weizenbaum's encounters with compulsive programmers enabled him to identify something real, just as his engagement with those embracing the technological metaphor allowed him to anticipate its impact on discourse.
Certain aspects of Weizenbaum's thought have been questioned over the years, and some of his predictions have proven flatly wrong, such as his answer, in 1978, to the question of whether home computers would become as ubiquitous as television sets: "The answer is almost certainly no." In a world of smartphones, tablets, laptops, and smart televisions, all of them computers connected to the internet, that "no" has proven incorrect. Yet even where Weizenbaum's comments about computers have become dated, his arguments have lost none of their moral weight. In "Computer Power and Human Reason," Weizenbaum presents a list of critics who opposed "the unrestrained development of science and technology," and later writers have placed Weizenbaum himself in that company. David Golumbia critiques the rise of "computationalism" in "The Cultural Logic of Computation," defining it as "the commitment to the view that a large part, or even the totality, of human and social experience can be explained through computational processes," and noting that "Weizenbaum publicly opposed the view of computationalism and went on to write a compelling book about its problems." When Golumbia assembles his own list of "notable scholars" critical of "computationalism," much as Weizenbaum listed those challenging "the unrestrained development of science and technology," he includes Weizenbaum alongside many who also appear on Weizenbaum's own list.
Thus, this book provides an important overview of the life and thought of Joseph Weizenbaum. In this wide-ranging conversation, Weizenbaum candidly revisits the issues he engaged with throughout his life — from ELIZA to AI to the technological metaphor — and shows how his personal experiences shaped the positions he took. Most importantly, the book showcases Weizenbaum's enduring vitality as a social critic of technology's role in contemporary society. While other notable critics, such as Mumford, died before they could write anything about the internet, the interviews presented here make clear that Weizenbaum was unwilling to be taken in by the utopian veneer some attempted to drape over it. In the face of technological advancement, Weizenbaum did not lose his critical edge. As he wryly put it: "The internet is a big garbage dump — certainly there are some pearls in it, but you have to find them first." This book portrays Weizenbaum as self-aware and self-deprecating, yet still firmly committed to upholding moral principles; it is not a simple overview of Joseph Weizenbaum the computer scientist, but a tribute to Joseph Weizenbaum as a complex individual and an appreciation of the complexity of other human beings.
Those seeking a simple celebration of technology will not find it in the works of Joseph Weizenbaum. As a social critic, he was committed to voicing uncomfortable opinions, and this book proves that he remained so: "Computers, like televisions, are embedded in our crazy society. Everything is embedded in this society, and this society is clearly crazy." Yet this observation does not collapse into nihilistic cynicism or fatalistic despair; it rests instead on the belief that people can overcome their powerlessness and reclaim responsibility for their lives. For Weizenbaum, people must become "islands of reason" — even if doing so leaves them isolated and lonely, such islands may still attract like-minded individuals willing to acknowledge that "perhaps we are now addicted to modern science and technology and need to undergo detoxification." Importantly, this is not a call to retreat from the world but a call to engage more deeply with its moral dilemmas. People must come to recognize the madness of the world around them, and when that recognition arises, "we should speak out, we should share what we have come to realize with others."
And this is precisely the goal Joseph Weizenbaum sought to achieve. This book is a map to discover the island of reason — and that island of reason is Joseph Weizenbaum.