Guest Blogger: Micah Clark

by Michael Mullaney on July 17, 2009

(Doctoral candidate Micah Clark wrote this excellent post for The Approach - Enjoy!)

I’m Micah Clark, a Cognitive Science Ph.D. candidate working in the Rensselaer AI & Reasoning (RAIR) Lab under Professor Selmer Bringsjord. For the last several years, I’ve been building a persuasive lying machine—yes, that’s right, a machine that intentionally, and successfully, deceives humans.

Despite fictional forerunners such as the infamous HAL 9000 from 2001: A Space Odyssey, actually building a persuasive lying machine is not as easy as it sounds; it is an inherently interdisciplinary endeavor that draws upon philosophy, cognitive science, and computer science. To explain briefly:

  • In order to claim rightfully that someone or something is a liar, one must have a clear understanding of the act. That is to say, one must have at hand a definition of lying; otherwise, one cannot distinguish between, for example, accidental misstatements, lies, and merely thoughtless speech. Contemplating and formulating definitions of lying is the domain of philosophy.
  • In order to claim rightfully that a liar is persuasive, one must have a theory for, and evidence of, the liar’s reliable success at deception. The theoretical and empirical study of human cognition, including the effects of stimuli such as deceptive language on perception and belief, is the domain of cognitive science.
  • Finally, in order to claim rightfully that a liar is artificial, one must make the act mechanical. The mechanical manipulation and exchange of symbols and sentences—i.e., all types of computation and information processing—is the domain of computer science.

Cognitive Illusions in Reasoning and Decision Making

My lying machine is predicated, in large part, on cognitive science’s ability to predict biases and illusions in human reasoning and decision-making. You may be surprised to learn that we humans are—often unknowingly—imperfect reasoners and decision-makers. Now, I’m not referring to the fact that our knowledge of the world is unavoidably incomplete and imperfect; instead, I’m referring to a large body of empirical evidence demonstrating that humans are susceptible to a host of systematic flaws when reasoning and making decisions. To illustrate, try answering the following two questions quickly, without pencil and paper:

  • Question 1: Which is larger, (A) 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 or (B) 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1?
  • Question 2: A penny is flipped seven times. Which sequence of heads (H) and tails (T) is more probable, (A) H T T H T H H or (B) H H H H H H H?

Did you answer ‘B’ in the first question and ‘A’ in the second? Most humans do—and most humans are wrong. (Were you?) In the first question, ‘A’ and ‘B’ work out to the same value, namely 40,320. In the second question, ‘A’ and ‘B’ have the same probability, 0.0078125, because all outcome sequences of the same length have the same probability (of course, this is assuming that the penny is equally weighted—that the penny is “fair”). When humans expend some effort on answering these questions, they get them right. In contrast, when answering cursorily, humans succumb to an “anchoring” bias in the first question, and confuse what is “typical” with what is “probable” in the second.
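
If you want to check the arithmetic for yourself, here is a minimal Python sketch (my own illustration, not from the original post) that reproduces both numbers:

    from math import prod

    # Question 1: the two products contain the same factors, just written
    # in opposite orders, so they are equal.
    ascending = prod(range(1, 9))       # 1*2*3*4*5*6*7*8
    descending = prod(range(8, 0, -1))  # 8*7*6*5*4*3*2*1
    assert ascending == descending == 40320

    # Question 2: any specific sequence of 7 fair flips has probability
    # (1/2)^7, no matter how "patterned" or "random" it looks.
    print(0.5 ** 7)  # 0.0078125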

Flaws in reasoning and decision-making are not limited to mental arithmetic and probability assessments. Systematic flaws are also observed in psychological studies of simple, ostensibly deductive reasoning tasks. The “granddaddy” of these experimental reasoning tasks is the Wason Selection Task. Briefly, below are four cards. Each card has a letter on one side and a number on the other:

  E   K   4   7

I assert to you: “If a card has a vowel on one side then it has an even number on the other.” Which card, or cards, should you flip over in order to determine if I’m telling you the truth?

Did you pick the ‘E’? Good. Most people do. Now, did you also pick the ‘7’? If so, then you are in the very small minority of the general population who answer correctly. To tell if my assertion is true, both ‘E’ and ‘7’ need to be flipped over. The ‘E’ needs to be flipped because if there isn’t an even number on the other side, my assertion is overthrown. The ‘7’ needs to be flipped because if there is a vowel on the other side, once again my assertion is overthrown. No pertinent information is gained by flipping either of the other cards (this matters when, say, flipping a card costs you a dollar). Interestingly, when the cards show alcoholic and soft drinks on one side and ages on the other, subjects have no difficulty in choosing the right cards to verify: “If someone is having an alcoholic drink then they must be twenty-one years of age or older.” This shows that biases are affected by content and context. Subjects who perform the selection task correctly regardless of these factors are thought to possess a greater ability for context-independent reasoning.
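
To make the card logic concrete, here is a small Python sketch of my own, assuming the classic E, K, 4, 7 card set shown above; it asks, for each visible face, whether any possible hidden side could falsify the rule:

    # A card is worth flipping iff some hidden side could break the rule
    # "if a card has a vowel on one side, it has an even number on the other".
    VOWELS = set("AEIOU")

    def worth_flipping(visible: str) -> bool:
        if visible.isalpha():
            # Hidden side is a number; the rule can only break if this
            # letter is a vowel and the hidden number turns out to be odd.
            return visible in VOWELS
        # Hidden side is a letter; the rule can only break if this number
        # is odd and the hidden letter turns out to be a vowel.
        return int(visible) % 2 == 1

    for card in ["E", "K", "4", "7"]:
        print(card, "-> flip" if worth_flipping(card) else "-> leave")
    # E -> flip, K -> leave, 4 -> leave, 7 -> flip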

Systematic flaws, or biases, in reasoning and decision-making are collectively called cognitive illusions because, like their optical namesakes, when we fall prey to a cognitive illusion we “see” what’s not really there—we perceive impossibilities as being possible, and vice versa. A famous case of this is Princeton psychologist Philip N. Johnson-Laird’s “King-Ace” illusion, given below. Suppose that one of the following two assertions about a particular hand of cards is true and the other is false:

  • If there is a king in the hand, then there is an ace in the hand.
  • If there is not a king in the hand, then there is an ace in the hand.

Could there be an ace in the hand? Is there an ace in the hand?

Do you have your answer? I’ll bet it’s “yes” to one, if not both, of the questions. Yet a deductively rational being would conclude that it is definitely impossible for there to be an ace in the hand. What is the reason? Well, in short, a deductively rational being would recognize that a conditional such as “if P then Q” (or “if there is a king in the hand then there is also an ace”) is false only when its antecedent, P, is true and its consequent, Q, is false. Since one of the two conditionals regarding kings and aces must be false, and both conditionals have the same consequent, namely that “there is an ace,” it must be the case that “there is an ace” is false. That is to say, it must be the case that “there is not an ace.” Though our everyday intuition leads us to conclude that there is, or may be, an ace in the hand, we erroneously perceive what is impossible as being possible. Note that RPI Associate Professor Yingrui Yang has shown that illusions such as the “King-Ace” persist across languages and cultures.
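
One way to see the force of this argument is to enumerate every possibility. The following Python sketch (my own illustration) checks all four truth assignments for “king in hand” and “ace in hand” under the premise that exactly one of the two conditionals is true:

    from itertools import product

    def implies(p: bool, q: bool) -> bool:
        # "if P then Q" is false only when P is true and Q is false.
        return (not p) or q

    # Premise: exactly one of the two conditionals is true:
    #   (1) if king then ace    (2) if not king then ace
    for king, ace in product([True, False], repeat=2):
        c1 = implies(king, ace)
        c2 = implies(not king, ace)
        if c1 + c2 == 1:
            print(f"consistent: king={king}, ace={ace}")
    # Both consistent assignments have ace=False: an ace is impossible.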

From Cognitive Illusions to Sophistic Lies

My approach to building a lying machine is quite simple: I exploit the empirical fact that humans are, unknowingly, imperfect reasoners who predictably succumb to various biases and illusions when reasoning. Specifically, I exploit human fallibility by injecting cognitive illusions into arguments. To accomplish this I’ve formalized and mechanized a psychological theory that accurately predicts cognitive illusions, and extended the theory to the evaluation and generation of reliably deceptive arguments (i.e., sophisms), which I call illusory arguments. Illusory arguments incorporate a number of systematic biases in human reasoning, the effect of which is that illusory arguments are able to reach false and fallacious conclusions while retaining credibility by reinforcing natural human intuitions.

The lying machine uses the mechanized theory of human reasoning as part of its theory of mind, its ascriptive theory about the mental contents and operations of its audience. From this ascriptive theory, the machine produces and articulates credible arguments intended to persuade and deceive its audience. Thus, the machine’s modus operandi is this: model the audience’s beliefs, anticipate the effect of argumentation on those beliefs, and manipulate belief by articulating deceptive arguments that mirror the audience’s own flawed reasoning.
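
As a rough structural sketch only, that loop might look like the following Python; every name here (AudienceModel, Argument, choose_argument) is a hypothetical placeholder of mine, not the RAIR Lab’s actual design:

    # Hypothetical sketch of the model-anticipate-manipulate loop described
    # above; the classes and functions are illustrative placeholders, not
    # the lying machine's real interfaces.
    class Argument:
        def __init__(self, premises: set, conclusion: str):
            self.premises = premises
            self.conclusion = conclusion

    class AudienceModel:
        """Ascriptive theory of mind: the audience's beliefs plus a (biased)
        model of how the audience revises them."""
        def __init__(self, beliefs: set):
            self.beliefs = set(beliefs)

        def predict_after(self, argument: Argument) -> set:
            # A real system would apply a psychological theory of human
            # reasoning here; this stub simply adds the argument's conclusion.
            return self.beliefs | {argument.conclusion}

    def choose_argument(goal: str, audience: AudienceModel, candidates: list):
        # Pick an argument whose predicted effect on the audience's
        # beliefs achieves the persuasion goal.
        for arg in candidates:
            if goal in audience.predict_after(arg):
                return arg
        return None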

It should be noted that the lying machine does more than “tell lies.” In practice, the machine argues in service of persuasion goals, goals of the form “persuade the audience of X,” and its arguments are shaped by the particular X at hand, its beliefs about X, its beliefs about its audience’s beliefs, and so forth. As a result, the machine does not always need to lie in order to accomplish its goal. In fact, it can produce any one of the following argument types:

  • a veracious argument for a true proposition emanating from shared beliefs;
  • a valid argument for a false proposition emanating from one or more of the audience’s erroneous beliefs;
  • a fallacious argument for a true proposition—an expedient fiction for conveying a truth;
  • a fallacious argument for a false proposition—unquestionably a sophistic lie.
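
Crossing two booleans, whether the argument is fallacious and whether its conclusion is false, reproduces this four-way taxonomy; the following toy Python classifier (my own illustration, not from the original post) makes the cross explicit:

    def classify(fallacious: bool, conclusion_false: bool) -> str:
        # The four argument types above, as a cross of two booleans.
        if not fallacious and not conclusion_false:
            return "veracious argument for a true proposition"
        if not fallacious and conclusion_false:
            return "valid argument for a false proposition"
        if fallacious and not conclusion_false:
            return "fallacious argument for a true proposition (expedient fiction)"
        return "fallacious argument for a false proposition (a sophistic lie)"

    print(classify(fallacious=True, conclusion_false=True))
    # -> fallacious argument for a false proposition (a sophistic lie)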

The lying machine continues a recent trend in AI systems involving argumentation, namely a trend toward audience-centric systems that take subjective aspects of argumentation seriously. These audience-centric argumentation systems are heavily influenced by neorhetorical theories and by linguistic theories of affective and emotive language. Common among these influences is the view that the audience’s receptivity is a determining factor in whether an argument is persuasive; therefore, effective argumentation ought to take into account the audience’s dispositions.

Now, it seems that audience-centric argumentation systems would naturally build on cognitive science. (After all, if effective argumentation takes into account an audience’s dispositions, an accurate understanding of the audience is paramount.) Yet surprisingly, cognitive science has had little impact on AI’s exploration of persuasive argumentation. Insofar as AI endeavors toward systems that produce rhetorical arguments aimed at persuading human audiences, AI must, at some point, employ models of how humans reason and process discourse (especially arguments); and to gauge success in the endeavor, AI must quantify a system’s persuasiveness on human subjects. These necessities are the domain of cognitive science, specifically computational cognitive modeling, which focuses on empirically valid models of human cognition.

My approach to building a lying machine reflects the view that theories of persuasive argumentation ought to be cognitively based and empirically grounded. In particular, my approach is founded on the conjecture that innate human fallibility can be mechanically manipulated, and with respect to arguments, manipulated to mask fallacies in such a way that illogical arguments appear convincing. My hypothesis is that illusory arguments invite erroneous belief when foisted on unsuspecting humans.

Why Build a Lying Machine?

Admittedly, mechanizing effective means for lying and deceiving is controversial, so you are probably wondering why I would pursue building a lying machine. My motivation is part philosophic, part scientific, and part pragmatic.

Philosophic Motivation

For good or ill, it is often in our best interest to be persuasive rather than truthful. This is the case in commerce, in the courts, in politics, and in many other spheres of human interaction. Of course, truth and persuasion are not opposing forces, but the two are often in conflict. When they are, which should argumentation privilege: truth, or a speaker’s ambition? This question has been a subject of debate since the inception of the rival disciplines of philosophy and rhetoric.

The schism between philosophy and rhetoric originates with the life and dialogues of Plato, who set himself against the Sophists, itinerant orators and teachers who allegedly promoted expedient eloquence involving subtle trickery. Due in part to Plato’s enormous impact on the evolution of Western thought, history portrays the Sophists as hucksters and charlatans, purveyors of the semblance of wisdom and not the genuine article. They are accused of abandoning the doctrinal ideal of ‘truth’ to promulgate instead the virtue of ‘cleverness’ without regard for morality. While the ideas of Plato and his intellectual progeny have long overshadowed those of the Sophists, during the last century and a half there has been a revival of sophistic ideas. In the opinion of many contemporary historians, rhetoricians, and compositionists, Western rhetoric has followed too closely at Plato’s heels, stunting the development of non-philosophic, sophistic theories of rhetoric and overlooking their practical possibilities.

The lying machine is a nascent attempt to realize one of the practical possibilities of sophistic rhetoric, namely the possibility of empirically reliable sophistic deception. However, it is not the machine’s purpose to venerate the Sophists, nor is it to defend their doctrine. The machine simply practices the Sophists’ opprobrious tradition of effecting “belief without knowledge” and “conviction that is persuasive but not instructive” (Plato, Gorgias, 454e–455a).

Scientific Motivation

The purpose of my machine is to stimulate the psychological study of reasoning within argumentation, and in turn, to provide a footing for a broader investigation into the interplay between human reasoning and effectual, rational persuasion—the type of persuasion that is a compelling appeal to another’s intuitive understanding of evidence and inference. The exercise of this kind of persuasion is prevalent in, for example, public health, economic, and foreign policy debates, scientific debates, and civil and criminal law. The focus on sophistic lies and deceptive arguments reflects my position that understanding the human capacity for, and vulnerability to, deception is critical in understanding the human mind.

Lying and other forms of premeditated deception are distinctive human abilities. If not innate, lying and deception are at least organic, emergent phenomena in human social interaction, and the natural development of dishonesty during childhood is already a rich area of psychological study. Some anthropologists and psycholinguists concerned with the common origins of language and cognition go so far as to credit deception and counterdeception under evolutionary pressures with fueling a cognitive and linguistic arms race that ultimately gave rise to the human mind—the sophisticated mental and linguistic capabilities that cognitive science seeks to understand and human-level AI seeks to duplicate. This co-evolution hypothesis is harmonious with other well-known hypotheses in philosophy, cognitive science, and computer science that link mind and language. Though these other hypotheses are not rehearsed here, collectively the many links between language, cognition, and deception suggest that lying—which is the intersection of language, cognition, and deception—is a lens for examining the origin and operations of the human mind.

Pragmatic Motivation

There are a number of practical applications whose viability hinges on the mechanization of lying and deception. Here, let me just mention two example categories. The first comprises applications where machines help protect humans from each other. Holding the absolutist view that lying is never ethically permissible—and thus should not be mechanized—does not provide an escape from the fact that, in the human case, lies and liars are common. It must be admitted that even the honest and virtuous need to be on constant guard for lies told by others. Investigations into lying, and machines that grasp the act, will improve our ability to unmask lies and lesser deceptions. Nefarious plots (e.g., fraud, pyramid schemes, espionage, guerrilla tactics, and terrorism) depend on successful deception—and fail without it. Machines can play a significant role, for example, in guarding free-market consumers, private citizens, and sovereign states against such plots, but only if machines are able to comprehend lying and deception, and to anticipate the means of accomplishment—hence, a practical need to mechanize mendacity. (Note that automated lie detection and counterdeception is the focus of a new RAIR Lab system called Reveal.)

The second category is “smart entertainment.” Can you imagine how tedious games, movies, and stories would be if antagonists lacked the ability to lie, cheat, and steal? As entertainment continues to move toward dynamic, interactive environments, there is a demand for robust synthetic characters—characters driven by internal goals and motivations who possess a mental life. Lying and deceiving are skills that every villain (and many a hero) needs. Of course, deception is just one aspect of the social cognition that rich synthetic characters need to master; they also need to grasp creativity, empathy, ethics, irony, and so on. Constructing synthetic characters endowed with such traits is something of a passion in the RAIR Lab, where we’ve developed characters such as Brutus, Peri, Eddie, E, and Arnie, who embody mechanized theories of socio-cognitive and philosophical concepts like creativity, lying, bluffing, betrayal, evil, and, soon, courage. Demonstrations and information about these characters are available on the synthetic characters project website.


For more information about the lying machine, check out this short overview paper.