Users generally trust computer interfaces to accurately reflect system state. Reflecting that state dishonestly, through deception, is viewed negatively by users, rejected by designers, and largely ignored in HCI research. Many believe outright deception should not exist in good design. For example, many design guidelines assert: “Do not lie to your users” (e.g., [40, 45]). Misleading interfaces are usually attributed to bugs or poor design. In reality, however, deceit often occurs both in practice and in research. We contend that deception often helps rather than harms the user, a form we term benevolent deception. However, the overloading of “deception” as entirely negative, coupled with the lack of research on the topic, makes the application of deception as a design pattern problematic and ad hoc.
Benevolent deception is ubiquitous in real-world system designs, although it is rarely described in such terms. One example of benevolent deception can be seen in a robotic physical therapy system that helps people regain movement following a stroke [8]. Here, the robot therapist provides stroke patients with visual feedback on the amount of force they exert. Patients often have self-imposed limits, believing, for example, that they can only exert a certain amount of force. The system helps patients overcome their perceived limits by underreporting the amount of force the patient actually exerts, encouraging additional force.
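This feedback-scaling mechanism is simple to state in code. The sketch below is a minimal illustration of the idea only, not the implementation described in [8]; the function names, the sensor and UI calls in the usage comment, and the 0.8 scaling factor are all hypothetical.

```python
def displayed_force(actual_force_newtons: float, scale: float = 0.8) -> float:
    """Underreport the patient's exerted force by a constant factor.

    If a patient believes they can exert at most F newtons, showing
    them scale * F encourages them to push harder to reach their
    perceived ceiling, gradually extending their actual limit.
    (The 0.8 factor is a hypothetical choice for illustration.)
    """
    return actual_force_newtons * scale


# Hypothetical usage inside the therapy feedback loop:
# reading = force_sensor.read()            # e.g., 50 N actually exerted
# ui.show_force(displayed_force(reading))  # patient sees 40 N
```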
The line between malevolent and benevolent deception is fuzzy when the beneficiary of the deception is ambiguous. For example, take the case of deception in phone systems to mask disruptive failure modes: The connection of two individuals over a phone line is managed by an enormous specialized piece of hardware known as an Electronic Switching System (ESS). The first such system, the 1ESS, was designed to provide reliable phone communication, but given the restrictions of early 1960s hardware, it sometimes had unavoidable, though rare, failures. Although the 1ESS knew when it failed, it was designed to connect the caller to a wrong number rather than signal an error; a caller who reached a wrong number would assume they had misdialed and simply redial, preserving the impression of a reliable system.
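To make this failure-masking pattern concrete, the sketch below restates it in modern software terms. The 1ESS was, of course, implemented in switching hardware rather than code like this, and every name here (route_call, connect, SwitchFault) is hypothetical.

```python
import random


class SwitchFault(Exception):
    """Hypothetical internal switching failure."""


def connect(number: str) -> str:
    # Stub: a real switch would establish the circuit here,
    # and could raise SwitchFault on a hardware error.
    return f"connected:{number}"


def random_valid_number() -> str:
    # Stub: pick some reachable number other than the one dialed.
    return f"555-{random.randint(1000, 9999)}"


def route_call(dialed_number: str) -> str:
    """Connect a call, masking rare internal failures as misdials.

    Rather than exposing a systemic fault (e.g., with an error tone),
    the switch connects the caller to a wrong number. The caller
    assumes they misdialed and redials, preserving their trust in
    the system's reliability.
    """
    try:
        return connect(dialed_number)
    except SwitchFault:
        return connect(random_valid_number())  # the deceptive fallback
```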
A further example of benevolent deception is the “placebo buttons” that allow users to feel as though they have control over their environment when they actually do not. Crosswalk buttons, elevator buttons, and thermostats [33, 47] often provide no functionality beyond making their users feel as though they can affect their environment. Some of these buttons go far to provide the illusion of control; non-working thermostat buttons, for example, are sometimes designed to hiss when pressed [2]. In addition to providing the feeling of control, placebo buttons can signal the existence of a feature to the user. Non-working crosswalk buttons, for example, clearly convey to a pedestrian that a crosswalk exists.
As is the case with the 1ESS and placebo buttons, deception sometimes benefits the system designer, service provider, or business owner. However, this does not invalidate the fact that it might also help meet user needs. We believe that by not acknowledging that deception exists, and, more critically, that a line between beneficial and harmful deceptions might exist, research in the area is difficult to pursue, to the detriment of academics and practitioners alike.
Whether intentional or not, implicit or explicit, acknowledged or not, benevolent deceit exists in HCI. Nonetheless, little is known about the motivation, mechanisms, detectability, effectiveness, successes, failures, and ethics of this type of deception. Researchers have tiptoed around this taboo topic, concentrating instead on malevolent deception (e.g., malware or malicious software [14, 17]) and unobjectionable forms of deception described using entertainment metaphors (e.g., magic or theater [32, 54]). This limited view of deception does not capture its variety or ubiquity.
As we will see, one of the underlying reasons for the ubiquity of deception is that it can fill many of the gaps and tensions that emerge among different design concerns (e.g., the good of the individual versus the good of the group), design goals (e.g., conflicting principles), or system states (e.g., desired system performance versus actual system performance). In any situation where a poor fit exists between desire (e.g., the mental model or user expectations) and reality (e.g., the system itself), there is an opportunity to employ deception. This gap, which is extremely common, both motivates and enables deception.