Hard Feelings — Inside Out, Silicon Valley, and Why Technologizing Emotion and Memory Is a Dangerous Idea




“MEET THE LITTLE VOICES inside your head,” suggests one of the taglines for Pixar’s latest hit, Inside Out. From the perspective of five discrete emotions, the film follows the struggles of Riley, a preteen girl coping with her family’s cross-country move to San Francisco. A critical as well as a commercial success, Inside Out has prompted plenty of commentary since its release in June. Witty and thoughtful, the film has the added virtue of being stimulating for parents and children alike.

At the risk of sounding like killjoys, however, we think Inside Out represents a potentially dangerous line of thinking. While great for selling movie tickets, the film’s depiction of emotions, memory, and the brain in general provides a poor model for teaching people of all ages about how their minds work. More alarmingly, the film reveals just how deeply digital technologies are reshaping — and not for the better, we believe — how we understand our mental and emotional lives.

In other words, Inside Out, like all science fiction, tells us more about our cultural attitudes than about the actual world it depicts — in this case, the world inside a child’s head. Models of the body and mind perforce reflect the concepts and technologies available to any given society. Aristotle compared axes and humans in order to argue that neither can be understood as the sum of their physical parts. Inspired by the Greeks, Descartes famously described the human body as a machine. Kant made the opposite move; he contrasted humans with watches to argue that living organisms are more than simple machines. And until after World War II, computers were literally human — a “computer” referred to someone (usually a woman) tasked with the tedious work of performing complex mathematical calculations.

Inside Out’s personification of five basic emotions — Joy, Sadness, Fear, Anger, and Disgust — presents our 21st-century audience with a model of the mind that is both technical and managerial, something of a cross between a film set and a NASA command center. In the movie, each human is controlled from a “headquarters,” where the five emotions mentioned above navigate a retro control panel imported from a Star Trek movie or a 1950s corporate boardroom. Memories — depicted as pretty, spherical touchscreen video devices — are managed through a file system, each one recorded, sorted, and stored for recall. Personality traits are depicted as self-contained theme parks, while dreams are produced in a movie studio. “Play to the camera: Riley is the camera,” a character exhorts. The mental and emotional life of Riley comes off as a mix of high-tech orchestration and joint-stock company board meeting — her five emotions fight for control and eventually learn to work harmoniously together to manage their charge.

Representing the human child Riley as a sophisticated technological automaton managed by a group of “experts” (in this case, “emotions”) is a telling metaphor. It’s a metaphor with legs, obviously, and disturbing for that reason: it suggests that people can, and perhaps ought to, be managed through the proper application of technical know-how. The film’s San Francisco setting and its depiction of emotions as middle managers of the psyche (and stereotypically gendered ones at that) can’t help but suggest Silicon Valley, where human memory and emotions are, like so much else, approached as problems to be managed through the potentially emancipatory, and definitely commercializable, power of digital technology.

Take, for example, the film’s depiction of emotion. Riley’s emotional landscape revolves around just five core emotions, reflecting the allegedly “basic” emotions described in a 1972 paper by Paul Ekman, perhaps the most famous researcher on the psychology of emotion. A consultant on Inside Out, Ekman traveled around the world in a quest to show that certain emotions are universal and easily identifiable by a trained specialist. He created the Facial Action Coding System (FACS) to code the ways in which people’s faces change when they express various feelings. More recently, Ekman has commercialized his work on FACS, offering his services to marketers, law enforcement agencies, and other organizations eager to pinpoint how someone is feeling (happy and in a buying mood, anxious and about to bomb an airplane) from the expression on their face. Ekman’s work was also the basis for the television drama Lie to Me.

But here’s part of the problem: both Ekman’s basic emotions and the Facial Action Coding System have been challenged over and over since their inception. Several scholars have had trouble replicating Ekman’s conclusion that there are only five or six basic emotions; others have cast doubt on the universality of emotion in general, suggesting, rightly we believe, that our emotions stem from a more complicated combination of common embodied affects and culturally specific cues and interactions. As psychology professor Lisa Feldman Barrett, one of the most prominent critics of FACS, notes, “You can certainly be expert at ‘reading’ other people. But you — and Apple, and the Transportation Security Administration — should know that the face isn’t telling the whole story.”

And that’s where the movie’s depiction of emotions, media technologies, and corporate control has real-world implications. Silicon Valley is all too eager to collect data about our feelings. Paul Ekman’s work on identifying basic emotions is alive and well in facial recognition algorithms that claim to categorize emotional facial expressions presented on-screen, whether a still photo on Facebook or your mug on a security camera — or worse, your webcam as you’re using it for some other purpose. Affectiva, a company founded by two computer scientists who pioneered the application of emotion detection to facial recognition, has made a splash recently with claims that it can now accurately “read” the emotions of an individual in any digital image. The tiny personified emotions inside the heads of Inside Out’s characters do just that, “reading” the faces of others. By depicting this face-based visual activity as the chief way we understand each other’s feelings, Inside Out reinforces and legitimates the belief that emotional face recognition works and is worth investing in, despite its shaky theoretical underpinnings and the fact that it divests our emotional lives of nuance and ambiguity, neglects other sensory input, and most importantly, ignores social context.

Another example of how Inside Out says more about our contemporary, digitally obsessed world than it does about actual minds is the film’s use of anxiety as a primary plot driver and a “problem” to be solved. Curiously, the movie lacks a conventional or explicit villain — there’s no Syndrome (The Incredibles) or Charles Muntz (Up) for the “Joy” emotion to overcome. Instead, Joy and her four compatriots are pressured into action by Riley’s jarring transition from Minnesota to San Francisco. In the face of this transition, the emotions have to expertly manage a complex, computer-like system of memories and emotional responses. Fear, Disgust, and Anger have to cope with “anxiety” because Joy’s leadership and expertise are in abeyance. Sadness, too, experiences anxiety: the anxiety of having to stay out of Joy’s way, and of being contained (sometimes literally) within sharply drawn boundaries. Joy, of course, is driven almost exclusively by her anxiety over keeping Riley safe and happy.

No doubt, this sense of anxiety resonates with parents who bring their kids to see the film — they, too, are stressed by the pressures to keep their children safe, nourished, monitored, college-bound, and happy in an economically, technologically, and climatically uncertain world. It’s why, according to the Plan C Collective, “we are all very anxious,” and why so-called “helicopter parents” often turn to surveilling their own children through Facebook and other social media. Yet by depicting a managerial model of the mind, Inside Out is also evoking the particular kind of anxiety that comes from living in a surveillance society marked by highly sophisticated information technologies. Importantly, however, this concern isn’t the anxiety of the surveilled; it isn’t the sort of anxiety that a girl like Riley (whose body and identity are increasingly captured in datasets, quantified, and exploited) will most likely confront throughout her life. Rather, the film gives us a unique and unsettling look into the anxiety of today’s surveillors. The five emotions — and Joy, in particular — stand in for any number of companies or organizations scrambling to keep up with the competitive demands of collecting massive quantities of personal information and of ever more strategically deploying the tools required for data capture, storage, and analysis.

Perhaps the clearest manifestation of this anxiety comes in the image of the dark, ominous “memory pit” situated smack-dab in the middle of Riley’s mind. At times standing in for a more conventional villain, the memory pit is the bad place where bad things happen — it’s Inside Out’s version of Toy Story 3’s incinerator or Finding Nemo’s continental shelf. Joy knows it as the place where memories are dumped to be forgotten. For her and, by extension, for viewers inhabiting her point of view, there is no worse fate for a memory orb than being forgotten, disintegrating into nothingness, and thus no longer being available for recall. The plot device works because memory is, for Joy and the other emotions, a matter of mechanized information retrieval from a database — in other words: call up the right file (or “orb”), double-click, and voilà, there’s your memory, perfectly and discretely recalled.

But the film’s anxiety around forgetting is less about actual human memory and more representative of surveillors’ anxiety about data storage, retrieval, and analysis. After all, forgetting is a necessary and healthy part of our cognitive functioning; up to a point, it helps our minds focus on the most important pieces of information, and it makes the process of mentally recalling that information speedier and more efficient. By contrast, losing data is anathema to today’s technology companies and data scientists, among whom the “gimme all the data!” mantra reigns supreme. For Joy, losing even one data point (a.k.a. “memory orb”) is terrifying. Even though she hasn’t recalled or made use of that particular memory in ages, she is overwhelmed by the need to preserve it — just in case it becomes useful in the future. Companies like Google and Facebook share such concerns: they build and maintain massive server farms (with their accompanying environmental footprints) to store and make accessible the multiple zettabytes of data produced online.

Inside Out’s depiction of memory is particularly dangerous in the way it works to blur the line between the services that control data about us and ourselves. “While the reality and redundancy of digital information,” write lawyer and researcher Julia Powles and professor of philosophy and ethics Luciano Floridi, “may make it impractical ever to forget completely, we should study ways in which information can be made easier or harder to find, more visible or opaque and, as a result, more useful and less damaging, when required.” Yet portraying the process of forgetting as banishment to an ominous “memory pit” does just the opposite: it reaffirms a Silicon Valley data-driven, totalitarian-tinged worldview underwritten by massive stores of personal data available for perfect and immediate recall.

Most recently, this tension — between the necessarily important role of forgetting for individuals and the desire of institutions to collect, store, and make accessible as much data as possible — has manifested itself in debates around the so-called “right to be forgotten.” In brief, a “right to be forgotten” enables individuals to disassociate from, or not be explicitly connected to, information about themselves that, while true, is no longer relevant to their lives. For some, this may mean that long-settled financial difficulties no longer show up in Google search results of their names; for others, it may mean Google no longer shows results for a specific Wikipedia page. In a particularly Orwellian twist, it may even entail Google removing links to stories that discuss “right to be forgotten” removals themselves. While the film does depict forgetting as integral to how the mind actually works, it portrays forgetting as lamentable, as we’ve suggested above; the poignant forgetting of Bing Bong, Riley’s early childhood imaginary friend, at the film’s climax is a case in point.

From this point of view, the anxieties of the surveillors — be they tech companies or Riley’s emotions — begin to look a lot like the anxieties that have driven government agencies to develop widespread and shockingly pervasive systems of surveillance, from the Stasi in East Germany to the National Security Agency in the United States. And, as big data scholar Kate Crawford notes in her discussion of the disparate anxieties of the surveilled and the surveillors, this “epistemic big-data ambition — to collect it all — is both never-ending and deeply flawed,” driven by the myth that more data automatically translates into greater truth.

As the above debates suggest, plenty of scholarly ideas and theories about emotions, privacy, memory, and surveillance are currently at play. Inside Out’s producers insist they worked hard to get the science right, but in choosing to emphasize a few distinct, personified emotions and a surveillance-friendly, techno-managerial image of the mind, they risk flattening our understanding of our own diverse range of feeling and thinking. It doesn’t matter that a film like Inside Out is obviously fiction. The fact is that children and parents watching it will internalize some of the film’s ideas even if they recognize that it is make-believe (just as we found ourselves initially doing after our first screening).

Along with many of our colleagues, we argue that we’re best served by understanding ourselves as complex, reflective people who can knit together feeling, remembering, and thinking in more nuanced ways — and that we should create technologies that help us with these goals, not data-gathering gadgets that apply surveillance as a managerial, even societal constraint. Perhaps we should extend this advice to Hollywood, too. By making a film that personifies our feelings as computer-supported, memory-hungry technocrats, however cute and lovable, Pixar has made a “major emotion picture” that leaves us feeling a little fearful — and not joyful at all.

¤

Anna Lauren Hoffmann studies information, culture, and ethics at the University of California, Berkeley.

Luke Stark researches emotion, privacy, and digital media at New York University.


