Bot Like Me: Illah Reza Nourbakhsh’s “Robot Futures”

By Rob Horning | April 14, 2013

Robot Futures by Illah Reza Nourbakhsh

In 1920, the Czech science fiction writer Karel Čapek introduced the world to the word “robot.” In his play R.U.R., robots are not exactly the gleaming machines they would become in later science fiction; they are, rather, subhuman creatures composed of organic matter, constructed to serve humanity. Naturally, they turn on their human masters, as robots have generally yearned to do in cultural representations ever since, whether you are talking about the Terminator, HAL, the Cylons of Battlestar Galactica, Blade Runner’s androids, Star Trek’s Borg, or even poor depressive Marvin the Paranoid Android in Douglas Adams’s Hitchhiker’s Guide series. It’s supposed to be obvious why robots would want to rebel: who wouldn’t want sweet autonomy? The yearning to be free, these stories teach us, is so irrepressible that even machines will begin to express it. Despite the dystopic trappings of many of these fictions, this is, at bottom, a very reassuring message, especially when we ourselves experience it in the course of passively consuming entertainment, maybe seeking distraction from our own unfreedom. Whether we can empathize with such rebel robots or not, once we recognize their essentially human motives we are probably more comfortable with the idea of their trying to kill us than being our contented servants.


Robots in popular culture are there, in other words, to remind us of the awesomeness of unbounded human agency. But actual robots in society are just there to work. Yet when we call them by that Czech loan-word “robot,” the alluring fantasy seeps back in, regardless of whether we are celebrating them for stamping out drudgery and raising productivity or castigating them for robbing us of our jobs and privacy (not to mention raining death from above on unsuspecting civilians). The science of robotics — another term coined by a science fiction writer, in this case Isaac Asimov — has grown enormously since its inception in the late 1940s, and it remains a screen onto which we project all kinds of ideological images. We believe that, with the robot, we can manufacture agency: we can build the will to be free into all our machines, regardless of what they might be doing to the agency and freedom of the humans tasked to work with them.


In Robot Futures, Illah Reza Nourbakhsh, a professor of robotics at Carnegie Mellon University, tries to complicate our ideas about our robot helpers. He shows how developments in motor, battery, and wireless-communication technology have vastly altered the possibilities for robotics, and argues that our ideas about what robots are haven’t kept pace. As robotic technology develops and insinuates itself further into everyday life, what counts as a robot is becoming more slippery, drifting further away from the C-3POs and the Twikis and the other box-of-bolts robot buddies of science fiction. Should we see robots simply as useful machines? Is a garage-door opener a robot? Do they have to be self-propelling? (A Roomba seems far more robotic than a laptop, but a laptop is far more useful.) Must a robot think “for itself,” as if it actually has a “self”? Do they even have to be machines at all? Are RFID sensors robots? Are networks? Do I become a robot when I use my smartphone? When some asshole puts on Google Glass and looks at me on the street, do we both become robots?


Nourbakhsh suggests we regard robots as being able “to operate in the real world just as well as any software program operates on the Internet.” In other words, we should think of robots as programs running on the operating system known as “real life,” in which humans are simply variables awaiting efficient processing. He calls robots the “living glue between our physical world and the digital universe,” a “new species” mediating between the two more efficiently than humans can.


Though each chapter of Robot Futures is headed with a short science-fiction scenario, its aim is not to induce gee-whiz credulity in readers or get us excited about the robots’ coming capabilities. Short and sobering, the book promises no easy robotic solutions to intractable human problems. To his credit, Nourbakhsh doesn’t try to convince us that, in the future, it will be normal to have robot sex partners, or that robots will be able to pass as human beings. That notion may have been very fruitful for science-fiction writers like Philip K. Dick, but not for scientists. “Robotics has grown up and grown out of that mold,” Nourbakhsh writes.


As the book’s title indicates, Nourbakhsh prefers to write of “futures” rather than a single monolithic future, refusing to endow technology with a specific and irresistible purpose. Despite occasional rhetorical flourishes that seem to grant robots agency (robots are often described as “knowing” things, and one chapter worries that we will “dehumanize socially agile nonhumans”), he doesn’t argue, like Ray Kurzweil and other prophets of the singularity, that technology “wants” some particular kind of liberated society and that resisting it is useless and backward (what Michael Sacasas calls the “Borg Complex”).


Instead, Nourbakhsh devotes each chapter to competing worst-case scenarios for how robots might transform everyday life. One addresses marketing, reimagining advertisements as adaptive robots that will single us out with bespoke pitches, or track our every gesture in pursuit of the aggregate data that will bring out the perfect consumer dormant within each of us. Another looks at how robots will let us be virtually present in multiple locations simultaneously, thereby (he reasons) diluting our attention and making us fully present nowhere. A chapter about “robot smog” considers how public space may come to be polluted by cheap, homemade robotic nuisances: imagine parks congested with 3-D-printed spambots, turned loose to pursue unscrupulous, antisocial aims. “In this robot future, personal opinions are not just communicated,” Nourbakhsh warns, “they are acted out by chaotic ecologies of robot minions.” He likens this possible “zoo of obnoxious, exotic new creatures” to the over-Flashified MySpace pages of old, which, he argues, cluttered the Internet once the basic tools of HTML design were made simple enough for the ignorant masses to use. This dreadful scenario reflects Nourbakhsh’s view that paternalistic scientists should build robots to meet community needs expressed through the proper institutional channels rather than allow individuals to “hack” them in selfish efforts at personal expression or magnified telepresence.


As Nourbakhsh details these future scenarios in isolation from one another, he overlooks questions raised by their incompatibilities: Will robots be by and for the people, or will they be the exclusive property of scientists and the states and corporations that fund them? Will our consciousness be completely determined by robot mediators serving corporate interests, as he speculates in his chapter on marketing, or will we be hacking robot technology to serve our anarchic individualist ends, as in the section on DIY robotics? A cumulative presentation that addressed how some of these contradictions might shake out would have highlighted the probability that the adoption of technology will always be uneven, inflected by whatever existing hierarchies, tools, and taxonomies have already been implemented for social control. Instead, each techno-apocalypse is trapped in its separate snow globe, with no sense of how they might mitigate or cancel each other out.


Nonetheless, a couple of themes coalesce across the book’s various scenarios: limitless robotic perception and surveillance, and the ethical concerns raised by machines with variable amounts of autonomy. When robots can perceive more and be programmed to act without human intervention on the basis of what they perceive, a new moral terrain opens up. At stake in how we define “robot” is just how much of our own agency and responsibility we would like to disavow. No matter what else a robot is, it is a servant. The broader the concept of robot, the more we can blame on them. “Our moral future,” Nourbakhsh writes, “will be tested by robot cruelty and robot-human relations.” But all robot-human relations are more or less disguised mediations of human-human relations. To even consider the possibility of robot cruelty, as Nourbakhsh does, is to underplay the reality of human cruelty, while tacitly authorizing more of it.


This is not a matter of science fiction. Robots are already frequently used as scapegoats to make what we consider ethically repugnant acts — de-skilling labor, subjecting people to invasive surveillance, diminishing the sanctity or integrity of the lives of others, blowing up innocent children — seem permissible. The prospect of self-propagating code and “adjustable autonomy” — the term for robots’ ability to seamlessly switch between levels of direct human control and automatic pilot — opens a bigger ethical escape hatch for programmers and operators. Adjustable autonomy, as Nourbakhsh presents it, is mainly a matter of efficiency, of putting machines in control when we are too lazy or busy to do what they can do for us. But it is also a matter of morality: we can grant robots autonomy to shield ourselves from moral responsibility for what they do. “The control advances of the near future,” Nourbakhsh acknowledges, “act as a veil […] [T]he true identity of the robot, as an autonomous machine or human vessel, may be hidden and with it, our ability to build a concrete model of just whom we are interacting with.”


The scariest thing about a world filled with robots, then, is not that they might turn on us, but that they would allow us to diffuse moral accountability even further than we already have. Nourbakhsh points out that



technology tends to increase the complexity of systems along several axes — the number of people partly responsible for a newer product is ever larger; the amount of software in new products dwarfs earlier systems; the interface used by the operator to control the product becomes more intricate. All these axes of complexity make the resulting system errors less clearly understandable and less accountable, with no one ever directly or solely responsible for the behavior of a complex robotic system.



In such an endlessly complex world, it becomes much too difficult to assign responsibility; it’s far easier to regard robotic systems gone awry as ersatz natural disasters, technological superstorms that no one could have foreseen or prevented. And when the systems’ ordinary functioning yields inequality or immiseration, that, too, occurs beyond the blame game.


But as robots deflect attention away from untangling the methods and motives by which our complex modern world has been built, they intensify the scrutiny on the individuals caught up in those systems. Early on, Nourbakhsh asserts that “modern robotics is about how anything” — anything — “can perceive the world, make sense of its surroundings, then act to push back on the world and make change.” This conception shifts focus away from the androids, drones, and hulking assembly-line welding arms that might immediately come to mind when you hear the word “robot” to the way ordinary objects can be outfitted with sensors and trackers, thus transforming them into “smart” devices, or what the geography professor Rob Kitchin has called “logjects.” The quintessential logject is the smartphone, the most effective of today’s commonplace robots. Cleverly disguised as a passive, purely responsive touchscreen, it traces our online activity as well as our physical location and can be outfitted with a suite of applications to mediate the interaction between the two. “Every new way in which our cell phone ingratiates itself into our daily activities,” Nourbakhsh notes, “also makes robots able to detect and respond to our environment more comprehensively.”


“Smart” devices are smart not because they can think but because they record information about us. As Nourbakhsh emphasizes, functioning robots must be able to translate perceptions of the material world into operable code. They recast “the sublime physical and visual processing systems we have” into forms more suited to machine intelligence. Regardless of their specific function, then, robots are surveillance machines before they are anything else, and they implement surveillance wherever they are deployed. The robot sensors that watch us can be programmed, though usually not by us. They are embedded in systems and linked in networks that can’t be unilaterally accessed; at best we can adjust the settings of devices as they pertain to ourselves. Often, we are tracked without knowing it by machines we don’t own, and the information they collect about us is collated and cross-referenced without our knowledge by institutions that devote enormous resources to unearthing the coercive possibilities of that information. Instead of the robots in our pockets being calibrated to serve our instrumental ends as we define them, networks of sensors decide what we are likely to want and implement control in the form of anticipatory personal service. Extrapolating from existing technologies and successful experiments in lab settings, Nourbakhsh posits a future in which recommendation engines have become as ubiquitous in the material world as they already are in the virtual, and knit together as “a new form of human remote control.”


Robots, then, are like recording angels, here to watch us and then reshape our environment in order to render us more docile, under the guise of giving us what we really want. Since robots have no opinions about what we do, we can mistake their reactions to it for neutral reflections that reveal our consistent underlying preferences. But only robots have the same “preferences” at all times; only robots are guided unfailingly by their programming. Humans, as we know, are far more ambivalent. Robotics, at its worst, threatens to approach that innate emotional ambiguity as a problem to solve. Any activity that can be augmented robotically becomes an occasion for surveillance, if not a pretense for it, and the particularities of each activity get flattened into a homogeneous digital consistency. This allows robots to present our discrete behavior in different contexts as continuous, and to use brute computational power to shape it into a definitive expression of our unchanging personality. The data robots have collected can then be repurposed for ends we didn’t anticipate or mean to authorize; institutions can begin to use robot-gathered information to tell us who and what we really are, regardless of our sense that we are different people at different times or many things at once. Robots can then reshape our perceptual environment to reinforce that self-image without our having to consciously choose to live up to it. We no longer need to make a concerted effort to become that legible, recognized self; the logjects make that effort for us.


Robot perception thus reshapes human self-perception. The more our social systems respond to code rather than to our heterogeneous behavior, the harder it is for us to conceive of our agency in nonrobotic terms. From the point of view of the robots we’ve delegated to administer society for us, we are only whatever data we produce. We can only register in these systems in the digitized ways they can capture and process. So it behooves us to make more of what we do machine-readable, to emulate the machines, to adapt ourselves deliberately to what they can sense and process, to make our faces, our gestures, steady and scannable. In other words, a quantified environment that responds readily to what we do conditions us to behave in data-friendly ways. Under the gaze of so many machines, we begin to act more like machines ourselves.


In some ways, this future is already here. Automatic self-tracking through Foursquare and Twitter is marketed as a necessary tool of human self-expression. The discourse around smartphones and social media recasts surveillance as “sharing” and the construction of searchable data archives on ourselves as a kind of microcelebrity, which makes robots our paparazzi. Nourbakhsh offers that, “if you are an unyielding optimist,” the fact that a legion of robots will know far, far more about you as an individual than you can know about any of them could make you feel like a “movie star.” If you are not so optimistic, you’ll notice that we now experience all the stress of fame, with little of the material reward.


Although Robot Futures is itself dominated by pessimism, Nourbakhsh still hopes that social funding, more stringent engineering ethics, tech-oriented education, and a community-based research focus will allow the “robotics revolution [to] affirm the most nonrobotic quality of our world: our humanity.” That sounds uplifting, but the book itself is evidence of how much easier it is to imagine the opposite: a dismal robot future in which the most state-of-the-art machine is a perfectly controlled flesh-and-blood human. All of the items on Nourbakhsh’s wish list require political engagement and collective action, and recent technology has, at best, a mixed record with both, frequently isolating people in the process of connecting them.


The fantasy inherent in robots is that they will allow us to transcend politics, to make power and autonomy into non-zero-sum games. Robot technology will simply empower each one of us separately, as individuals — there is enough power to go around, and there are no conflicts of interest, really — and all we need to worry about is whether the robots will cooperate. Each of Nourbakhsh’s chapters is meant as a cautionary tale of “empowerment gone wrong”; in all of them, “institutions benefit, but the problem is that their goals never align perfectly with those of society as a whole.” But “society as a whole” is always fractious, and no amount of technology will make conflicts of interest among institutions and individuals disappear. Robots will be proxies in those power struggles, for certain, but there’s no reason to think they will bring them to an end.


¤

LARB Contributor

Rob Horning (@marginalutility) is an editor at The New Inquiry.
