Death by Optimization

How healthcare apps and platforms distance us from care.


SCROLLING THROUGH my mom’s medical files on the Kaiser Permanente platform, I found myself looking at a photo of her leg. Or maybe it was her arm? A bee sting? By the time I came across the image, I hadn’t seen her in two years, and she had been dead for two-thirds of that time.

When my mom was found deceased at her home in April 2024, she had not been ill. At least, she hadn’t told me or anyone else that she was. I immediately called her doctor, thinking she must have had an illness she chose not to share. I remember how his voice sounded when I gave him the news of her death: he was as shocked as I was. He hadn’t seen her in person, he said, since October 2020—almost four years before. Yet there was his name emblazoned on her prescriptions (for her thyroid, cholesterol, and heart), all of them current and being mailed to her. I found out that the Kaiser staff had no idea what my mother weighed or what her blood work showed—and they hadn’t known for nearly four years. And yet everything looked as if it were seamlessly ticking along. This highly efficient system did not actually require my mother to commute a few miles down the road to be seen in person. In computer science parlance, the system was “optimizing” for the wrong thing.


This is a story about the platform systems layered over our healthcare systems. We now use apps to message our providers (it is almost impossible to call them directly), and pharmacy renewals happen automatically. These systems can give the appearance of activity and care happening at the press of a button, but the platformization of medical care has mostly served to place patients’ bodies at a distance from their caregivers. This enables doctors to be assigned ever more patients, and it enables the deaths of people like my mother—albeit inadvertently, I am sure. And yet, disregard for human bodies is not new in the story of mechanization. Platforms are software—apps that transform our phones into pocket-size computers we can take anywhere and use wirelessly. Platforms are also the organizations behind this software, and they aim to profit from their creations. Apps allow action at a distance, which we desperately needed during the COVID-19 pandemic, and so these platforms quickly penetrated healthcare, education, dating, and so on. In the process, they transformed how we socialize, how we work, and now, with AI, how we think and reason.


In my research, I focus on how platforms “intermediate” between two nodes that heretofore interacted in a straightforward fashion. For example, when you read the printed newspaper, whatever is there on the page stays there until the next edition comes out. You and the paper (its writers and editors) are engaging in a stable exchange. By contrast, when you read a news feed that is constantly updated and tailored to your specific tastes, the nature of your experience changes. What you see is curated by a platform app that surveils your every move and knows how long you linger over a certain headline, what you click, and what you comment on. It develops a profile of “who you are.” You and the news are triangulated through the platform, which controls the interface through which you see the news and through which we see one another, in a continuous feedback loop.
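
The loop is simple enough to sketch in code. Here is a toy version, in Python, of how a feed might fold your dwell time back into what it shows you next; every signal, weight, and topic is invented for illustration, not drawn from any real platform’s ranking system:

```python
# A toy feedback loop: the feed learns from dwell time and reranks.
# All names and weights are invented; no real platform's ranking
# is this simple or this visible.

from collections import defaultdict

profile = defaultdict(float)  # topic -> inferred interest score

def observe(topic: str, dwell_seconds: float, clicked: bool) -> None:
    """Fold one reading session back into the profile of 'who you are'."""
    profile[topic] += 0.1 * dwell_seconds + (1.0 if clicked else 0.0)

def rank_feed(stories: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (topic, headline) pairs by the platform's profile of you."""
    return sorted(stories, key=lambda s: profile[s[0]], reverse=True)

# One pass through the loop: you linger, the profile updates,
# and the next feed is curated accordingly.
observe("celebrity", dwell_seconds=45, clicked=True)
observe("city-council", dwell_seconds=3, clicked=False)
print(rank_feed([("city-council", "Budget vote"), ("celebrity", "Breakup")]))
```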


Platforms fool you into thinking you’re engaging with a person or entity when you’re actually just engaging with a software application. What looks like a flurry of medical activity on my mom’s app—prescriptions renewed at the click of a button, dispensed by a pharmacy that is just a warehouse, and arriving at her doorstep days later; messages sent and responded to by someone, maybe her doctor but more likely an aide who costs less to employ—happens behind a screen. This is not the “real thing.” The body itself is not showing up anywhere in this system. The platform is processing a bunch of patient-related stuff but not the patient herself.


That was a critical failure in my mom’s case. She was 140 pounds at autopsy, 30 or 40 or even 50 pounds under her normal weight. Any one of the medications so smoothly dispensed by the automated system—the statin, the thyroid hormone, the blood pressure medication—could have killed her by being miscalibrated to her weight and the conditions unfolding inside her body, destroying her kidneys, liver, or heart. I am sure she did not know this; I didn’t know this until I became involved in the medical procedural of figuring out what happened. I am sure she thought that, by taking her medications, she was doing what she should be doing.


Here’s the problem. Apps play a game of show-and-tell as part of platforms’ intermediation of real activities. Important details can be lost when we’re not looking directly at data or bodies or events, only at the software’s representation of them. I knew this long before my mom died because, a decade ago, I studied Uber drivers in Boston. At that time, drivers were entirely “blind” to a rider’s destination; today, the app provides the illusion of choice—a projected rate, say—that still hides the reality of deadhead miles: a “less good” ride on which the time and distance driven to the pickup, combined with the costs of operating (gas, tires, and depreciation), eat up the fare. It is even worse now: because the company has a lot of “big data” about drivers’ working patterns, it can tell whether an individual driver works “shift style” (a certain set of hours most days) or is “income maximizing” (driving until hitting an earnings target), and target each type accordingly.
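
To make the arithmetic of a “less good” ride concrete, here is a back-of-the-envelope sketch in Python. The per-mile operating cost and the trip figures are invented stand-ins, not any real driver’s numbers; the point is only that the app displays the fare while the driver absorbs the hidden miles:

```python
# Back-of-the-envelope deadhead arithmetic. All figures are invented;
# the per-mile operating cost (gas, tires, depreciation) is an
# assumed stand-in, not a real rate.

COST_PER_MILE = 0.67  # assumed all-in operating cost, dollars per mile

def net_earnings(fare: float, pickup_miles: float, trip_miles: float) -> float:
    """Fare minus operating cost over ALL miles driven, paid or not."""
    return fare - COST_PER_MILE * (pickup_miles + trip_miles)

# The app shows an $11 fare either way; only one ride is worth taking.
print(net_earnings(fare=11.00, pickup_miles=1.0, trip_miles=4.0))  # ~7.65
print(net_earnings(fare=11.00, pickup_miles=9.0, trip_miles=4.0))  # ~2.29
```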


Platforms conceal important details from consumers as well—including, for example, the labor that goes into social media content. An emotionally moving video of a kid who is sad about a sick puppy doesn’t show the work that went into producing it—until the mom accidentally posts an unedited video revealing the multiple takes in which she orders her child to do precise things with his body to convey “sad.” Instacart shows different prices at the same location for the same item and regularly tests consumers’ price tolerances, purportedly in order to help stores set prices. But once you’ve discovered that different customers will pay different prices for the same item, why would a platform ever return to static pricing? Ride-“sharing” certainly won’t.
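
What “testing price tolerances” amounts to is also easy to sketch. Here is a toy experiment in Python, with invented prices and an invented model of shoppers; the structure (randomize the price, observe who buys, keep what earns the most) is the whole trick:

```python
# A toy price-tolerance experiment. Prices and the buy-probability
# model are invented; no real retailer's data is used.

import random

CANDIDATE_PRICES = [3.49, 3.99, 4.49]  # same item, same store
revenue = {p: 0.0 for p in CANDIDATE_PRICES}
trials = {p: 0 for p in CANDIDATE_PRICES}

def simulated_customer_buys(price: float) -> bool:
    # Stand-in for real shoppers: willingness to pay varies.
    return random.uniform(3.00, 5.00) >= price

for _ in range(10_000):
    p = random.choice(CANDIDATE_PRICES)  # each customer sees one price
    trials[p] += 1
    if simulated_customer_buys(p):
        revenue[p] += p

best = max(CANDIDATE_PRICES, key=lambda p: revenue[p] / max(trials[p], 1))
print("price the platform keeps:", best)
```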


The bottom line is that the platform company in control of the interface is optimizing constantly for its own aims, which are not transparent (Kaiser’s, for instance, may be to “move patient communications substantially to platform” and “maximize patient engagement”), while users of the platform are trying to optimize for something else: profit (Uber drivers), good health outcomes (patients). Regulation so far has not been directed at the interface at all (and it is the algorithms that keep changing the interface), so the optimization target simply shifts. The user cannot “win” in such a system.
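
The mismatch can be stated in a few lines of code. In this deliberately crude Python sketch, where every action and score is hypothetical, the platform picks whatever maximizes its own metric, and the user’s objective never enters the calculation:

```python
# A deliberately crude sketch of misaligned optimization targets.
# Everything here is hypothetical; the point is structural: the
# user's objective never appears in the platform's choice.

actions = {
    # action: (platform_engagement_score, user_health_score)
    "schedule in-person exam": (1.0, 10.0),
    "auto-renew prescription": (5.0, 2.0),
    "send push notification":  (8.0, 0.0),
}

def platform_choice() -> str:
    # The platform optimizes its own metric...
    return max(actions, key=lambda a: actions[a][0])

def what_the_user_needed() -> str:
    # ...while the user was optimizing for something else entirely.
    return max(actions, key=lambda a: actions[a][1])

print(platform_choice())       # "send push notification"
print(what_the_user_needed())  # "schedule in-person exam"
```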


The Kaiser platform—inadvertently or not—made the choice not to optimize for my mother’s health. It disregarded her actual body. What was it optimizing for? Platforms measure their effectiveness in terms of user engagement and can often make money by selling that engagement to someone else, typically an advertiser. But even when monetization of eyeballs is not the aim, a platform’s unit of measurement tends to be clicks and views: for example, the educational platform I am required to use as a professor measures student engagement in terms of the number of times students open online documents I assign as reading. The platform suggests that time spent clicking on documents is a worthwhile metric of student performance. This is a hilariously thin view of student learning, but it’s not funny at all if I actually use it to assign grades.
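
How thin, exactly? A minimal sketch, assuming a hypothetical gradebook wired to the platform’s click counts; the students, numbers, and grading formula are all invented:

```python
# A minimal sketch of grading by proxy metric. The structure is the
# point: document-open counts stand in for learning, and the two
# can move in opposite directions.

students = {
    # name: (times_documents_opened, exam_score_out_of_100)
    "skimmer": (40, 55),  # opens everything, retains little
    "reader":  (6, 92),   # downloads once, reads offline
}

def engagement_grade(opens: int) -> float:
    """Grade by the platform's metric: clicks on assigned documents."""
    return min(100.0, opens * 2.5)

for name, (opens, exam) in students.items():
    print(name, "engagement grade:", engagement_grade(opens), "exam:", exam)
# The click metric ranks 'skimmer' above 'reader'; the exam reverses it.
```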


Putting platforms everywhere, without thinking about the values and aims of the original activity they intermediate, thus presents a profound societal problem. Software designers are skilled at coding, not at being doctors, educators, or journalists. The very act of a platform’s intermediation places the two interacting nodes at a distance from each other: student from teacher, doctor from patient, friend from friend. Because this happens digitally, it is easy to miss. You might actually feel closer because the transactions are instantaneous. But the algorithmically curated feed of your social media connections, anticipating what you may want to see, is not the same thing as the actual posts (change your Facebook feed to “chronological” and see the difference). Being able to receive a digital photo of your patient’s arm is not the same as being able to look the patient in the eye. Clicking on a document is not the same as reading and understanding it.


Platforms are blasé about this discrepancy in activity and experience because the underlying activity is not their business: they exist to mediate human interaction, not to perform it. In legal filings, platforms invariably assert that they are transacting only in information (Uber is not providing transportation, Google is not a publisher, and eBay is not selling goods). Their measure of value is not the success of the underlying activity but user engagement. This is why AI agents pander to you, validating everything you say. They are designed to keep you involved, and so, unlike a good friend or a professional therapist, they will not insist on the existence of something called reality if doing so will drive your disengagement. This is why the American Psychiatric Association is talking about AI-induced psychosis, of which there are now many recorded examples. It is also why Uber and Lyft externalize negative effects onto the communities they claim to serve—most notably by increasing traffic and congestion. And it is why social media companies have no real strategy for removing content produced by child labor, like the video of a boy coached on how to be sad about a sick dog.


How to get platforms to care about the real activities and people they intermediate is a critical legal and technology-management issue of our time. In the absence of a structure that imposes human concerns on software, people will end up trapped in systems optimized in actively harmful ways. My mother was one of those people—and she didn’t make it out alive.


¤


Featured image: Kaiser Permanente app logo from Apple App Store. Colors have been edited.

LARB Contributor

Hilary C. Robinson is an associate professor of law and sociology at Northeastern University.

