Will Robots Save Us From Medical Errors? It’s Complicated

LIKE HER PARENTS, who are my neighbors, Lara unabashedly wears athletic socks with her sandals. A precocious 11-year-old, she just as unabashedly professes that she wants to be a doctor when she grows up. She’s been working on a “big project” for school about the past, present, and future of medicine, and part of the assignment includes an interview with a practicing doctor. I fit the bill, which is why she’s sitting on my living room floor, a spiral notebook open on her lap, as she runs through a list of questions about doctoring.

Having just finished reading about the medical practices of ancient Egypt, she regales me with stories about some of the “crazy things” Egyptian healers did to help the sick, such as preparing medicines out of animal dung and performing amputations with stone blades. “I’m sure they had the best intentions,” I proffer, somewhat lamely. Then I add, “If someone, 5,000 years from now, looked at the way I practice medicine, they might say that some of the things I do are crazy. They’d probably point out mistakes I make every day. But, like the ancient Egyptian doctor, I’m doing what I think is best for my patient.”

“What about robots?” Lara asks next, peeking down at her notebook. The question seems like a non sequitur until she clarifies with a follow-up. “Do you think in the future all doctors will be robots?” She doesn’t add another question on top of that, but the implied follow-up is clear: will that make for a new era of better medicine?

Lara posed this question at a particularly important time, just after the publication of a brief communication in The BMJ (formerly the British Medical Journal) by Drs. Martin A. Makary and Michael Daniel, who shine a spotlight on patient safety. Makary is the developer of the operating room checklist (the precursor to the World Health Organization’s surgery checklist immortalized in Atul Gawande’s The Checklist Manifesto [2009]), and Daniel is a patient safety research fellow, both at Johns Hopkins University School of Medicine. Shorter than most editorials, their two-page letter argues that medical errors should be considered the third leading cause of death, responsible for over 250,000 deaths per year and trailing only heart disease and cancer. If this is true, then maybe robots will save us from ourselves.

There are some important caveats to consider, however. For one thing, the title of their BMJ publication, “Medical error — the third leading cause of death in the US,” seems to have been tailor-made for mainstream publications, social media, and news feeds, which duly regurgitated the numbers with well-calibrated expressions of shock and dismay. To be sure, there is substance to their report, and indeed what makes their analysis alarming from a historical perspective is that, in 1999, the Institute of Medicine issued a landmark report, To Err Is Human: Building a Safer Health System, which estimated an annual incidence of up to 98,000 deaths due to medical error. That report initiated a host of quality improvement measures across US hospitals. But now, 17 years later, our healthcare system appears to be performing far worse. When multiple news outlets trumpet headlines like “Are Medical Errors Deadlier Than Strokes and Alzheimer’s?” (The Atlantic) and “Medical Errors Have Become So Common That They Are Now a Leading Cause of Death” (Slate), it is certainly reasonable to wonder whether a drastic overhaul of the workforce — fewer humans, more robots — might provide relief.

When Lara asked me if I thought all doctors would be robots in the future, I told her that robots were already being used in surgeries, but more so as technical assistants than as autonomous healers. Computers and electronic medical records have replaced paper charts spilling out of filing cabinets, and doctors now use their smartphones far more often than their stethoscopes. Part of this computerization has indeed been for the explicit purpose of reducing human errors. Many electronic medical records, for example, have alerts for clinicians based on a patient’s diagnostic codes and/or lab values. “Should this patient be on prophylactic heparin?” a pop-up window will ask an admitting physician if he or she has not already ordered prophylaxis for hospital-associated deep vein thrombosis. My prescribing software will not allow me to order a new medication if I haven’t reviewed a patient’s allergies. Yet, if we are already approaching Lara’s implicit ideal of a robotic doctor, why are medical errors apparently on the rise in an era of ever-increasing computerization?
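For readers curious what such safety alerts amount to under the hood, the logic is usually no more exotic than a handful of if-then rules. Here is a minimal, purely illustrative sketch in Python — the field names, rules, and wording are my own invented examples, not any real electronic medical record system’s:

```python
# Illustrative sketch of rule-based EMR safety alerts. All field names
# and rules are hypothetical examples, not a real system's.

def admission_alerts(patient):
    """Return a list of pop-up alerts for an admitting clinician."""
    alerts = []

    # Rule 1: suggest DVT prophylaxis if none has been ordered.
    if patient.get("hospitalized") and not patient.get("dvt_prophylaxis_ordered"):
        alerts.append("Should this patient be on prophylactic heparin?")

    # Rule 2: hold new prescriptions until allergies are reviewed.
    if patient.get("pending_prescription") and not patient.get("allergies_reviewed"):
        alerts.append("Review the patient's allergies before prescribing.")

    return alerts

# A newly admitted patient with no prophylaxis ordered and an
# unreviewed allergy list triggers both alerts.
example = {
    "hospitalized": True,
    "dvt_prophylaxis_ordered": False,
    "pending_prescription": True,
    "allergies_reviewed": False,
}
print(admission_alerts(example))
```

The point of the sketch is how shallow these rules are: they encode predefined checks, not judgment, which is part of why computerization alone has not eliminated error.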


Chuck Klosterman, arguably our greatest living pop culture critic, has just released his latest book, But What If We’re Wrong? Like Lara, Klosterman queries the past, present, and future of a variety of human pursuits — literature, popular music, artificial intelligence, pro football, dream science — and tries to sort out what will stand the test of time and what will be ridiculed in some future fifth grader’s class project. Early in his book, Klosterman introduces what he cheekily calls “Klosterman’s Razor: the philosophical belief that the best hypothesis is the one that reflexively accepts its potential wrongness to begin with.” True criticism of the present requires us to consider that our current beliefs are most likely flawed. Only this kind of open-mindedness can allow us to envision a future in which we watch three-hour video games of simulated, CGI-produced football or, perhaps, one in which no one finds Seinfeld funny. Any scientist will tell you that this razor is not proprietary to Klosterman, as no experiment is worth pursuing unless the null hypothesis is potentially valid.

Klosterman’s Razor, whether applied to the 1999 Institute of Medicine report or the new BMJ publication, might argue that a quantification of deaths due to medical errors is laden with pitfalls, only some of which have to do with accounting. For instance, in preparing their BMJ publication, Makary and Daniel culled data from four studies on US death rates from medical errors published after To Err Is Human, the largest of which included 37,000,000 Medicare patient admissions from 2000–2002, to arrive at what they believe is a more accurate estimate of medical care as the cause of death. Their analysis, however, conflates true errors (e.g., dispensing an incorrect medication) with complications (e.g., a severed blood vessel during surgery). I’m not trying to be an apologist for doctors, but there’s an ocean of difference between errors and complications. By not recognizing this difference, Makary and Daniel put far too much stock in what we doctors can actually do, be it good or bad.

In her book On Immunity (2014), Eula Biss quotes her physician-father’s joke about a two-sentence textbook for doctors: “Most problems will get better if left alone. Those problems that do not get better if left alone are likely to kill the patient no matter what you do.” I’ve taken a photo of that quote, and I show it to medical students or residents who blame themselves for a patient’s poor outcome. My application of Klosterman’s Razor to the medical error debate is a more serious version of Biss’s father’s joke. To put it bluntly, I’m arguing that the potential miscalculation we’re making is not in how we count the number of deaths but rather in our misguided belief that most (or even any) deaths are truly preventable. Computers and robots won’t make them preventable either, but I wonder if it will take a future of robotic doctors and surgeons, posting similar mortality rates, before we admit this.


When I took my nephew to see his favorite comedian, Demetri Martin, one of the loudest laughs of the night came when Martin expressed disbelief that people still died of surgical complications. “These are not new surgeries,” he said, “so I’d expect that by this point they shouldn’t be complicated.” Laughter. “I’m sorry, but your husband died on the operating table. The surgery was complicated.” Laughter. “Complicated? Yes, complicated. The doctor ate something bad for lunch, the nurse was in a grumpy mood, there was a creepy medical student who was observing and he gave everyone the jitters, so, yes, it was complicated.” The word “complication” does a disservice to doctors because it implies that the right mind could have solved the problem. I can’t think of a better alternative to complication, though. The most accurate term would incorporate the concept of luck — specifically, bad luck.

The role of luck — of bad luck, but also of good luck — is one of the secrets of medicine, shared by both patients (who want to believe that their doctors are immune to the whims of fortune) and doctors (who want to believe the exact same thing). Yet, in intimate settings, doctors will often trade stories with each other that highlight the limitations of what they can do for patients, be it good or bad. I now always wear brown shoes when I do a kidney biopsy because my two worst bleeds occurred on days when I wore black shoes. My fellows know this, but my patients don’t. A nephrologist from Paraguay recently emailed me asking for a second opinion on a case. Before describing the case, he put in a caveat: “Some of the stuff I’m about to tell you will sound like a horror story to you, and it’s really as third worldy as it can get!” Indeed, a number of crucial tests and treatments were ignored in the patient’s care. Interestingly, though, the outcome of the case was excellent. “Don’t do anything” was the essence of the second opinion I emailed back to Paraguay. Despite the “horror story” of mismanagement, the patient recovered with perfect kidney function.


True medical errors — operating on the wrong side of the body, dispensing a medication for Patient X that was prescribed for Patient Y, failing to recognize an important test result — must be recognized, accounted for, and acted upon systematically to prevent future mistakes. Computers will help in this effort, with hospitals utilizing scannable patient ID bracelets and pop-up alerts on the electronic medical record to reduce such errors. And so, too, will education. Indeed, healthcare workers are now learning, from the very start of their training, to recognize the unintended consequences of medical interventions. When medical and nursing students learn how to formulate a differential diagnosis, they are repeatedly reminded to include drug toxicity as an etiology for practically any sign or symptom. One of my professors in medical school, when teaching the differential diagnosis, urged me and my classmates to start with an easy acronym: F.T.D. (like the florist). “First Think Drugs,” that professor warned us, “and always remember that every medication we prescribe for a patient is, in its essence, a nonnatural substance, a potential poison.”

Complications, though, are an entirely different matter. Like missed free throws in the NBA Finals and wild pitches in the World Series, some errors are an inherent part of the game. The more you play, the more errors of this kind you’ll make, by sheer probability. For example, the most frequent and severe complications of kidney biopsies in my division befall those doctors who do the highest volume of procedures and who also, because of their high volumes, happen to be the most skilled at doing such biopsies. Fortunately, in medicine, complications tend to be rare events and do not necessarily influence a patient’s ultimate outcome.

Makary and Daniel recommend that death certificates contain an extra field asking whether a “preventable complication stemming from the patient’s medical care contributed to the death.” My problem with this recommendation is the inherent subjectivity of the terms “preventable” and “contributed.” Their BMJ correspondence provides a case vignette intended to illustrate the role of medical error in a patient’s death. The case features a young woman with a heart transplant who dies of a bleeding complication after a diagnostic pericardiocentesis (the aspiration of fluid from the space that surrounds the heart). The needle inserted in that procedure grazed the liver, causing a pseudoaneurysm that eventually ruptured. She died from hemorrhage and subsequent cardiac arrest. I’d argue that the only way this death could have been prevented would be never to have done the procedure in the first place. Her underlying health contributed as much to her death as the procedure’s complication. I, of course, am also using “prevent” and “contribute” subjectively.


Klosterman describes the last “monster shift in science,” the Copernican Revolution (the belief that the Earth rotates around the Sun rather than vice versa), as

invisible to the vast majority of the planet. Granted, a revolution within our accelerated culture would happen far faster. The amount of human information exchanged is exponentially different, as is the overall level of literacy. But that still doesn’t mean a transformative period would be transparent to the people actually experiencing it.

In some ways, the assumption behind Lara’s question is spot on: the replacement of doctors with robots and computers is already underway.

Joel DeCastro, a urologist at my hospital, regularly uses robotic assistance in the operating room when he removes a prostate, bladder, or kidney in a patient with cancer in one of these organs. But, when I asked about how robots had changed his practice, he argued that “the term robotic may give people the wrong impression.” “This is just like laparoscopic surgery, but instead of me holding the instruments, robotic arms are holding them, and I control the robotic arms while sitting at a console several feet away.” When I asked if he envisioned a future in which the robotic arms might have some, or even complete, autonomy in their operating motions, he reiterated that “the entire operation is human controlled.” The robotic arms allow for surgeries to be performed through incisions that are significantly smaller than a human arm could penetrate, which translates into less immediate postoperative pain and shorter hospital stays. Robotic surgery is also associated with less blood loss, but has not yet consistently shown a reduction in postoperative infections, although DeCastro thinks the data will eventually show this, too. Nonetheless, even with robots, there are complications, because, even with robots, there is human involvement.


“I think the human element will always be present,” I eventually say to Lara in response to her question, aware that my words will be used for a fifth-grade project. And that may be a good thing. After all, as I tell her, patients want to have a conversation with their doctors. They want the doctor to listen to them, and they want to listen to what the doctor has to say. A robot could prescribe the right medicine or perform a surgery well, but I don’t think a robot could explain to a patient why the medicine or the surgery was good for that particular patient at that particular time in his or her particular life.

Although I believe wholeheartedly in what I’m telling Lara, I refrain from adding something else that I believe with equal conviction. The most obvious difference between me and a robot doctor, following a pre-specified diagnostic and treatment algorithm, is that I can go off script. I can make a risky decision. I can recommend an intervention that’s not evidence-based. I can advise stopping a treatment that another doctor felt was entirely appropriate. As long as these non-algorithmic actions are performed with the patient’s best interest in mind, then who’s to say (other than a robot) that I’m wrong?

I keep going back to Eula Biss’s father’s two-sentence textbook for physicians, a pithier summation of my own inherent belief that my patients will do well or poorly regardless of what I do. My actions only hasten that outcome. This belief is why I often question whether I should have become a doctor. The physicians who’ve mentored me always seemed to view themselves as irreplaceable players in their patients’ health. I don’t share this view.

At the end of his chapter on American politics in But What If We’re Wrong?, Klosterman writes,

The ultimate failure of the United States will probably not derive from the problems we see or the conflicts we wage. It will more likely derive from our uncompromising belief in the things we consider unimpeachable and idealized and beautiful. Because every strength is a weakness, if given enough time.

The conviction that we can perfect the art of medicine may become our weakness. If Lara’s prediction about robots replacing humans as doctors turns out to be true, these lines from Klosterman will be a fitting eulogy for our healthcare system. Go back and read them, adding the words “medical system” after “United States,” and then be grateful for those doctors who are willing to accept their own limitations, their own complications, their own errors.


Andrew Bomback is a physician and writer in New York. His essays have recently appeared in The Millions, Vol. 1 Brooklyn, Ohio Edit, Hobart, Entropy, Human Parts, and Essay Daily.