The Death Predictor: A Helpful New Tool or an Ethical Morass?

A senior in hospice care.

(© bilderstoeckchen/Fotolia)


Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, the patient's first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on a computer algorithm that's often wrong.
Artificial intelligence, now often called deep learning or neural networks, has radically transformed language and image processing. It's allowed computers to play chess better than the world's grandmasters and outwit the best Jeopardy players. But it still can't precisely tell a doctor how long a patient has left, or how to help that person live longer.

Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.

Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors over-estimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."

But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on human bias to select those whose lives are nearing their end, Shah and his colleagues showed that they could use a deep learning algorithm based on medical records to flag incoming patients with a life expectancy of three months to a year. They use that data to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on their referrals or taking the time to read extensive medical charts.

But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.

"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.
Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide if this patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.

Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not what specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge Social Security death records with her system's medical records – a time-consuming and cumbersome process.

An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American historically have had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithms, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.

Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently than an 85-year-old who breaks a hip trying to get out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says. Human judgment will always be required in medical care and an algorithm should never be followed blindly, he says.
Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use their data to justify a rate increase. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefitting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.

Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."

Karen Weintraub
Karen Weintraub, an independent health and science journalist, writes regularly for The New York Times, The Washington Post, Scientific American and other news outlets. She also teaches journalism at Boston University, MIT and the Harvard Extension School, and she's writing a book about the history of Cambridge, MA, where she lives with her husband and two daughters.