In 2016, venture capitalists invested $5 billion in startups involving artificial intelligence, representing a 40 percent increase from 2012. With hopes of securing a foothold in what promises to be a multibillion-dollar industry, some of the most influential companies in the world—including IBM, Apple, and Google—are pouring hundreds of millions of dollars into their AI research-and-development labs. Health care in particular has been a favourite target for these investments. Google’s research website states that “machine learning has dozens of possible application areas, but healthcare stands out as a remarkable opportunity to benefit people.” Last year, the tech giant channelled more than 30 percent of its annual venture-capital expenditure into medical technologies, forecasting a future in which computers and machines power our health care system.
As with any burgeoning industry, there have been gaffes along the way. In 2013, IBM announced a partnership with MD Anderson, a highly regarded cancer-care centre at the University of Texas in Houston. The partnership was meant to revolutionize cancer care by integrating real-world clinical oncology research with Watson, IBM’s flagship AI platform. IBM said its AI program was “designed to integrate the knowledge of MD Anderson’s clinicians and researchers, and to advance the cancer centre’s goal of treating patients with the most effective, safe and evidence-based standard of care available.” But by mid-2016, there were concerns that the project wasn’t meeting its goals. In less than four years, MD Anderson had spent more than $62 million (US) on the project, with little to show for it. By September 2016, the university had disbanded the project, ending what many had seen as a prototype for AI’s future in medicine.
Cautionary tales from industry aren’t the only threat to AI’s growth in the health care sector. The US Food and Drug Administration—which has struggled for years to keep up with thousands of applications for the approval of medical apps—recently launched a new unit dedicated to regulating the mercurial “digital-health” industry, which includes everything from simple medical apps that measure your pulse to complex industrial ventures that use machine learning to diagnose cancer. The FDA is now applying higher regulatory standards to any app or product that deals with the diagnosis or treatment of a serious medical condition. Health Canada has not yet followed suit. It still assesses all medical devices—regardless of their stated purpose or level of complexity—under the medical-devices regulations of the Canadian Food and Drugs Act, rules last overhauled in 1998, when mobile medical technology was still in its infancy.
Despite funding challenges and regulatory pitfalls, there have been important breakthroughs in health care and AI. Not surprisingly, computer algorithms and machine-learning platforms are outperforming humans on tasks that involve sorting, calculating, and integrating information on the basis of quantifiable, discrete data units. In analyzing X-rays, pathology slides, and MRI scans, AI programs appear to have a distinct advantage over their human counterparts. Even in the emergency room, where conditions change rapidly and quantitative data is often unavailable, AI is faring surprisingly well at helping doctors diagnose and treat diseases.
A group of researchers at the Geisinger Neuroscience Institute in Scranton, Pennsylvania, led by vascular neurologist Ramin Zand, spent the past two years developing and testing an AI program to help ER doctors identify when a patient is having a stroke. Zand and his team designed an artificial neural network, a type of machine-learning platform composed of discrete units that share a vast array of connections—the same way that neurons share synapses in the brain. The network is capable of independently learning new tasks. Researchers gave the artificial neural network access to a database of 260 patients who had come to the ER with stroke-like symptoms, such as weakness, slurred speech, and confusion. Only half of the patients had actually suffered a stroke, while the other half had some other cause for their symptoms, such as low blood pressure, low blood sugar, or an infection. It was up to the artificial neural network to learn how to differentiate between the two. “What we found is that the computer can diagnose stroke in a patient more precisely compared to a group of trained paramedics,” says Zand.
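To make the idea concrete, here is a minimal sketch of that kind of classifier in Python using scikit-learn. Everything in it is an illustrative stand-in: the triage features, the synthetic records, the random labels, and the single hidden layer of sixteen units are assumptions for demonstration, not Geisinger’s actual platform or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for triage records. Real features would be pulled
# from the patient's chart as it is written.
n = 260  # mirrors the size of the study cohort
X = np.column_stack([
    rng.normal(68, 12, n),    # age (years)
    rng.normal(150, 25, n),   # systolic blood pressure (mm Hg)
    rng.normal(6.5, 2.0, n),  # blood glucose (mmol/L)
    rng.uniform(0, 12, n),    # hours since symptom onset
])
y = rng.integers(0, 2, n)     # 1 = stroke, 0 = stroke mimic (random here)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A single hidden layer of densely connected units -- the "discrete units
# sharing a vast array of connections" described above.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(scaler.transform(X_train), y_train)

print("held-out accuracy:", net.score(scaler.transform(X_test), y_test))
```

On real charts, the labels would come from confirmed discharge diagnoses, and the network’s output would be a probability used to trigger an alert rather than a hard verdict.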
When a stroke occurs, brain tissue is starved of oxygen and brain cells begin to die. The earlier blood supply is restored to the brain, the less brain tissue is lost. Early diagnosis can therefore have life-or-death implications. No one appreciates this more than Zand, who deals with the impacts of stroke on a daily basis. “The AI is literally reading the patient’s chart as they’re being assessed at triage or by paramedics,” he says. “It can do it in a fraction of the time it takes humans…[and] we’ll be putting the AI platform on a cloud system, so that no matter where they are, when an ER physician opens a patient’s chart, they can get an alert telling them that this patient may be having a stroke.”
Improving patient outcomes is the true test of any new diagnostic platform: if a new test helps doctors make a diagnosis but fails to actually improve how patients fare, it’s generally not worth the cost of replacing the old technology. Over the next few years, Zand and his team will test their new AI platform in the field to see if it improves patient survival, with the hope of making it available to ER doctors working in major academic medical centres and remote rural hospitals.
There have already been successes introducing AI platforms in ER and critical-care settings. In 2014, Suchi Saria, a researcher at Johns Hopkins University in Baltimore, designed an AI platform to help diagnose sepsis, a potentially life-threatening condition in which the body mounts a systemic inflammatory response to an infectious organism, such as a virus or bacterium. Sepsis affects tens of millions of people across the globe every year, and severe cases are associated with a mortality rate of up to 30 percent. Traditionally, health care workers have had to combine a number of clinical parameters—heart rate, temperature, blood pressure, white blood cell count, and carbon dioxide level—to generate a numerical score reflecting a patient’s likelihood of having sepsis. But for a physician or nurse to calculate a patient’s score, they need to be thinking of sepsis as a possible diagnosis in the first place. If they don’t consider the possibility, they can’t test for it.
Saria and her team have shown that doctors don’t need to generate the score themselves: they can off-load that responsibility to an AI program capable of simultaneously monitoring sepsis scores for multiple patients in real time, thereby avoiding delays in diagnosis and treatment. Saria’s AI program had an 83 percent success rate at identifying sepsis before the patient went into shock, compared to a success rate of 74 percent for calculations done by humans. This technology, inconceivable a decade ago, has the potential to save millions of lives across the globe annually.
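The difference between the two approaches is easy to sketch. Below is a toy Python version of a threshold-count score of the kind clinicians tally by hand, wrapped in a loop that recomputes it automatically for every monitored patient. The thresholds loosely echo the classic SIRS criteria; they, the Vitals fields, and the ward data are all illustrative assumptions, not Saria’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float   # beats per minute
    temperature: float  # degrees Celsius
    systolic_bp: float  # mm Hg
    wbc: float          # white blood cell count, x10^9 per litre
    paco2: float        # carbon dioxide partial pressure, mm Hg

def sepsis_score(v: Vitals) -> int:
    """Count how many parameters cross a warning threshold
    (illustrative cut-offs, loosely modelled on SIRS)."""
    score = 0
    score += v.heart_rate > 90
    score += v.temperature > 38.0 or v.temperature < 36.0
    score += v.systolic_bp < 90
    score += v.wbc > 12.0 or v.wbc < 4.0
    score += v.paco2 < 32
    return score

# The advantage of automation: the score is recomputed for every monitored
# patient whenever new vitals arrive, so no one has to think to order it.
ward = {
    "bed 1": Vitals(112, 38.6, 88, 14.2, 30),
    "bed 2": Vitals(74, 36.8, 121, 7.5, 40),
}
for bed, vitals in ward.items():
    if (s := sepsis_score(vitals)) >= 2:
        print(f"{bed}: possible sepsis (score {s}) -- alert the care team")
```

The loop is the point Saria’s team demonstrated: once the score runs continuously in the background, a clinician no longer has to suspect sepsis before the warning appears.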
AI is proving itself to be an ally when it comes to identifying stroke and sepsis, and it will also change the way that emergency rooms are organized, staffed, and administered. A group of researchers, engineers, and physicians at St. Michael’s Hospital in Toronto are combining their talents to develop a real-time “weather forecast for the emergency room,” as Simon Kingsley, an ER doctor involved in the project, puts it: a tool that will predict when an ER will become busy, how many patients are likely to arrive in the next few hours, and how many staff will be needed.
“In healthcare, we basically have four resources that we pay for: we pay for doctors, we pay for nurses, we pay for hospital beds, and we pay for investigations,” says Kingsley. The challenge is to avoid paying for resources that aren’t needed at any given time. “Knowing ahead of time if the emerg is going to be busy or not will help us decide how much staff we’ll need, or whether or not we’ll need an ultrasound tech for a few extra hours,” he says. “If you could somehow power down some of these costs, even for six hours or twelve hours, it would have massive implications for cost savings.”
By integrating information from local law enforcement, traffic patterns, seasonal changes, weather, historical records, and an array of other metrics, the program could help inform staffing requirements in the ER and improve the efficiency of our hospitals. “I don’t doubt that AI is going to be better than us at what we do,” Kingsley says. “They’re not going to be subject to fatigue, or confirmation bias, or any of the other fifteen things that clutter my brain when I’m trying to think. So ultimately, they will win, but it will still take a while.”
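Stripped to its essentials, such a forecaster is a regression model over contextual features. The sketch below, in Python with scikit-learn, fabricates a few plausible inputs (hour of day, day of week, temperature, a major-event flag) and fits a gradient-boosted model to invented hourly arrival counts; every name and number here is an assumption for illustration, not the St. Michael’s team’s code or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5000  # one synthetic record per historical hour

hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
temp_c = rng.normal(8, 12, n)        # outdoor temperature
major_event = rng.integers(0, 2, n)  # concert, game, storm warning...

# Fabricated ground truth: arrivals peak in the evening and rise on
# weekends and during major events.
arrivals = (
    6 + 3 * np.sin((hour - 10) * np.pi / 12)
    + 2 * (weekday >= 5) + 4 * major_event
    + rng.poisson(2, n)
)

X = np.column_stack([hour, weekday, temp_c, major_event])
model = GradientBoostingRegressor(random_state=0).fit(X, arrivals)

# Forecast the next few hours to guide staffing decisions.
upcoming = np.array([[18, 5, 4.0, 1],   # Saturday evening, major event
                     [2, 5, 4.0, 0]])   # Saturday, middle of the night
print("expected arrivals per hour:", model.predict(upcoming).round(1))
```

A real deployment would swap the fabricated history for years of actual ER logs and feed the hourly predictions into staffing and scheduling decisions.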
The state of hospitals twenty years from now is uncertain. A typical ER will probably still resemble the ones we’re used to—the familiar din of patient monitors, muffled conversations behind closed curtains, perhaps even the frustratingly long wait times—but somewhere in the depths of the hospital, there’s a good chance an intelligent machine will be combing through your entire medical history and finalizing a diagnosis and treatment plan before anyone even lays a hand on you. “Developments in AI are exciting,” Zand says, “but they’re not going to replace doctors. They’re going to be extra hands, extra help. Our goal is not to replace that human element, our goal is to give an extra hand where it’s needed.”