Beware of health-tech firms’ snake oil

By Leeza Osipenko

In an interview with The Wall Street Journal earlier this year, David Feinberg, the head of Google Health and a self-professed astrology buff, said: “If you believe me that all we are doing is organizing information to make it easier for your doctor, I’m going to get a little paternalistic here: I’m never going to let that get opted out.” In other words, patients will soon have no choice but to receive personalized clinical horoscopes based on their own medical histories and inferences drawn from a growing pool of patient records.
But even if we want such a world, we should take a hard look at what today’s health-tech proponents are really selling.
How true is the promise to cut medical costs?
In recent years, most of the US Big Tech companies, along with many start-ups, the big pharmaceutical companies and others, have entered the health-tech sector. With big data analytics, artificial intelligence (AI) and other novel methods, they promise to cut costs for struggling healthcare systems, revolutionize how doctors make medical decisions, and save us from ourselves. What could possibly go wrong? Quite a lot, it turns out.
In Weapons of Math Destruction, data scientist Cathy O'Neil lists many examples of how algorithms and data can fail us in unsuspected ways. When transparent data-feedback algorithms were applied to baseball, they worked better than expected; but when similar models are used in finance, insurance, law enforcement and education, they can be highly discriminatory and destructive.
Healthcare is no exception. Individuals’ medical data are susceptible to subjective clinical decision-making, medical errors and evolving practices, and the quality of larger data sets is often diminished by missing records, measurement errors, and a lack of structure and standardization.
Nonetheless, the big data revolution in healthcare is being sold as if these troubling limitations did not exist. Worse, many medical decision-makers are falling for the hype.
No infrastructure to gather evidence
One could argue that as long as new solutions offer some benefits, they are worth it. But we cannot really know whether data analytics and AI actually do improve on the status quo without large, well-designed empirical studies. Not only is such evidence lacking; there is no infrastructure or regulatory framework in place to generate it. Big-data applications are simply being introduced into healthcare settings as if they were harmless or unquestionably beneficial.
Consider Project Nightingale, a private data-sharing arrangement between Google Health and Ascension, a massive nonprofit health system in the United States. When The Wall Street Journal first reported on this secret relationship last November, it triggered a scandal over concerns about patient data and privacy. Worse, as Feinberg openly admitted to the same newspaper just two months later, “We didn’t know what we were doing.”
Given that the Big Tech companies have no experience in healthcare, such admissions should come as no surprise, despite attempts to reassure us otherwise. Moreover, at a time when individual privacy is becoming more of a luxury than a right, the algorithms that increasingly rule our lives are becoming inaccessible black boxes, shielded from public or regulatory scrutiny to protect corporate interests. And in the case of healthcare, algorithmic diagnostic and decision models sometimes return results that doctors themselves do not understand.
Unethical and poorly informed
Although many of those pouring into the health-tech arena are well-intentioned, the industry's current approach is fundamentally unethical and poorly informed. No one objects to improving healthcare with technology. But before rushing into partnerships with tech companies, healthcare executives and providers need to improve their understanding of the health-tech field.
For starters, it is critical to remember that big data inferences are gleaned through statistics and mathematics, which demand their own form of literacy. When an algorithm detects “causality” or some other association signal, that information can be valuable for conducting further hypothesis-driven investigations. But when it comes to actual decision-making, mathematically driven predictive models are only as reliable as the data being fed into them. And since their fundamental assumptions are based on what is already known, they offer a view of the past and the present, not the future. Such applications have far-reaching potential to improve healthcare and cut costs; but those gains are not guaranteed.
Another critical area is AI, which requires both its own architecture (the rules and basic logic that determine how the system operates) and access to massive amounts of potentially sensitive data.
– The Daily Mail-China Daily News exchange item