Clinically Speaking

Artificial Intelligence and Machine Learning in Healthcare

Because your summertime reading should make you a little uncomfortable.

By: Ben Locwin

Contributing Editor, Contract Pharma

We are now in the future, where artificial intelligence (AI) and machine learning (ML) are components of our overall healthcare strategy for humanity. At least, this was ‘the future’ as far as the past was concerned. Now we can see the limitations of AI and ML, and it’s our job to enhance both so that our current vision of ‘the future’ (according to now) is as effective as it can be at identifying and mitigating diseases and disorders.

Artificial intelligence and machine learning
These aren’t synonymous terms, so rule #1 is to stop using them interchangeably as though they were. Artificial intelligence is the broader term, within which machine learning resides. Machine learning is a subset in which a computer program is fed data that it incorporates into its algorithmic structure, making better and better predictions about future problems posed to it.
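To make that feed-data, improve-predictions loop concrete, here is a minimal sketch in Python (a toy example of mine using scikit-learn, not any particular healthcare system): the classifier’s accuracy on held-out data improves as it is fed more labeled examples.

```python
# A toy illustration of machine learning: a classifier's accuracy
# improves as it is fed more labeled training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 2,000 samples, 20 features, binary label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 1500):                      # growing training sets
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])        # "feed" n examples
    print(f"{n:>5} examples -> accuracy {model.score(X_test, y_test):.2f}")
```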

AI and machine learning are both best suited to high-data situations: problems that certainly could be solved by humans, but whose solution would require reviewing and synthesizing thousands of pages (or more) of data, which would be prohibitive for people to actually do. A good current example of the use of AI (really ML) in healthcare is Google’s system to better detect lung cancer. According to the American Cancer Society, lung cancer is the current leading cause of cancer deaths for both men and women. Out of an estimated 228,000 new cases that will be identified this year, there will be an estimated 143,000 deaths. To address this need, machine learning is being trialed to help the current diagnostic technology, with its ridiculously low hit rate, better serve patients who may have lung cancer.

Here’s the background: lung cancer screening currently involves low-dose CT scanning and has a 96% false positive rate. On a lung CT scan, about one quarter of images reveal shadowing features consistent with nodules in the lung. Of those flagged results, however, fewer than 4% of patients actually have lung cancer. This creates unnecessary anxiety, unneeded further testing (likely more invasive, and not risk-free), and a degradation in the patient’s global health status. When computation was put to the test, the machine learning used in the study correctly identified every actual positive case of cancer, and in one of the larger tests the Google machine learning protocol outperformed six radiologists reading the same results, identifying 5% more actual cancer cases. At the same time, the algorithm rejected about 11% of the false positives; a previous study published in the journal Thorax found that false positives were reduced by about 30%. This spares patients from undergoing further unnecessary medical procedures, some of which have resulted in death even though the patient actually had no cancer.
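To see how a 96% false positive rate follows from those two figures, here is the back-of-the-envelope arithmetic, using a hypothetical cohort of 1,000 screened patients (the cohort size is mine; the rates are the ones quoted above):

```python
# Back-of-the-envelope arithmetic for the screening figures quoted above.
screened  = 1_000              # hypothetical cohort size
flagged   = screened * 0.25    # ~1 in 4 scans shows nodule-like shadowing
true_pos  = flagged * 0.04     # <4% of flagged patients actually have cancer
false_pos = flagged - true_pos

print(f"Flagged: {flagged:.0f}  true positives: {true_pos:.0f}  "
      f"false positives: {false_pos:.0f}")
print(f"Positive predictive value: {true_pos / flagged:.0%}")
# ~240 of every 250 flagged patients do not have cancer: a 96% false
# positive rate, matching the figure cited above.
```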

If you take a look at Figure 1, you’ll see that machine learning is nested within artificial intelligence as a subset, and deep learning is itself a subtype of machine learning. If you’ve missed out on the happenings of artificial intelligence over the past thirty or so years, you may not have noticed that ‘deep learning’ was a thing. Deep learning refers to machine learning carried out by neural networks with many stacked layers of processing (hence ‘deep’); a short sketch follows Figure 1. The word also echoes in ‘Deep Blue,’ the IBM chess supercomputer pitted against world chess champion Garry Kasparov in a series of high-tension matches in 1996-1997: the old human-versus-computer showdown writ large. Spoiler alert: IBM’s supercomputer won the 1997 rematch. On this topic of machines taking over the world at the expense of people, some vocal pundits have taken their opinions of AI and ML to the media to make outrageous claims that AI is going to destroy humanity if we aren’t careful. The reality is far more prosaic, and it’s probabilistically very unlikely that AI will pose a direct threat to us. It’s unfortunate, but many of the loudest public voices on AI and machine learning are spectacularly uninformed.


Figure 1: Artificial Intelligence, Machine Learning, Deep Learning
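Here is the promised sketch of what ‘deep’ means in practice (a toy model of mine in Python/scikit-learn, not IBM’s or Google’s): a network is ‘deep’ simply because it stacks several layers, each one transforming the output of the layer before it.

```python
# A "deep" model is just a neural network with multiple stacked layers.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Three hidden layers, each feeding the next: this stacking is the
# "deep" in deep learning.
deep_net = MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                         max_iter=1000, random_state=0)
deep_net.fit(X, y)
print(f"Training accuracy: {deep_net.score(X, y):.2f}")
```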

Using AI and ML for pharma applications
Big data analytics, in silico drug design and molecular modeling, and trial risk prediction (a subset of predictive analytics) are all squarely within the sweet spot of machine learning for pharmaceutical development and deployment. I previously worked on a publication covering risk assessment and in silico development tools (referenced below), which offered cutting-edge approaches to some of these challenges from some of the most luminary thinkers in different aspects of the field. The future only gets brighter and brighter for these applications, and only one challenge that I can see stands out among all the others: algorithmic opacity.
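To give a flavor of the trial risk prediction piece, here is a hypothetical sketch (the feature names, data, and failure rate are invented for illustration; this is not a published pharma model): train a classifier on attributes of historical trials, then score a new trial’s failure risk.

```python
# Hypothetical sketch: estimating clinical-trial failure risk from
# historical trial attributes (features and data invented for illustration).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["phase", "n_sites", "enrollment_target", "novel_mechanism"]

# Synthetic "historical trials": 500 rows; label 1 = trial failed.
X = rng.random((500, len(features)))
y = (rng.random(500) < 0.3).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

new_trial = rng.random((1, len(features)))   # attributes of a planned trial
print(f"Estimated failure risk: {model.predict_proba(new_trial)[0, 1]:.0%}")
```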

Tending to the details of AI and ML (and DL)
We need to better understand what some of these algorithms are actually doing once they start assembling learning strategies on their own. This is what I mean by algorithmic opacity. For example, recent cosmological modeling done with the help of a type of machine learning produced a very accurate virtual version of the cosmos in an incredibly short amount of time. When the programmers and researchers tried to reverse-engineer the computational path that led to such accurate results (and so quickly), they found that they had no idea what the algorithmic process was. Maybe the principle of parsimony was involved in the calculations. Maybe not.

It can be deviously difficult to ferret out the decision-making process that led an in silico model to return one favorable alternative instead of another. Sometimes it is impossible. And this is a HUGE extant problem for us. If you haven’t spent a sleepless night over this yet, perhaps plan on not sleeping tonight as you think through the implications. For a highly regulated industry, not knowing how a decision was reached from the available data is very problematic.
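There are partial remedies. One standard probe (my suggestion of a common technique, not something from the studies above) is permutation importance: shuffle one input at a time and measure how much the black-box model’s performance drops, which at least reveals which inputs a decision leaned on.

```python
# Permutation importance: a partial probe of an opaque model.
# Shuffle each input feature and measure the drop in accuracy; large
# drops flag the inputs the black-box model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop {imp:.3f}")
```

This tells you which inputs mattered, though not the reasoning path that combined them; full transparency remains the open problem.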

Also, as these technologies are rolled out to greater and greater effect clinically, not knowing the reasons behind particular decisions is very troubling. It requires unwinding certain decisions post hoc and a posteriori, which philosophically borders on anti-science. Add to this that, for the European markets, the GDPR requires “meaningful information about the logic involved” when automated decision-making takes place (Articles 13, 14, and 15). If we can’t reduce the algorithmic opacity, our ability to help patients risks being hamstrung by political motivations.

References
• American Cancer Society. (2019). Lung cancer. https://www.cancer.org/cancer/lung-cancer.html
• Ardila, D. et al. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25, 954-961.
• Zurdo, J. et al. (2015). Developability of Biotherapeutics: Computational Approaches. CRC Press. https://www.crcpress.com/Developability-of-Biotherapeutics-Computational-Approaches/Kumar-Kumar-Singh/p/book/9781482246131


Ben Locwin, PhD, MBA, MS

Ben Locwin is a healthcare futurist and industry executive, having worked in pharmaceuticals, medical devices, and medical technologies to improve the balance of care available to patients. Says Dr. Locwin, “Every single patient that receives medical care is exposed to at least one drug therapy or device as part of the course of diagnosis and treatment. We have the direct opportunity and moral imperative to make these approaches better and safer for the future.”
