Clinically Speaking

Probing the Limits of Pharma’s Future

A retrospective on the past 20 years and a prospective on where the future will take us.


By: Ben Locwin

Contributing Editor, Contract Pharma

Where have we trod? There has been, as you know, tremendous change in the industry of pharmaceutical medicine over the past 20 years. This has come in the form of increased growth and refinement of biologics; 20 years ago, the segment representing monoclonal antibody treatments was incredibly nascent. To have imagined then that several of our industry’s Top 10-selling therapies would be biologics would have been considered predictive overreach.

Within small molecules, much more refined molecular screening approaches have taken root as our ability to resolve multidimensional aspects of complex chemistry has improved. This has led to much more precisely targeted treatments (approaching a precision-medicine paradigm), as well as deeper digging into the dark corners of under-served diseases and orphan designations. It has meant that hundreds of thousands of discrete patients have been helped in the past two decades by the expansion into rarer indications.

We have cell therapies and gene therapies, which were indeed predicted and researched over 20 years ago, but for which we lacked sufficient means to investigate and scale up into commercially viable treatment options. These two new avenues of treatment have dominated the news cycles over the past couple of years, but their development and clinical trials have in reality been ongoing for over a decade.

And so it goes: we live and work within an industry that asks so much of us to pioneer the future, but from which we don’t get public exposure for many years. This is not an admonishment of the current state of our industry, but an important point: those who work within pharmaceuticals are absolutely NOT doing it for immediate gratification. Once our newest frontiers of treatment reach the market, we often aren’t interested in fanfare, because we’re already several years into the next approaches, which will define the subsequent treatment epoch.

What we thought would be
If we don’t look with clear eyes at where we’ve stumbled, we’re not being honest with ourselves, nor are we in a position to improve our models and further ensure a better future for the industry and the treatment of complex diseases. To this end, we’ve had some misses, for example within personalized medicine. We’ve come to understand recently that a host of genes we had assumed for many years were essential for tumor survival actually aren’t. This should be considered a scientific win, for when we are able to collect contemporary data that overthrow previous convictions, and change our opinions accordingly, we are embodying the scientific method at its very heart. Our hypotheses are only as good as their most recently survived falsifications.

“For when we are able to collect contemporary data which overthrow previous convictions – and change our opinions accordingly – we are embodying the scientific method at its very heart.”

There are some who also look at where we stand with genetic analyses and think that we’ve missed fertile ground, or that opportunities haven’t been as newsworthy as they might have been. There is still much to do and much to learn in this field, and we continue to learn more and refine our thinking (more on this below). However, an issue that needs attention, but has received virtually none to date, is that of the attributes of twin studies and the over-extension of their findings. This is due in large part to the multidimensionality of the analyses needed and, as Nassim Nicholas Taleb conjectures, the dominant influence of convexity.

In 2003, by the way, we thought the respiratory illness SARS would become an annually recurring pandemic. And because a 20-year retrospective does indeed envelop January 1, 2000, remember that a not-insignificant proportion of people thought all electronics were going to shut down for Y2K.

But ideas that seem crazy stop seeming so crazy after a while. This is called the Overton Window; the concept itself is really just a specification of the psychology of the normalization of group ideas, similar to the negative aspects of groupthink.

Data are never perfect, but our approach to research can always improve
Waiting for better and better data to come from our experiments (data quality approaching infinity) has diminishing returns. But until we have very well-refined models, we can, as a result of our ignorance, commit more errors in deploying our scientific ideas in a practical sense. This can lead to unintended effects, such as researchers releasing millions of genetically modified, presumed-sterile mosquitoes into Jacobina, Brazil, to abate the spread of mosquito-vectored diseases like Zika and dengue.

The word “presumed” is the trouble in that last sentence. What has since been found is that, after a period of mosquito population decrease (the desired effect), the indigenous mosquitoes have rebounded and are indeed interbreeding with the genetically modified mosquitoes, the unexpected and totally undesired effect. This is strikingly similar to Dr. Ian Malcolm’s quote from Jurassic Park: “Life finds a way.”*

Yale University researcher Jeffrey Powell, who was part of recent reviews of the situation, said, “It is the unanticipated outcome that is concerning.” Another newsworthy event hiding behind the principle of ‘we couldn’t have predicted it.’ But could we have? If the research had continued to greater depth (prior to the introduction of the modified Aedes aegypti to the wild) to include other potential factors, clearly the interbreeding could have been predicted, experimentally sampled, and included in the risk-benefit analysis.


A human melanoma cell during division. Image by Paul J. Smith, Rachel Errington/Wellcome.

A similar lesson about the depth of our belief and conviction in long-held ideas can be seen in recent experimental results showing that genes we thought were essential to oncogenesis really aren’t. Here’s the background: interrogating which DNA is essential to oncogenesis and the maintenance of cancer cells provides a treasure trove of information for drug developers to transform into new drug targets. If you were to look meta-analytically at hundreds of studies that pointed to certain proteins that seemingly needed to be present for cancer cell survival, you might think you had exactly the set of targets for your next-generational armamentarium. But you would be incorrect.

This investigative work was done, in part, by Jason Sheltzer and his team on chemotherapeutic targets that appeared time and again in almost 200 research studies. After CRISPR genome editing was used to eliminate the genes encoding these assumed-to-be-critical proteins (such as the growth protein MELK), the cancer cells continued to persist. This is important information, because null results are just as important as positive results, and in some cases insidiously more so. They tell us about gaps in our knowledge and underlying hypotheses.

In total, CRISPR modification was used to re-examine six target proteins that had been identified with RNAi methods for oncology treatment, compared against ten drugs intended to hit those discrete molecular targets. It appears that how these cancer treatments actually function to destroy tumor cells isn’t what was presumed.

And what of the future?
I’d be remiss in talking of the future without mentioning artificial intelligence (AI), machine learning (ML), deep learning (DL), and quantum computing. All of these methods are in use at various scales. We will indeed see more and more utility from AI in pharma, and, more valuably, from harnessing the AI subsets of machine learning and deep learning for pharma clinical data. These neural networks and algorithms can already outperform skilled, trained humans at highly specified and specialized tasks. For example, I’ve written before (Contract Pharma, July/Aug 2019) on the breakthroughs in these computer-based technologies in outperforming radiologists. In fact, deep learning has just been used to better visualize mechanisms within cells, in this case mitochondrial dynamics.


Still frames of intracellular dynamics, with the right-side image enhanced by deep learning software. Image from Manor Lab, WABC/Salk Institute.

Deep learning approaches have also recently been reported to be ~30% more accurate than humans at analyzing dark matter signals in composite maps of the cosmos.


If you suspend your disbelief (or remain scale-agnostic), you might think these two previous images (cell interior and cosmic-scale galaxy-mapping) come from the same data sets.

Quantum computing is the next frontier whose boundary we are pushing.** I follow the updates daily in this space and have participated on several developmental panels. As of this writing, the state of the art is a 53-qubit quantum computer built by IBM and being tested for client use. I can recommend some reading for you on this topic (see below), but suffice it to say that our current-generation computing power, which has largely been following, and been constrained by, Moore’s Law, will undergo a radical scale-shift in the next decade as we become more facile at harnessing quantum computing for practical use. We will then be able to think beyond the technological restraints we have now, because we’ll have computing power available that we can’t, at the moment, even begin to ask worthwhile questions to harness. The technology itself will push back on us as a species to come up with ‘smarter’ and ‘better’ questions in a nearly limitless information feedback loop.
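To give a rough sense of that scale-shift, here is a back-of-the-envelope illustration (my own arithmetic, not a figure from IBM): fully describing the state of an n-qubit register on a classical machine requires tracking 2^n complex amplitudes, so for the 53-qubit system mentioned above that is

\[
2^{53} \approx 9.0 \times 10^{15} \ \text{amplitudes,}
\]

whereas 53 classical bits occupy exactly one of those 2^{53} configurations at any given moment.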


First-ever image of quantum entanglement.

We must not forget about the inertia present in people within organizations. The hardest part of implementing AI is not the technology; it’s the people. That is a paraphrase of a statement I made at an AI and machine learning conference: “Culture trumps AI: The inertia that exists within organizational culture is the most significant impediment to the appropriate infusion of AI/ML into our work functions.”

I think the other biggest factor is getting adopters to think differently than they have. If we’re encumbered by our old ways of thinking, of approaching problems, and of asking questions, we can’t fully utilize what AI, ML, and DL have to offer us, especially in a more data-rich future.

*This was actually penned by Michael Crichton, the author of Jurassic Park, as “life will find a way.”
**Until you opened the magazine to find this article, Erwin Schrödinger would have opined that perhaps it was in a state of both ‘written’ and ‘unwritten’ simultaneously. Thank you for collapsing its wave function.


Further Reading
  • Gribbin, J. (1984). In Search of Schrödinger’s Cat: Quantum Physics and Reality. Bantam.
  • Rieffel, E. & Polak, W. (2011). Quantum Computing: A Gentle Introduction. MIT Press. (Author’s note: available for free at: http://mmrc.amss.cas.cn/tlb/201702/W020170224608150244118.pdf).
  • Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-Powered Organization. Harvard Business Review. July/August 2019.


Ben Locwin

Ben Locwin, PhD, MBA, MS, MBB, is a Healthcare Futurist, medical policy advisor, and public speaker. Annually he is commissioned to do a circuit on the future of various industries. “Importantly,” Dr. Locwin says, “looking back at the veracity of prior predictions and refining mental models is the only way to objectively get better at it.”
