Kevin Merlo and Tim Moran, Dassault Systèmes Biovia
09.01.17
As biologic drug providers endeavor to create effective protein-based medicines, immunogenicity remains a complex challenge. Immunogenicity is an immune response against a therapeutic antigen that can cause the medication to lose its effectiveness over time and potentially result in serious illness. In some cases immunogenicity is desired, as in a vaccine. Unwanted immunogenicity is a hot topic today, partly because it still isn’t completely understood.
There are many reasons the human body may mount an immune response to a biotherapeutic. Genetics can be a contributing factor, as a drug may not produce the same response in everybody. Chemical modifications to the biologic, including antibody-drug conjugates, may be seen by the body as non-self antigens and trigger immunogenicity. Manufacturing can also contribute: if a small amount of aggregation (clumping or binding) occurs, or if a protein folds differently than expected, the body may identify the product as a foreign substance.
Because biotherapeutic development becomes increasingly costly as it moves downstream, it is important to anticipate immunogenicity as far upstream as possible in the drug discovery process. Organizations wisely strive to minimize costs by predicting and preventing any type of toxicity early (see Figure 1).
Researchers can use predictive analytics to identify immunogenicity in biotherapeutics during the discovery phase of drug development. Predictive sciences enable researchers to mine data, make predictions, and gather actionable insight to move development forward or discard failures. Predictive tools and virtual experiments can be combined with real experiments to provide scientists with more key performance indicators (KPIs) that drive informed decisions.
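To make this concrete, the sketch below shows one simplified form such a predictive tool could take: a classifier trained on historical sequences with known immunogenicity outcomes produces a risk score for a new candidate, which becomes one more KPI in the decision. The sequences, labels, and amino-acid-composition features are illustrative placeholders, not the authors’ method; production tools rely on far richer descriptors, such as predicted T-cell epitope content.

```python
# Minimal sketch (not a specific vendor tool): scoring a candidate biologic sequence
# for immunogenicity risk with a classifier trained on labeled historical data.
# All sequences, labels, and features below are illustrative placeholders.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(sequence: str) -> list[float]:
    """Represent a protein sequence by its amino-acid composition (a deliberately simple feature set)."""
    counts = Counter(sequence)
    return [counts.get(aa, 0) / len(sequence) for aa in AMINO_ACIDS]

# Hypothetical historical data: sequences with known outcomes (1 = immunogenic).
training_sequences = ["MKTAYIAKQR", "GAVLIPFWMG", "KKRKRDEDES", "QQNNSSTTGG"]
training_labels = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([composition_features(s) for s in training_sequences], training_labels)

# Score a new candidate early in discovery; the probability becomes one KPI among many.
candidate = "MKKAYLAKQG"
risk = model.predict_proba([composition_features(candidate)])[0][1]
print(f"Predicted immunogenicity risk for candidate: {risk:.2f}")
```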
“The best qualification of a prophet is to have a good memory,” English statesman George Savile said centuries ago. His insight is relevant to today’s predictive sciences, which depend on past data for reliability. The value of a predictive approach depends on the quality and variety of the data supporting it. As such, companies that want to include predictive analytics in their drug development process should encourage data standardization and pre-competitive data sharing. Information systems need to be easily accessible to a diverse range of users and support collaboration among specialists with unique expertise.
To go far, go together
Attrition rate is a problem in the pharmaceutical industry.1 A process that spawns thousands of molecules in the research phase typically concludes with only a few viable therapeutic products. This increases the cost of bringing new drugs to market, which leads to patients paying higher prices for the medications they need. When so much depends on scientists making the right decision at the right moment, it is essential to provide them with as much information as possible so they can be precise.
Predictive tools are based on established algorithms that have been validated over time. It is not difficult for statistical analysts to build predictive models. But the degree to which the answer from a model can be trusted depends on what data is used. More data results in more reliable predictive models. There is an abundance of biological data in a variety of different formats in enterprises and research organizations across the industry. But much of it is sequestered behind company firewalls. A key challenge today is to aggregate more of this data in a standardized format to improve predictability.
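As a hypothetical illustration of what a standardized format means in practice, the sketch below pools anti-drug-antibody (ADA) assay records from two organizations that report the same information under different field names and units. Every field name, unit, and record is invented for the example; a real industry standard would be far more detailed.

```python
# Illustrative sketch: normalizing assay records from two organizations that use
# different field names and units into one shared schema before model building.
STANDARD_FIELDS = ("molecule_id", "assay", "ada_positive_rate", "sample_size")

def normalize_org_a(record: dict) -> dict:
    # Hypothetical Org A reports anti-drug-antibody (ADA) incidence as a percentage.
    return {
        "molecule_id": record["mol"],
        "assay": record["assay_type"],
        "ada_positive_rate": record["ada_pct"] / 100.0,
        "sample_size": record["n"],
    }

def normalize_org_b(record: dict) -> dict:
    # Hypothetical Org B already reports a fraction but uses different keys.
    return {
        "molecule_id": record["compound_id"],
        "assay": record["method"],
        "ada_positive_rate": record["ada_fraction"],
        "sample_size": record["subjects"],
    }

pooled = [
    normalize_org_a({"mol": "mAb-017", "assay_type": "bridging ELISA", "ada_pct": 12.5, "n": 40}),
    normalize_org_b({"compound_id": "BX-204", "method": "ECL", "ada_fraction": 0.05, "subjects": 60}),
]

# A downstream predictive model only ever sees the standardized fields.
for row in pooled:
    assert set(row) == set(STANDARD_FIELDS)
print(pooled)
```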
Is it reasonable to expect companies to voluntarily share proprietary data to advance scientific knowledge for everyone? Secure third-party platforms could make it possible to collect and centralize biological data from participating organizations to create predictive models. Data would be protected, so companies would only have access to data they provide while benefitting from the collective knowledge of all contributors. If such a strategy can improve scientific accuracy for the entire industry and reduce the cost of medicine for patients, everyone wins.
Initiatives such as the Pistoia Alliance (www.pistoiaalliance.org) and Allotrope Foundation (www.allotrope.org/) are successful precedents for this type of cooperation among life science organizations. In these projects, experts from life science companies and other groups come together to share pre-competitive strategies for establishing a common data format and ontology for the industry (see Figure 2).
Let’s get personal
When researchers discover a drug candidate that has the potential to cure a large percentage of the patient population, they don’t discard it when studies reveal immunogenicity. The story is far from over. They look for ways to decrease immunogenicity without affecting the efficacy of the treatment. Active ingredients affect different segments of the human population in different ways, which raises further questions. Is the therapeutic defined as successful when ten, five, or zero percent of the population shows an immunogenic response? What level of immunogenicity is acceptable?
If personalized medicine is the path forward, then researchers and developers must be able to isolate specific populations. In the near future, scientists may use software that presents a world map depicting segments of the human population that are likely to have an immunogenic response to a medication. Such solutions are on the horizon, as are tools that will support even deeper levels of personalized analysis. Because individuals in specific populations will not all react in the same way to the same drug, there is a need for tools and tests that can predict individual immune responses.
In early stages of drug development, most immunogenicity predictions are based on in vitro testing. Then testing is performed on animals. Later, as a drug is given the go-ahead for human testing, analysis becomes more challenging and predictability less certain. It is difficult to extrapolate from animal tests precisely what the response will be in a human organism. How can toxicologists be sure that what they observed in animal tests is reflected in human tests? What segment of the human population should be considered as subjects for testing? What characteristics should the human test subjects have?
Pharmaceutical organizations need tools, information, and protocols that can help them answer these questions, but new software and algorithms are not enough. The industry also needs more data about the human patient population. This underscores the importance of collaboration among all kinds of experts within organizations and the potential advantages of pre-competitive alliances that span the industry. These cooperative strategies cultivate collective insight that expands scientific knowledge for everyone.
Perspective is everything
Science is about reproducing results. When a company develops a drug, every experiment and test must be recorded. Organizations must be able to prove to regulatory bodies such as the U.S. Food and Drug Administration (FDA) that development results are reproducible. So it is important that rules are clearly defined and enforced for virtual experiments, wet lab experiments, and all associated data. Different software solutions can provide a predictive model if the right data is used. The challenge is to maintain that data so the model and its results are reproducible in the future.
Data must be complete, consistent, correct, and contextualized by metadata. And it must be the original data as it was captured, not changed in any way. Procedures for data governance should be in place to ensure data is managed appropriately. This can include documenting how companies safeguard the quality and integrity of their data. It can also include managing where data resides, who has access to it, and what is done with it, along with the policies that govern secure and efficient data handling. Proper lifecycle management of virtual experiments, wet lab experiments, and all associated data is as important as the results of the experiments.
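One way to picture these governance controls is the minimal sketch below: each captured result is wrapped with context metadata and a checksum of the original bytes, so later verification can confirm the data is still the data as captured. The record structure and field names are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of one governance control described above: keep captured data
# unchanged and verifiable. The record structure and fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def capture(raw_result: dict, instrument: str, operator: str) -> dict:
    """Wrap raw experimental output with context metadata and a checksum of the original bytes."""
    raw_bytes = json.dumps(raw_result, sort_keys=True).encode()
    return {
        "raw": raw_result,
        "metadata": {
            "instrument": instrument,
            "operator": operator,
            "captured_at": datetime.now(timezone.utc).isoformat(),
        },
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
    }

def verify(record: dict) -> bool:
    """Recompute the checksum to confirm the stored raw data matches the data as captured."""
    raw_bytes = json.dumps(record["raw"], sort_keys=True).encode()
    return hashlib.sha256(raw_bytes).hexdigest() == record["sha256"]

record = capture({"assay": "ELISA", "od_450nm": 0.82}, instrument="Reader-3", operator="analyst_42")
assert verify(record)  # True while the raw payload is untouched
```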
Accurate data is necessary to build good models. A platform approach to information technology can ensure that all relevant data is represented while also helping to enforce data integrity. But beyond data management, scientists need to extract knowledge from data. A single software solution cannot do that. Scientists must have the ability to visualize and leverage data from many points of view—including physical, biological, statistical, and analytical.
Scientists can attain this perspective by giving virtual experiments and real experiments equal weight. Pharmaceutical testing relies heavily on wet lab experiments. Virtual experiments and associated results can illuminate additional criteria to help scientists make better decisions. Leaders at pharmaceutical organizations should establish a company culture that values both approaches, creating holistic workflows that integrate both types of experimentation. Most importantly, organizations should apply the same protocols to both wet lab and in silico experiments to foster good decisions across both types of experiments.
These best practices won’t pay off unless analytical tools are easily accessible to all scientific roles. Siloed knowledge and know-how can undermine usability and collaborative innovation. Many toxicologists and biologists don’t want to work with mathematics, differential equations, or statistical analysis. But predictive modeling depends on mathematics and statistics. To clear this industry hurdle, software developers must provide analytical tools that are easy to use for people who are not math experts.
A neural network should not be presented in the same way to a toxicologist and a statistician in the same company. Ideally, a technology solution should provide an interface that offers appropriate usability depending on the specific expertise of each scientist while always accessing the same back-end software and data. Presentation should vary based on functional roles; analytical results should not.
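A minimal sketch of that principle follows: a single shared back-end prediction is rendered one way for a toxicologist and another way for a statistician. The function names, thresholds, and fields are invented for illustration.

```python
# Illustrative sketch: one back-end prediction, two role-specific presentations.
# The thresholds, wording, and fields are invented for the example.
def predict_immunogenicity(candidate_id: str) -> dict:
    # Stand-in for the shared back-end model; every role calls this same function.
    return {"candidate": candidate_id, "risk_probability": 0.27, "model": "rf_v3", "auc_cv": 0.81}

def toxicologist_view(result: dict) -> str:
    # Plain-language summary with a decision-oriented flag.
    flag = "review recommended" if result["risk_probability"] >= 0.25 else "low concern"
    return f"{result['candidate']}: immunogenicity risk {result['risk_probability']:.0%} ({flag})"

def statistician_view(result: dict) -> str:
    # Model-centric detail for assessing the prediction itself.
    return (f"{result['candidate']}: p={result['risk_probability']:.3f}, "
            f"model={result['model']}, cross-validated AUC={result['auc_cv']:.2f}")

shared = predict_immunogenicity("mAb-017")
print(toxicologist_view(shared))
print(statistician_view(shared))
```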
Despite consensus about the value and importance of collaboration, isolated silos of expert proficiency remain an industry challenge. Some mathematicians and statisticians don’t collaborate optimally with biologists. Some biologists don’t collaborate effectively with toxicologists. However, these experts can all contribute to improving the predictability of immunogenicity in biotherapeutics. Technology providers need to support them all with software solutions and platforms that help them easily share information along the drug development continuum.
Standards align virtual and real experiments
Data standards facilitate secure collaboration and provide a common foundation allowing scientists to mine data for patterns and insights that can improve predictability and accelerate progress. It is important for companies to enforce data integrity across all the data they collect. Visibility into all enterprise processes and knowledge is an essential element of a holistic, standards-based approach to virtual and real experiments. Today, standard operating procedures clearly explain how wet lab experiments should be performed. But similar guidelines for virtual experiments are scarce. This limits their value, but a resolution to the problem is within reach.
Consider the rising success of an adjacent technology. Over the past few years, the consumer market for virtual reality (VR) headsets has experienced enormous growth made possible by the adoption of common standards. Thanks to guiding parameters that ensure the interoperability of products and content, VR is no longer in the realm of science fiction. It is emerging as a powerful and compelling interface for entertainment, information, simulations, and training. Similarly, virtual environments will deeply impact future scientific and medical research. But the industry needs specific standards that address predictive modeling and virtual experimentation if these strategies are to reach their full potential.
Developing such standards may be too great a challenge for any single pharmaceutical company or technology provider. The answer may need to come from regulatory entities such as the FDA. These kinds of governing groups have enough influence to define standards around virtual experiments and predictive modeling. The FDA has already done this for wet lab experiments.2 Guidance may come in the form of advice rather than regulations. But more and more, advice from these official entities impacts the practices of drug manufacturers. The result has been more efficient pharmaceutical product development from research to commercialization—and medicines that offer greater efficacy and safety.
As experts strive to develop successful therapeutics for increasingly personalized patient populations, effective collaboration and pre-competitive cooperation can help them predict immunogenicity with greater accuracy. Industry professionals need solutions that allow reproducibility of experimental results, enforce data integrity, and offer work environments that are easily accessible to people with a wide range of expertise. Equipped with standards and protocols that consider virtual and real experiments to be equal partners in the development of biotherapeutics, scientists can make better-informed decisions, identify potential immune responses earlier, and accelerate bench breakthroughs to bedside.
References
1. Waring, M.J., Arrowsmith, J., Leach, A.R., Leeson, P.D., Mandrell, S., Owen, R.M., and Wallace, O. (2015). An analysis of the attrition of drug candidates from four major pharmaceutical companies. Nature Reviews Drug Discovery, 14(7), 475-486.
2. U.S. Food and Drug Administration, drug guidance documents: https://www.fda.gov/drugs/guidancecomplianceregulatoryinformation/guidances/ucm065014.htm
Kevin Merlo is biosafety applications manager at Dassault Systèmes Biovia. His predictive drug safety team develops applications that use predictive modeling to improve the process of anticipating the toxicity of new compounds. His previous positions include software developer at SOBIOS (acquired by Dassault Systèmes), where he contributed to the development of software for biologists based on machine learning algorithms.
Tim Moran is director of life science research product management at Dassault Systèmes Biovia. His early work in the industry focused on immunomodulation and imaging studies of effects on T-cell lymphocyte homing. He has held several managerial positions in image informatics as well as roles in life science research, next-generation sequencing, sequence analysis, biotherapeutics, and registration.