Robust Assay Designs

By Tim Wright, Editor, Contract Pharma | October 11, 2016

Easing the transition from preclinical to clinical research

Advancing a compound from animal studies to clinical trials brings sudden changes in a drug candidate study’s focus, scale and complexity, producing new and sometimes unwelcome expenses. But decisions made during the preclinical stage can insure your investment against future surprise costs. Contract Pharma asked Franklin Spriggs and Ashley Brant of AIT Bioscience, a CRO equally adept at small and large molecule analysis, how sponsors and CROs can prepare for phase I and beyond while developing preclinical protocols. Spriggs is the company’s ligand binding assay group leader, and Brant is its small molecule program manager.

Contract Pharma: What are the main challenges when transitioning small molecule studies from the preclinical to the clinical stage? 
Ashley Brant:
One of the biggest differences in this transition is that preclinical studies use much higher dosing levels than clinical studies. Animal studies use high doses to induce toxic effects, but in humans you want therapeutic doses that are significantly lower. The dosing dynamic range sometimes changes up to 100-fold. As a result, your focus swings from assay range to assay sensitivity.

Another challenge is that many animals used in toxicology studies are essentially genetically homogeneous, so the minor matrix differences between animals have negligible impact on the assay. Human plasma matrix, however, can vary greatly between clinical subjects. To account for this, we recommend screening additional lots of blank matrix to increase the assay’s specificity. Plasma samples from diseased subjects add further complexity.
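The blank-matrix screen described here can be sketched as a simple acceptance check. The 20%-of-LLOQ interference threshold below reflects common bioanalytical practice rather than anything stated in the interview, and all lot names and peak areas are invented for illustration.

```python
# Illustrative selectivity screen across blank human plasma lots.
# A lot passes if its interference response in blank matrix stays below
# 20% of the analyte response at the LLOQ (a common bioanalytical
# acceptance criterion; threshold and data here are hypothetical).

LLOQ_RESPONSE = 1200.0               # analyte peak area at the LLOQ
THRESHOLD = 0.20 * LLOQ_RESPONSE     # maximum tolerated blank response

blank_lots = {
    "lot_A": 95.0,    # peak area measured in drug-free matrix
    "lot_B": 310.0,
    "lot_C": 140.0,
}

def screen_lots(lots, threshold):
    """Split lot names into (passing, failing) by blank interference."""
    passing = [name for name, area in lots.items() if area < threshold]
    failing = [name for name, area in lots.items() if area >= threshold]
    return passing, failing

passing, failing = screen_lots(blank_lots, THRESHOLD)
print("pass:", passing)   # lots acceptable for use in the assay
print("fail:", failing)   # lots showing excessive matrix interference
```

Screening more lots up front widens the evidence that the assay tolerates subject-to-subject matrix variability before clinical samples arrive.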

CP: What strategies are available to sponsors to protect their small molecule studies from these problems?
Protecting small molecule studies really comes down to designing a robust assay from the outset, one that anticipates these changes. While not required by FDA bioanalytical guidelines, we highly recommend sponsors procure a stable isotope-labeled internal standard. This standard behaves exactly like the therapeutic except that it appears a few mass units heavier in mass spectrometry data. If something unexpected happens in the run, such as loss of sample, it’s a valuable reference that helps us remedy the problem.
Custom synthesis of an isotope-labeled standard can cost upwards of $15,000, so sponsors may be reluctant to invest in one, especially in the preclinical stage. However, it makes for a much more robust assay and can significantly reduce assay development lead times and costs.

CP: What are the main challenges when transitioning large molecule assays from the preclinical to the clinical stage?
Franklin Spriggs:
The need for more sensitive clinical assays due to lower doses must be addressed for large molecule studies as well. However, ligand binding assays (LBAs) are much more affected by the heterogeneity of disease-state matrices than small molecule assays are. In preclinical development, LBAs are not validated in diseased matrix, since the animals are not a diseased population. Starting in phase II, assays may require validation with both normal healthy blank matrix and diseased blank matrix, since the two can be very different.

Another concern is target solubility. If the therapeutic target is soluble, meaning it circulates freely in plasma, it can cause an artificial underestimation of drug concentration levels. This needs to be discussed ahead of validation. It is not an issue in GLP preclinical toxicology because the animals are not in a ‘diseased state’.

CP: How do large molecule programs anticipate setbacks introduced in phases II and III? 
We anticipate setbacks by understanding that the success of an LBA is highly influenced by critical reagents, from the capture and detection compounds to the serum matrix and even the assay plates. We particularly recommend investing in capture and detection compounds, such as antibodies, customized for the therapeutic. While this is not mandated by the FDA, the agency does highly recommend their use.

Developing custom critical reagents can be a hefty investment, but clinical study samples may be analyzed in batches over the course of several years, and different analysts may be running the samples. Custom critical reagents can be made in bulk, fully characterized as a single manufacturing lot, and stored for long periods to minimize the lot-to-lot variability often seen with commercial antibodies, which is a leading cause of run failures. Even when this isn’t a problem, purchasing critical reagents in small batches requires validating those reagents every time you run the assay, adding time and expense to every run.

CP: How can CROs plan for clinical data turnaround needs? 
When studies move into the clinic, the volume of samples escalates very quickly because there are often more sites, more subjects and more time points. That’s why it is important to have a robust assay from the outset. It is also very helpful to have been involved in the preclinical bioanalysis: exposure to the compound gives us a better understanding of the molecule, so we can predict and understand the challenges we will face when adjusting the assay for clinical studies.

FS: For phase I first-in-human studies, sponsors usually need quality controlled data in approximately 5 business days from sample receipt at the bioanalytical lab. To meet these needs, it is critical for us to collaborate with the toxicology and clinical facilities regarding dosing dates, planned shipment dates, data format requirements, logistics of labels and plasma transfer tubes, and beyond. One unique approach that AITB uses to facilitate higher throughput of data review is an ELN-based Quality Management System. The system is extremely valuable because it helps prevent errors before they occur in the lab and gives the sponsor remote access to review data, speeding delivery. Other labs use ELNs, but primarily to document what they did, not for real-time quality control checks of data inputs that catch errors before they happen.