On July 24, 2012, the Oncologic Drugs Advisory Committee (ODAC) met to discuss the merits of a novel approach to interpreting radiological images from clinical trials that target solid tumors, and for which progression-free survival (PFS) is a key endpoint. Until now, the FDA has typically required that the scans used to determine the date of progression be reviewed centrally. The potential change would permit sponsor companies to rely instead on investigator assessments, with only a sampling of images being subjected to a central review as a test of reliability.
The intent of the proposal was to speed the trial process and to reduce trial costs by cutting spending on image reviews. Management of imaging — including centralized, independent review — typically accounts for $1-3 million of the cost of a Phase III trial.
We will explore the implications of giving investigator sites responsibility for determining the progression date. We will consider the scientific and logistical issues, estimate the magnitude of the likely benefits, and discuss some of the challenges that must be resolved in order to make this process work and reduce the risk for trial sponsors. We will also present some of the unanswered questions about the statistical methods that would be used to create the samples and compare results.
At this writing, the FDA is evaluating the proposed change, which could be included in the agency’s forthcoming Guidance on Imaging Endpoints in Clinical Trials. Given this possibility, we also offer suggestions on how companies might begin to experiment with a decentralized approach.
In the 1990s there were several prominent instances of serious problems with trial data quality resulting from local interpretations of imaging studies, which led to a developing consensus that central reads were key to successful trial conduct. The FDA’s first reference to the need for independent reads was in the agency’s industry guidance published in 2007.1 The document stated, “At a minimum, the assessments should be subjected to a blinded independent adjudication team, generally consisting of radiologists and clinicians.” In August 2011, the FDA released a draft guidance document on its standards for clinical trial imaging endpoints, which allowed for exceptions to this requirement:
“. . . a site-based image interpretation may be reasonable in a randomized, double-blinded clinical trial of an investigational therapeutic drug where the imaging technology is widely available, the image is easily assessed by a clinical radiologist, and the investigational drug has shown little or no evidence of unblinding effects.2 In this situation, the use of randomization and blinding controls bias in image interpretation.”3
This draft, which pre-dates the July discussion, indicates that local reads might be sufficient when certain conditions are met. One condition, however, is that the investigational drug not have a side effect profile or dosing schedule that effectively unmasks the patient’s treatment arm. Because most investigational drugs do have such potentially unblinding properties, local reads would rarely be acceptable under this standard.
Since the publication of the FDA’s draft recommendations, the need for centralized review has been questioned. At the July ODAC meeting, statisticians from the National Cancer Institute (NCI) and from the pharma industry presented the results of analyses conducted to evaluate the degree of agreement between local and central interpretations in 27 solid tumor trials. The assessments of progression date disagreed about one-third of the time on individual cases, approximately equal to the rate of disagreement between two independent central readers. However, the aggregate results (the hazard ratios showing treatment effects) were highly concordant. The statisticians, Lori Dodd and Ohad Amit, also presented two different statistical methods for comparing the aggregate results from a sample of central reads to the results from local reads, arguing that such methods could detect bias in local reads. Based on these presentations, the ODAC has recommended to the FDA that sponsors be able to “rely on investigator assessments, augmented by audits [central reads of a test sample] designed to detect bias.”4
In this approach, local read results would be collected, and all scans would be collected centrally. Central review would be performed on a sample of the cases, and the results of the local reads would be compared to the results from this sample, not on an individual basis but on the basis of aggregated statistical properties. If the required threshold of agreement were met, no further interpretation would be needed, and the results from the local reads would be considered confirmed. If there were significant disagreement, the process would presumably revert to the prior standard, and a complete central review would be required. The details of the sampling process were not determined, and ODAC statisticians expressed some concern about the complexity when many sites are involved.
The sponsor, or a central imaging management organization, would continue to audit investigator sites, as is typically done with central reviews. If it chose to do so, the FDA could also perform quality assurance audits of investigator sites, just as it now does with firms that offer centralized image interpretation. During such audits, reviewers are asked to explain their interpretation process and to defend their results for selected cases.
Relying on local sites to determine PFS poses some inherent difficulties in avoiding bias and excess variability. It is difficult to ensure that a local reader remains blinded to the patient’s study arm. In the clinical care setting, oncologists and radiologists often discuss patients and review scans together. Much about a patient’s condition can be revealed in such an exchange, possibly jeopardizing the reader’s objectivity. As mentioned above, an oncology drug’s side effect profile often makes it clear to a clinician whether or not a patient is receiving the investigational drug. A central review is at least procedurally blinded to these confounders.
Some degree of variability will inevitably arise from the diversity of equipment being used at each site to read images and from the varying expertise of the readers. We recently performed a survey of sites to determine who typically reads scans locally, and how they are trained. The results from approximately 1,200 sites show that the reader is a radiologist dedicated to the study (and thus familiar with its specific response criteria) only 41% of the time. The rest of the time, the read is done by whatever radiologist is on call, a non-radiologist investigator, another physician, or even other study staff (such as research nurses and coordinators). More than half of sites (55%) reported that readers were trained in the assessment criteria only through independent self-study or not at all.
Any radiological interpretation has natural random variability. The rate of disagreement on the date of progression between any two readers (a local and a central reader, or two independent central readers) is approximately 30-50%. At a central facility, image interpretation to measure efficacy endpoints in late-phase trials is typically done using two primary readers to assess progression, with a third reader adjudicating disagreements. This reduces the likelihood that the date of progression will be determined incorrectly by at least a factor of two, significantly decreasing the variability of the measurement. Such a “2+1” read design may be difficult to implement at study sites.
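The adjudication logic of a “2+1” design can be sketched in a few lines. The function below is an illustrative simplification only; in practice adjudication covers full response assessments rather than a single date, and the exact rules are specified in the trial’s imaging charter.

```python
from datetime import date

def two_plus_one(reader1, reader2, adjudicator=None):
    """Simplified '2+1' read: two primary readers each assign a progression
    date; if they agree, that date stands; otherwise a third reader chooses
    between the two primary assessments."""
    if reader1 == reader2:
        return reader1                      # primary readers agree; no adjudication
    if adjudicator not in (reader1, reader2):
        raise ValueError("adjudicator must endorse one of the primary reads")
    return adjudicator

# Primary readers disagree; the adjudicator sides with the second read.
print(two_plus_one(date(2012, 3, 1), date(2012, 4, 15), date(2012, 4, 15)))
# 2012-04-15
```

Because a single reader’s error is only accepted when it survives comparison with a second read, the design filters out much of the random disagreement described above.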
“The statistical methodologies used in the study presented to the ODAC were unassailable and not in question,” said David Raunig, Ph.D., vice president of Informatics at ICON Medical Imaging. “The idea of reverting to decentralized reviews is not a bad one in theory, but it would pose a number of logistical challenges to preserving the integrity of the data. They can be surmounted, but at what cost?”
The primary argument for this new approach is the anticipated savings in the cost associated with central reads. While every opportunity for savings warrants consideration in a competitive environment, it is important to develop an objective and realistic view of the potential savings.
The total cost of a Phase III trial can easily reach $100 million, only a modest fraction (one to three percent) of which is related to imaging. Performing central reads on only a sample of images would indeed save sponsor companies a portion of their trial direct imaging costs, but by no means would it eliminate them all.
Certain activities are still required whenever images are collected and any portion of them is read centrally. The proposed process would keep prospective image collection in place, since there is a possibility that a complete central review would be needed. Experience shows that if scans are not collected prospectively during a trial, retrospective collection typically leaves about 30% of the image data unavailable. The following activities would still have to be carried out by the central facility:
- Developing auditable documents to describe the process of data collection and independent review (the imaging charter and related documents),
- Training sites to obtain protocol-compliant images and to submit them,
- Actually collecting the scans and putting them through quality control,
- Programming an electronic data-capture system,
- Training readers to perform central reads, and
- Managing the project.
To attempt to quantify the potential savings, our company examined nine recently conducted Phase III solid tumor trials, and calculated the cost savings that would have been realized if central review had been conducted on a sample consisting of 30% of the total images. All other activities were assumed to be unchanged. The results are summarized in Table 1.
In the analysis above, the average of 22% savings on imaging costs (compared to a full central review) is based on the assumption that no complete re-reads are required. If, as in Dr. Dodd’s simulations, there is a 16% chance of the trial going on to a complete central read based on the sample results, the average savings per trial decreases to 18%. If a $100 million trial had $3 million in imaging-related costs, the expected cost savings would thus be $540k, or 0.5% of the entire cost of the trial.
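The arithmetic behind these figures is straightforward; the short check below uses only the assumptions stated above (a $100 million trial, $3 million in imaging costs, 22% savings when no re-read occurs, and a 16% chance of a full re-read that is assumed to erase the savings).

```python
# Back-of-envelope check of the savings figures above; every input is an
# assumption stated in the text, not independent data.
trial_cost = 100_000_000     # total cost of the Phase III trial, in dollars
imaging_cost = 3_000_000     # imaging-related portion
savings_no_reread = 0.22     # average savings on imaging if no re-read is needed
p_full_reread = 0.16         # chance the audit triggers a complete central re-read

# If a triggered re-read wipes out the savings, the expected savings rate is:
expected_rate = savings_no_reread * (1 - p_full_reread)
print(round(expected_rate, 3))        # 0.185, i.e. roughly the 18% cited above

savings = imaging_cost * 18 // 100    # apply the rounded 18% figure
print(savings)                        # 540000
print(f"{savings / trial_cost:.2%}")  # 0.54%, about 0.5% of the whole trial
```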
The above estimates assume that all other trial costs remain the same. However, there would be other costs introduced that would offset the savings from having to read only a sample of images centrally. These costs, which have yet to be quantified, would stem from factors such as:
- Relying on hundreds of local readers will increase the variability in the endpoint measurements, which may require larger trials with more subjects to compensate.
- Sponsors will need to develop contracts with sites’ imaging departments and reimburse radiologists for their work in completing study assessments. Currently, contracts are usually made with investigators who in turn take care of reimbursing radiologists as needed.
- Investigator sites may not be prepared to have their procedures and techniques scrutinized in an FDA audit. The risk to the trial would need to be mitigated with training, procedures, and documentation — including storing annotated images to be able to document decisions on individual cases. All of this would require investment from sponsors.
- The services of Ph.D.-level statisticians will be required to build the sample methodology into the trial design and to develop the technique for comparing the central read sample to the local reads. Interim audits conducted prior to the end of the study, if not done by the sponsor, will need to be contracted.
- Trial delays could occur in instances where the central reading of samples uncovers bias, requiring that all scans already read locally be re-read centrally. Because sponsor timelines leave little room for delay, the need for such a “do-over” could have significant financial ramifications.
This new approach requires further clarification and development. Although the ODAC agreed on the change in principle, many of the details remain to be determined. Some of the questions that need to be answered are:
- How best to select samples for the central review (a completely random sample might over-represent large sites and miss smaller sites altogether)?
- What portion of the total number of images would constitute a sufficient sample for central review?
- What statistical methods would be used to compare the results from the central interpretation to the results from the on-site interpretation?
- What would be the acceptable level of disagreement between central and on-site interpretations?
- What would be the procedure, should the level of disagreement exceed the established acceptable limit?
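One way to address the first question above is stratification by site, so that small sites contribute to the audit sample rather than being missed by a pooled random draw. The sketch below is purely illustrative; the actual sampling design would need to be developed by statisticians and agreed with the FDA, and the 30% fraction and minimum-one-case rule are assumptions for the example.

```python
import random

def stratified_sample(cases_by_site, fraction, seed=0):
    """Hypothetical sketch of a site-stratified sampler: draw the same
    fraction of cases from every site, so that a small site is never
    missed the way it could be under a single pooled random draw."""
    rng = random.Random(seed)          # fixed seed for a reproducible audit sample
    sample = []
    for site, cases in sorted(cases_by_site.items()):
        k = max(1, round(len(cases) * fraction))   # at least one case per site
        sample.extend(rng.sample(cases, k))
    return sample

# Toy example: one large site and two small ones, sampled at 30%.
sites = {
    "site_A": [f"A{i}" for i in range(20)],   # 20 cases -> 6 sampled
    "site_B": ["B0", "B1", "B2"],             # 3 cases  -> 1 sampled
    "site_C": ["C0", "C1"],                   # 2 cases  -> 1 sampled
}
picked = stratified_sample(sites, fraction=0.30)
print(len(picked))   # 8
```

Even a simple scheme like this raises the complexity the ODAC statisticians noted: with many sites, per-site minimums inflate the sample, and weighting across sites of very different sizes becomes a design question in itself.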
Moving Forward with Appropriate Caution
Based on the discussions at the ODAC meeting, it is expected that the FDA’s final guidance will adopt the recommendations in some form. As sponsor companies move to implement the new methods in their future trials, here are some precautions they can take to minimize risk to their studies:
- Make sure that a trial that is being considered for this new method is appropriate for it. The ODAC specified that if a trial is small, or the expected effect size is moderate, or the tumor type being considered is particularly difficult to assess, the new method is probably not appropriate.
- When designing the imaging aspects of the study protocol, seek advice from imaging experts as early in the process as possible.
- Design a detailed methodology for sampling the images to be read centrally and for comparing the results to the site-based results, with appropriate input from statistical experts.
- Confer with the FDA on the proposed design, preferably at the end-of-Phase II meeting.
- To minimize risk, ensure that readers at sites are well trained and committed to applying the tumor response criteria that have been set for the trial. Be aware that formal response criteria are almost never used in daily radiological practice, so only radiologists at academic oncology centers are likely to be familiar with them.
- Implement a system for capturing site assessments in a standardized way and standardize the means by which the local readers interpret what is captured.
- If possible, have local readers store annotated images to allow for auditability.
- Where possible, contract directly with the imaging departments at investigator sites to perform the reads. For sites that offer integrated oncology and radiology services, this would entail designating a local radiologist to read trial scans and having the radiologist sign a 1572 form. This might prove to be more difficult in cases where patients get their imaging at a separate facility, and in cases where hospitals do not allow sponsors to deal directly with radiology departments. Sponsors will have to consider how such conditions will affect their site selection criteria.
- Guidance for Industry: Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics, U.S. Department of Health and Human Services, May 2007.
- See the guidance for industry Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics.
- Guidance for Industry Standards for Clinical Trial Imaging Endpoints, Draft Guidance, U.S. Department of Health and Human Services, August 2011.
- Goldberg, Paul, "FDA to Move Away from Central Radiology to Investigator Review in PFS Endpoint Trials," The Cancer Letter, Vol. 38 No 30, July 27, 2012.
Gregory V. Goldmacher, M.D., Ph.D. is a radiologist, currently the director of Medical & Scientific Affairs and head of Oncology Imaging at ICON Medical Imaging. Dr. Goldmacher provides medical, scientific, and regulatory leadership for trials in every phase of clinical development, and trains physicians and study staff worldwide in clinical trial imaging methods. He can be reached at email@example.com.