All the key components of RBM implementation, including pre-study risk planning, adaptive site monitoring with reduced Source Data Verification (SDV), and centralized monitoring, need not be complex to be effective. On the contrary, complexity impedes RBM effectiveness. A key principle of RBM methodology should also be the cornerstone guiding RBM implementation in any organization: “Focus on what matters.” Yet the ICH E6 (R2) addendum is not just about RBM; it also encompasses risk-based quality management (RBQM) and oversight. So how can organizations adapt to this paradigm shift?
Understanding the importance of RBM/RBQM
Risk-Based Monitoring, more aptly referred to as Risk-Based Quality Management (RBQM), is now incorporated as a GCP expectation in the most recent ICH E6 (R2) update, driving the industry to implement Risk-Based Study Execution (RBx), a more holistic risk-based approach to quality management. The motivation for this significant paradigm shift in quality management is explained directly in the introduction of the ICH E6 guideline, which points to a couple of key factors that have emerged over the past 15 to 20 years. First is the rapidly increasing complexity and cost of clinical research. Second is the transition away from largely paper-based research to the modern approach of mostly electronic/digital technologies such as electronic data capture (EDC), electronic patient-reported outcomes (ePRO), interactive response technologies (IRT) and others. This transition from paper has opened a tremendous opportunity to plan and manage clinical research more effectively and efficiently, a very timely development to address the growing crisis in research complexity, timelines and cost.
The increasing complexity and cost of research is clearly evidenced in research published by the Tufts Center for the Study of Drug Development (CSDD), showing a dramatic increase in the size and complexity of studies from 2005 to 2015. This includes a 68% increase in the median number of procedures prescribed per patient, an 88% increase in the overall volume of patient data collected, and a doubling of the number of countries participating in each study.1 It is inevitable that the volume of data collected will only continue to increase, perhaps exponentially, in the coming years with the emergence of wearable technologies for continuous patient monitoring.
The role of FDA
This increase in complexity poses ever-greater challenges to achieving quality outcomes, as both patients and sites are burdened with managing a myriad of requirements placed in front of them. FDA and its stakeholders have an interest in assuring the integrity of clinical trial data and the protection of participants during the conduct of clinical research. Misconduct in clinical research, including, but not limited to the falsification or omission of data in reporting research results, places all subjects in that trial at possible safety risk.
Fraud jeopardizes the reliability of data submitted to FDA and undermines the Agency’s mission to protect and promote public health. FDA and other regulators rely on whistle-blowers and site inspections to detect signs of possible misconduct. The FDA is authorized to perform inspections and uses Form FDA 483 (Inspectional Observations) to document and communicate concerns discovered during them; in the form’s own words, it serves to “list observations made by the FDA representative(s) during the inspection of your facility. They are inspectional observations, and do not represent a final Agency determination regarding your compliance.”
Due to the volume of product submissions, FDA can only inspect a small proportion of clinical trial sites. The determination of which sites to inspect can involve recommendations by clinical and statistical reviewers, CDER’s risk-based site selection tool and FDA inspectors’ judgment and experiences.
An article published in JAMA several years ago presented an analysis of New Molecular Entity (NME) submissions to the FDA from 2000 to 2012, which found that 50% of those submissions failed first-cycle review.2 While slightly less than half of the failures were eventually approved for marketing, the average delay incurred was 14 months. Most distressing of all is the possibility that up to 32% of first-cycle failures, or up to 16% of all submissions, failed due to issues with data quality. This is a startling finding, and one that we should find unacceptable as an industry.
An article sponsored by TransCelerate and published in the DIA journal Therapeutic Innovation & Regulatory Science in 2014 presented a rigorous analysis of the impact that 100% source data verification (SDV) has on overall data quality.3 It found that SDV impacts only 1% of eCRF data on average, while driving 15% of the total cost of clinical research. It is clear, then, that change is needed in the way we plan and manage our clinical trials and ensure quality outcomes. Note that prior to the ICH E6 update, both the FDA and EMA had already strongly endorsed the move towards RBM and RBQM, in guidance documents finalized in 2012.
It is important to understand that Quality by Design (QBD) and RBM, two important concepts promoted in the ICH update, are not separate ideas but two phases of the same RBQM paradigm. Both are focused on improving the operational success of clinical research, and both apply the core process of risk assessment and risk mitigation. QBD applies this process starting with the design of the research; concepts such as patient-centricity and site-centricity, which aim to increase the likelihood of successful research by carefully considering the burden placed on patients and sites as a first step, align completely with QBD. QBD becomes RBM once a study protocol is finalized, at which point risk assessment is repeated with the goal of mitigating any remaining operational risks. Mitigation plans are then applied during study execution, which includes ongoing risk monitoring and a more targeted approach to site monitoring.
Prior to R2, the very first section of ICH E6 was focused on QA/QC. It remains applicable in R2 and concentrates on well-documented and controlled procedures, paperwork and checklists:
- “Written SOPs”
- “Reported in Compliance with…”
- “Securing agreement”
- “QC applied at each stage”
- “Agreements […] in writing […] in a separate agreement”
With R2, the very first section becomes Quality Management, focusing on study execution and efficiency:
- “focus on trial activities essential to […]”
- “The methods […] should be proportionate to the risks”
- “documents should be clear, concise and consistent”
- “avoid unnecessary complexity, procedures, and data collection”
- “operationally feasible”
The evaluation of risks also includes establishing their probability of occurrence, impact and detectability. Within risk control, questions such as “What risk level is acceptable?” and “What actions should I take if these levels are exceeded?” should be considered. Risk communication needs to include documentation and communication activities and transparency, while risk review should be continuous. Finally, risk reporting should include a final study report describing important deviations from predefined tolerance limits.
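As an illustration, the probability/impact/detectability evaluation is often operationalized as an FMEA-style score compared against a predefined tolerance limit. The sketch below is a minimal, hypothetical example: the 1-to-5 scales, the risk names and the threshold are all invented, and nothing here is prescribed by ICH E6 (R2).

```python
# Hypothetical FMEA-style risk scoring: score = probability x impact x detectability.
# All scales, risk names and the tolerance threshold are illustrative only.

def risk_score(probability: int, impact: int, detectability: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); a higher detectability
    rating means the risk is HARDER to detect, so it raises the score."""
    for factor in (probability, impact, detectability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be rated 1-5")
    return probability * impact * detectability

TOLERANCE = 27  # illustrative acceptable-risk threshold, chosen arbitrarily

risks = {
    "ePRO diary non-compliance": (4, 3, 2),
    "mis-calibrated device":     (2, 5, 4),
    "protocol deviation":        (3, 2, 2),
}

# Risks that exceed the tolerance limit would need a documented mitigation plan.
needs_mitigation = {name: risk_score(*r) for name, r in risks.items()
                    if risk_score(*r) > TOLERANCE}
```

Scores above the tolerance limit would feed the mitigation planning and, later, the risk review and reporting steps described above.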
The notion of CRO oversight
Within ICH E6 R2, the addendum states: “The sponsor should ensure oversight of any trial-related duties and functions carried out on its behalf, including trial-related duties and functions that are subcontracted to another party by the sponsor’s contracted CRO(s).” But what is the CRO’s approach to risk-based quality management (and monitoring)? Because transferring trial-related duties and functions to a CRO carries risk, it is the sponsor’s responsibility to manage that risk within its own RBQM system.
The RBQM strategy depends on who oversees the monitoring activities. Because “the ultimate responsibility for the quality and integrity of the trial data always resides with the sponsor,” the sponsor should perform regular independent data quality assessments (risk control). This goes beyond compliance checks (e.g., vendor audits).
When it comes to RBx, on-site monitoring is a historical risk control activity: the sponsor’s pair of eyes at the site and the connection between the sponsor and the investigator. The introduction of centralized monitoring should provide additional monitoring capability: a review of accumulating data, which may include statistical analyses, supported by appropriately qualified and trained persons (e.g., data managers, biostatisticians). This implies the creation of a new role to support the monitor and new techniques to detect data anomalies; in other words, a second pair of eyes to complement the monitor. RBM is one risk control among many others in an RBQM strategy: it controls some of the identified risks by mixing on-site and central monitoring activities.
Monitors have historically not been trained to meet some of these new ICH obligations; they must be trained and/or supported by data scientists. Some monitoring activities are now the responsibility of a group of people who were not necessarily prepared to work together or to take on new (complementary) roles and responsibilities. Section 5.18.6 details the monitoring report, but who is responsible for reporting what (on-site vs. centralized)? And when? There is also confusing terminology, some of which is used interchangeably in error: RBQM, RBM, reduced SDV, Key Risk Indicators, remote monitoring, central monitoring, and Quality by Design.
In summary, RBQM is global: it impacts and involves everyone. Monitoring activities fall into the following categories:
- Activities independent of any Risk-Based approach (training, GCP compliance)
- Activities related to the Risk Controls (KRI dashboard review, Data Quality Assessment, SDV?, SDR?) => derived from the risk planning and evaluation process
- Activities related to the Corrective Actions (visit, phone call, SDV?, SDR?) => responding to the output of risk controls
Centralized monitoring itself can follow:
- A Supervised Approach (ICH E6 R2 Section 5.0)—defining Key Risk Indicators and Quality Tolerance Limits to visualise site and study performance
- An Unsupervised Approach (ICH E6 R2 Section 5.18.3)—using technology to automatically detect data quality and integrity issues, such as through Central Statistical Monitoring. These approaches are complementary, not mutually exclusive
- An Unsupervised Approach should also be considered to address ICH E6 R2 Section 5.2.2 (CRO oversight)
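The supervised approach can be as simple as comparing each site’s Key Risk Indicator values against predefined quality tolerance limits. Below is a minimal sketch using an adverse-event-rate KRI; the site data, site names and tolerance band are all invented for illustration, not taken from any guideline or real study.

```python
# Illustrative supervised KRI check: compare each site's adverse-event
# reporting rate (AEs per patient) against a predefined tolerance band.
# Site data and limits are invented for illustration.

QTL_LOW, QTL_HIGH = 0.5, 3.0  # assumed quality tolerance limits (AEs/patient)

sites = {
    "site_101": {"patients": 20, "aes_reported": 15},
    "site_102": {"patients": 18, "aes_reported": 60},  # suspiciously high
    "site_103": {"patients": 25, "aes_reported": 30},
    "site_104": {"patients": 22, "aes_reported": 2},   # suspiciously low
}

def kri_flags(sites, low, high):
    """Return the sites whose AE rate falls outside the tolerance band."""
    flags = {}
    for site, d in sites.items():
        rate = d["aes_reported"] / d["patients"]
        if not low <= rate <= high:
            flags[site] = round(rate, 2)
    return flags

# Flagged sites would surface on a KRI dashboard for targeted follow-up,
# e.g. a phone call or a triggered on-site visit.
flagged = kri_flags(sites, QTL_LOW, QTL_HIGH)
```

Both under- and over-reporting are flagged here, since an abnormally low AE rate can signal under-reporting just as an abnormally high one can signal a site or data problem.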
Central statistical monitoring has, for example, surfaced issues such as:
- Mis-calibrated equipment—all the devices supplied by the sponsor to sites in one particular country were mis-calibrated at source
- Site tampering/sloppiness—a research coordinator who propagated certain clinical values between visits in order to save time interacting with each patient
- Patient ePRO diary fraud—where patient self-assessment diaries were not handed to patients and the site staff fabricated the data on behalf of the patient
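The site tampering/sloppiness scenario, where values are propagated between visits, is the kind of pattern an unsupervised statistical check can surface: the offending site shows implausibly low within-patient variability relative to its peers. Below is a minimal sketch with invented readings, using a simple z-score test in place of a full Central Statistical Monitoring engine.

```python
import statistics

# Per-site systolic blood pressure readings across visits (invented data).
# site_C has simply copied the same value forward between visits.
site_readings = {
    "site_A": [128, 135, 122, 141, 130, 126],
    "site_B": [118, 133, 129, 124, 137, 121],
    "site_C": [130, 130, 130, 130, 130, 130],  # propagated values
    "site_D": [125, 139, 120, 132, 128, 134],
}

def low_variability_sites(readings, z_cutoff=-1.5):
    """Flag sites whose standard deviation is an outlier on the low side
    relative to the other sites in the study."""
    sds = {s: statistics.pstdev(v) for s, v in readings.items()}
    mean_sd = statistics.mean(sds.values())
    sd_of_sds = statistics.pstdev(list(sds.values()))
    return [s for s, sd in sds.items()
            if (sd - mean_sd) / sd_of_sds < z_cutoff]
```

Real CSM platforms run many such tests across all variables at once; the point of the sketch is only that the pattern is invisible to SDV (each copied value matches its source document perfectly) yet stands out statistically.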
How to avoid a Form FDA 483
There are many regulatory concerns regarding quality issues, some of which were recently addressed by CDER, FDA at DIA 2015.4 If a clinical site receives a Form FDA 483, the FDA may consider data from that site unreliable; however, this determination depends on:
- The nature of the observation identified on the Form FDA 483
- The extent to which it was found
- Whether it resulted in significant harm to trial participants
- The impact of the observation on critical study data/procedures
For sites that have previously been inspected, relevant questions include:
- What specific observations were listed, and what was the final classification of the inspection?
- What impact did the observations have on the application (e.g., check Clinical Inspection Summaries)?
- What corrective actions did the site take in response to the observations?
- An observation may point to an unanticipated, but critical quality issue.
- Headquarters reviews Form FDA 483s, supporting evidence, and any response from the inspected entity, and provides a recommendation to the Review Division on data reliability and the adequacy of trial participant protections
FDA’s recommendations include:
- The protocol should be considered the blueprint for quality
- Conduct a risk assessment to identify and evaluate risks to critical study data and processes
- Monitoring is only one aspect of the processes and procedures needed
- The monitoring plan should be designed to address the important and likely risks identified during risk assessment; FDA discourages a “one size fits all” approach to monitoring
New technologies are available that, when used appropriately, permit modernization of clinical trial conduct to better ensure human subject protection and data quality, including clinical trial design, management, oversight, conduct, documentation and reporting.
The RBQM repository in Diagram 2 can help organizations avoid receiving a Form FDA 483 by establishing best practices. The FDA, despite its strong recommendations, cannot insist that organizations upgrade to any given technology. But a commitment to using industry-best instrumentation and systems in FDA-regulated research and clinical trials can stave off misgivings about a site’s commitment to quality.
Every organization faces unique challenges when it comes to implementing RBM. There is no ‘one-size-fits-all’ approach that will ensure success, as each organization has specific influencing factors such as therapeutic areas, existing processes, and technologies. However, there is tremendous ROI opportunity with effective RBM, including higher data quality and improved patient safety, greater resource efficiency, reduced on-site monitoring, and shorter study timelines (enrolment and time to database lock). The ability to identify anomalous data and site operational issues enables much more proactive and efficient management of clinical data quality and patient safety, optimization of on-site monitoring, and a significant reduction in overall regulatory submission risk. FDA supports and encourages the development of systematic approaches that aim to improve clinical trial quality and efficiency.
Intelligent statistical monitoring platforms augment and support RBM processes, providing sponsors with a unique ability to detect anomalous study data and develop mitigation plans to ensure the success of the trial. The FDA has recognized the value of such technologies and has agreed a CRADA (Cooperative Research and Development Agreement) with certain organizations, such as CluePoints, to help it better prepare for sponsor site inspections. The ability to assess and detect atypical data (i.e., fraudulent activity, mis-calibrated equipment readings, sloppiness, poor understanding of data requirements, etc.) that cannot easily be identified through traditional EDC/SDV methods gives an organization every opportunity to resolve potential failings before filing.
Integrating RBx software into clinical trials will enable sponsors and CROs to achieve positive results in terms of significant cost and efficiency savings, as well as giving peace of mind that the data is accurate and conforms to industry regulations.
1. Tufts Center for the Study of Drug Development, July/August 2018 Tufts CSDD Impact Report.
2. JAMA, January 22/29, 2014, Vol. 311: “Scientific and Regulatory Reasons for Delay and Denial of FDA Approval of Initial Applications for New Drugs, 2000-2012.”
3. Therapeutic Innovation & Regulatory Science, 2014, Vol. 48(6), 671-680: “Evaluating Source Data Verification as a Quality Control Measure in Clinical Research.”
Richard Davies joined CluePoints in September 2018 and is based in the UK. As VP, Solution Expert his role is to support organizations adopting CluePoints’ solutions from a technical, functional and business process perspective, particularly as they execute their vendor selection programs. Additionally, he has product management responsibilities and provides a bridge between customers, prospects and ongoing product development.