Let’s consider how this could work. In Figure 1, different levels of parameter 1 and parameter 2 lead to progressively different values of tablet dissolution and friability. The region where the two curves overlap is the design space in this example. To establish and control the process, then, you would find the ranges of parameter 1 and parameter 2 that keep final product friability and dissolution within specification.
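The overlap idea can be sketched in code. This is a toy illustration, not a real process model: the response functions, specification limits, and parameter ranges below are all assumed for the example.

```python
# Hypothetical design-space scan. The response models and spec limits
# are illustrative assumptions, not real process data.

def dissolution(p1, p2):
    # toy response-surface model: % drug released at a fixed timepoint
    return 60 + 0.8 * p1 + 0.5 * p2

def friability(p1, p2):
    # toy model: % tablet weight loss in the friability test
    return 1.5 - 0.01 * p1 - 0.005 * p2

DISSOLUTION_MIN = 80.0  # assumed specification: at least 80% released
FRIABILITY_MAX = 1.0    # assumed specification: at most 1.0% weight loss

# The design space is every (p1, p2) combination meeting BOTH specs,
# i.e., the region where the two acceptable ranges overlap.
design_space = [
    (p1, p2)
    for p1 in range(0, 101, 10)
    for p2 in range(0, 101, 10)
    if dissolution(p1, p2) >= DISSOLUTION_MIN
    and friability(p1, p2) <= FRIABILITY_MAX
]
```

Operating anywhere inside `design_space` keeps both quality attributes within specification, which is the control strategy the figure illustrates.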
How about data management and ensuring the fidelity of data? The Shared Health and Research Electronic library (SHARE) is a hallmark of standards-based automation. Systems like this enforce valid data content by their very design, mistake-proofing the data that enter the protocol.
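The mistake-proofing principle can be sketched as validation at the point of entry, so that invalid records never reach the dataset. The field names and ranges below are illustrative assumptions, not SHARE's actual standards.

```python
# A minimal sketch of design-level "mistake-proofing": records are
# checked at entry, so invalid data cannot enter the dataset.
# Field names and acceptable ranges are illustrative assumptions.

def validate_record(record):
    """Return a list of validation errors; an empty list means the
    record is acceptable."""
    errors = []
    age = record.get("age")
    if not isinstance(age, int) or not 18 <= age <= 120:
        errors.append("age must be an integer between 18 and 120")
    if record.get("sex") not in {"M", "F"}:
        errors.append("sex must be 'M' or 'F'")
    return errors

def submit(dataset, record):
    """Append the record only if it passes validation; return any errors."""
    errors = validate_record(record)
    if not errors:
        dataset.append(record)
    return errors
```

The point of the design is that the quality check is structural rather than procedural: there is simply no code path by which a bad record enters the dataset.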
Feasibility questions still remain, principally: how to maintain the historic rigor when the QbD and risk-based approach dictates some fundamental shifts in how trial design is done, making the process less isolated and compartmentalized and more engaging and interactive with the primary data source (the study participants themselves). The Trial Master File is designed to support exactly that.
The following Quality by Design principles will be very important:
- The protocol itself and how it’s written
- The change process for any protocol amendments (how standardized and objective is the process for identifying and selecting amendments?)
- The patient recruitment program
- Subdivision of patients for treatment and control as well as cross-over if needed
- The supply of materials for the trial itself to the trial sites
- Site selection and oversight
- The system of capturing and collecting data during the trial
- The variability introduced by a priori assumptions about the trial
Additionally, there have been many attempts to incorporate Quality Risk Management into site study operations; these have largely succeeded on the following bases:
- Ensure that all decisions are vetted by this approach, including those that seem at the outset too trivial to be assessed in this manner.
- Always pre-specify risk thresholds for action.
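One way to make pre-specified thresholds concrete is an FMEA-style risk score with a fixed action limit. This is a generic sketch: the scoring scales and the threshold value are illustrative assumptions, not a prescribed method from the text.

```python
# Hypothetical FMEA-style risk scoring with a PRE-SPECIFIED action
# threshold. The 1-10 scales and the threshold are illustrative.

ACTION_THRESHOLD = 100  # fixed before any risks are scored

def risk_priority_number(severity, occurrence, detectability):
    """Each factor is scored 1-10; higher means worse."""
    return severity * occurrence * detectability

def requires_action(severity, occurrence, detectability):
    """The decision rule is mechanical: the threshold was set in
    advance and is never adjusted after scores are seen."""
    return risk_priority_number(severity, occurrence, detectability) >= ACTION_THRESHOLD
```

Because the threshold is fixed before scoring, the decision cannot be bent after the fact to accommodate a risk you would rather not act on.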
Pre-specifying thresholds matters for the same reason that, in null hypothesis significance testing, you must fix your significance level (α) before you see the p-value, not after.
So let’s say your two-sample t-test returned a p-value of .052 (and you thought, "Ahh, close enough"), or it was .081 and you decided that, given the input data quality, a significance level of .1 should be used. In either case you are violating the fundamental basis of hypothesis testing and distorting the false-positive and false-negative rates of your study.
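The correct discipline can be shown in a few lines. This sketch uses the hypothetical p-values from the text; the point is that α is fixed before the data are analyzed and the decision rule is then mechanical.

```python
# A minimal sketch of a pre-specified decision rule for hypothesis
# testing. The p-values are the hypothetical examples from the text.

ALPHA = 0.05  # significance level fixed BEFORE the data are analyzed

def decision(p_value, alpha=ALPHA):
    """Apply the pre-specified significance level; alpha is never
    adjusted after the p-value is seen."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decision(0.052))  # fail to reject H0: no post-hoc "close enough"
print(decision(0.081))  # fail to reject H0: alpha stays at 0.05
```

Moving the goalposts after the fact (treating .052 as significant, or raising α to .1 for a .081 result) is exactly what the fixed rule forbids.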
A list of current challenges with implementation and proposed solutions developed by the attendees is captured here: http://www.cbinet.com/sites/default/files/files/Roundtable%20Notes.pdf. Has your organization evaluated QbD methods for use in its clinical trial design processes? Please share your thoughts.
Dr. Ben Locwin
Healthcare Science Advisors
Ben Locwin, PhD, MBA is President of Healthcare Science Advisors and is an author of a wide variety of scientific articles for books and magazines. He is also a frequent speaker and consultant for a variety of industries including behavioral and psychological, food and nutrition, pharmaceutical, and academic. Follow him at @BenLocwin.