In considering the above scenario, it is critical that a firm’s investigation procedures have controls in place to provide assurance that the investigation is thorough, that all potential causes have been evaluated, and that suspected root causes have been either proven or disproven through hypothesis testing. In doing so, the firm is protected from any accusation of invalidating “inconvenient” data without a justified root cause in order to allow the release of material.
The cornerstone of an investigation is the root cause analysis, since the identified root cause will drive the impact assessment of the investigation and form the basis for any necessary Corrective and Preventive Actions (CAPAs). The October 2006 FDA Guidance, “Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production,”1 indicates that the focus of the initial Phase I laboratory investigation is to confirm the accuracy of the results and to rule out the hypotheses that the cause of the OOS was a laboratory preparatory error or, alternatively, instrument performance. The October 2017 MHRA Guidance, “Out of Specification & Out of Trend Investigations,”2 references an initial hypothesis test plan in which the original working stock solutions can be tested to confirm there was no preparatory error—but this should not include preparing a new solution from the original sample, which would constitute re-testing. Such initial hypothesis testing should confirm that the laboratory method was followed as written, that there were no laboratory errors, and that the associated instrumentation was functioning as required.
When considering potential laboratory root causes, another consideration is the capability of the analytical method, which is directly linked to the associated analytical method validation/verification. Method capability addresses whether the total measurement uncertainty of the method is appropriate for the corresponding specification range. Evidence of such a root cause should also be supported by a trend assessment of the method—previous similar investigations for the same method would be expected—since a method with excessive measurement uncertainty can return results that are within specification or OOS without sufficient certainty as to what the true result is. This highlights the importance of periodic performance monitoring of analytical test procedures, a key component of the lifecycle control of test procedures, which can be an invaluable source of information when conducting root cause analysis.
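The method-capability question above is ultimately arithmetic: how much of the specification range is consumed by the method’s measurement uncertainty? The sketch below illustrates one way to screen for this. The spec limits, the combined standard uncertainty, and the 25% acceptance threshold are all illustrative assumptions, not values drawn from any guidance.

```python
# Illustrative sketch: screening whether a method's total measurement
# uncertainty is adequate for its specification range. All numeric
# values and the 25% threshold are assumed for illustration only.

def uncertainty_ratio(std_uncertainty: float, spec_low: float,
                      spec_high: float, k: float = 2.0) -> float:
    """Expanded uncertainty (coverage factor k) as a fraction of the
    specification width."""
    expanded = k * std_uncertainty
    return expanded / (spec_high - spec_low)

# Hypothetical assay method: spec 98.0-102.0 % of label claim,
# combined standard uncertainty 0.3 % from the validation data.
ratio = uncertainty_ratio(0.3, 98.0, 102.0)
print(f"measurement uncertainty consumes {ratio:.0%} of the spec range")
if ratio > 0.25:  # illustrative acceptance threshold
    print("method capability is marginal for this specification")
```

A result near the specification limit from a method whose uncertainty consumes a large fraction of the range cannot be called in-specification or OOS with confidence—which is the trend signal the paragraph above describes.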
Investigation root cause analysis should consider all associated data/results and address any inconsistencies. For example, if it is suspected that the root cause is instrument error/performance, but the method’s system suitability met the test procedure requirements, then the investigation must address that dichotomy. In this example, the cause of the dichotomy may be a deficiency in the method’s system suitability criteria, which in turn may necessitate a corrective action to enhance the system suitability requirements via change control. Such a corrective action would ensure that, going forward, such instrument issues are “flagged” by the system suitability prior to generating sample data. An investigation should holistically assess the totality of data. For example, for an API assay OOS, the balance of the release testing data (e.g., impurity content, moisture) must be evaluated to determine whether there is evidence supporting a material quality root cause—i.e., material attributes that are depressing the assay value—or whether another potential root cause exists.
If, during the investigation, a laboratory “error” is identified, then to assign the error as the root cause it must be demonstrated how the error can account for the OOS. For example, if a dilution error in preparing the sample solution was identified during the OOS investigation, then in order to confirm that this error was the root cause, the investigation would need to demonstrate that the dilution error can account for the OOS result. It is imperative that the investigation does not assume that the error is the cause; rather, the investigation must demonstrate how the error produced the result that led to the OOS. In circumstances where there is clear evidence of the error (e.g., incorrect glassware used) and the error can explain the result that was obtained, hypothesis testing may not be required to confirm the cause.
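The demonstration called for above is often a back-calculation: does the suspected error, applied to the method arithmetic, reproduce the observed OOS value? The sketch below works through a hypothetical dilution error; the volumes, reported result, and spec limits are invented for illustration.

```python
# Illustrative sketch: checking arithmetically whether a hypothesized
# dilution error can account for an OOS assay result. Volumes,
# results, and specification limits are assumed for illustration.

def corrected_result(reported: float, intended_volume: float,
                     actual_volume: float) -> float:
    """Result that would have been obtained had the intended dilution
    volume been used (measured concentration scales inversely with
    the final dilution volume)."""
    return reported * actual_volume / intended_volume

# Hypothetical case: reported assay 199.8 % of label claim (OOS high).
# Hypothesis: a 50 mL volumetric flask was used where the method
# specifies 100 mL, doubling the measured concentration.
expected = corrected_result(199.8, intended_volume=100.0,
                            actual_volume=50.0)
print(f"corrected result: {expected:.1f} % of label claim")
```

Only if the corrected value is consistent with the batch’s other data (e.g., within the 98.0–102.0% spec in this hypothetical) does the error fully account for the OOS; an error that explains only part of the deviation does not close the investigation.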
However, if the error is only a suspected cause, then a pre-approved hypothesis test plan is required to demonstrate the relationship between the suspected cause and the result. For example, consider a stability OOS where there is evidence of a packaging failure and the suspected cause is therefore ingress of oxygen into the sample headspace causing oxidative degradation. The investigation would need to present data demonstrating the relationship between the packaging failure, the resulting headspace oxygen content, and the impact on the stability test data, such as the sample impurity profile.
The hypothesis test plan should pre-define the expected criteria/results that will prove or disprove a hypothesis, along with the scientific rationale or justification for those criteria. Note that, within the test plan, generating data that meets material specifications does not by itself confirm a root cause. What is critical is that the test plan demonstrates the correlation between the suspected cause and the measured material attribute that is resulting in the OOS.
It must be understood that hypothesis testing is not limited to a suspected laboratory root cause; it can also be required during a manufacturing investigation. For example, if it was noted during the OOS investigation that there was an atypical processing condition—such as time, temperature, or pressure—then a suspected root cause may be derived in terms of the potential impact of that processing variance on the material attribute; however, hypothesis testing would be required to confirm such a suspected cause. This again is based upon generating data on the processing condition and the suspected impacted material attribute and demonstrating a correlation.
If a confirmed laboratory root cause is identified that supports the invalidation of the OOS result, then testing to replace the original OOS result can be justified. However, prior to conducting such testing, it must be considered whether a laboratory CAPA is required before the repeat testing; if not, justification should be documented as to why such a CAPA was not required.
The question that must be asked is: how can it be confirmed that the repeat testing was not impacted by the OOS root cause? For example, if an OOS root cause was attributed to a leaking Karl Fischer chamber allowing moisture ingress and producing a falsely high result, then a CAPA must be considered whereby a system performance test is required to confirm the suitability of the Karl Fischer unit prior to generating sample data. Otherwise it cannot be assured that the repeat testing data is not also impacted by the root cause. Stating that the repeat testing data meets specifications is not an adequate justification, as data that meets a material specification can still be impacted by the root cause. If the root cause of an OOS was assigned to inadequate instrument performance, then corrective actions may include enhancing the functioning of the instrument via change control—with effectivity demonstrated via instrument qualification—and conducting the repeat testing with enhanced system suitability to confirm the ongoing suitability of the instrument.
Given a capable manufacturing process, a proficient laboratory, and well-trained staff, an OOS incident is not expected; when a case does arise, every effort should be made to determine the true root cause. Only when the root cause has been determined can the impact be assessed and CAPAs assigned with confidence. The root cause therefore needs to be specific; for example, when addressing an extraneous chromatographic peak investigation, the peak should be identified to confirm the source of the impurity. Such specificity in the root cause will facilitate the determination of impact and the implementation of CAPAs.
If, for an extraneous peak investigation, the identity/source of the peak is not confirmed and the source is assigned to a probable root cause of “laboratory contamination,” then determining the impact on previous laboratory activities and assigning an effective CAPA will be a challenge, as there is no specificity to the assigned root cause. The same applies to investigations where the root cause is assigned to non-specific “human error.” When considering “human error” as a potential cause, questions should be asked as to why the error occurred. In these cases, root cause analysis tools such as fishbone diagrams and the “5 whys” can be invaluable for determining the underlying cause of the human error. When dealing with a “human error” type investigation, trending is paramount to determine whether there have been previous incidents of such errors and thus whether a CAPA is needed to address any underlying root cause.
It is understood that in rare cases for an OOS investigation, a root cause cannot be confirmed by either the laboratory or the manufacturing investigation, resulting in an inconclusive investigation. The FDA OOS Guidance1 states that for such inconclusive investigations, QA is to evaluate the totality of data and determine whether the OOS result is reflective of the material quality—with the understanding that the firm is to “err on the side of caution” when making such batch disposition decisions.
Considering the Guidance, it is critical that the firm’s investigation procedures provide explicit instructions for how QA is to evaluate the totality of data and make an informed decision. It must be recognized that releasing material where there was an OOS result whose cause cannot be confirmed is a high-risk activity requiring robust justification, including confirmation that the manufacturing process is capable/in a state of control and a commitment to trend any future occurrences. The number of inconclusive investigations as a percentage of total investigations can serve as a key metric to trend, as it can be an indicator of the quality of the firm’s investigation program.
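The metric described above is straightforward to compute and trend. The sketch below shows one way; the quarterly counts and the alert threshold are illustrative assumptions, not values from any guidance.

```python
# Illustrative sketch: trending the fraction of OOS investigations
# closed as inconclusive. Counts and the alert threshold are assumed
# for illustration only.

def inconclusive_rate(inconclusive: int, total: int) -> float:
    """Inconclusive investigations as a fraction of all investigations."""
    return inconclusive / total if total else 0.0

# Hypothetical quarterly data: (inconclusive, total investigations)
quarterly = {"Q1": (2, 25), "Q2": (3, 22), "Q3": (6, 24)}

ALERT = 0.20  # illustrative alert threshold for the metric
for quarter, (inconcl, total) in quarterly.items():
    rate = inconclusive_rate(inconcl, total)
    flag = "  <- review investigation program" if rate > ALERT else ""
    print(f"{quarter}: {rate:.0%} inconclusive{flag}")
```

A rising rate suggests investigations are stopping short of a confirmed root cause, which is the quality signal the paragraph above describes.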
In conclusion, it is paramount that a firm’s investigation program requires pre-defined test plans to confirm any suspected root cause whereby the correlation between the cause and the impact to the material attribute/reported result is demonstrated through data.
- U.S. Food and Drug Administration, “Guidance for Industry - Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production,” October 2006, https://www.fda.gov/media/71001/download
- Medicines & Healthcare products Regulatory Agency, “Out of Specification & Out of Trend Investigations,” October 2017, available for download at https://www.gov.uk/government/publications/out-of-specification-investigations
Paul Mason, PhD
Paul Mason, PhD, is a Director in the Science and Technology Practice at Lachman Consultants who has 20+ years of experience in the pharmaceutical industry. He is a quality control chemist experienced in sterile parenteral, API, and solid oral dosage forms. His experience spans finished dosage form, CMOs, and API (intermediates) manufacture support in both a quality control and analytical development setting.