
Oh Grow Up! Analytical Methods in the Age of PAT



Guidances developed for chromatography may not work for Process Analytical Technology. Don’t wait for them to catch up; use other methods.



By Emil W. Ciurczak, DoraMaxx Consulting



Published May 1, 2014
Since the inception of commercial pharmaceutical production with associated product analyses, analytical methods have been designed to be run under controlled conditions (hence, the air-conditioned, well-lit, isolated laboratories staffed by technicians wearing lab smocks, gloves, and eye-protection). In the past decade, spurred on by the USFDA’s Process Analytical Technology (PAT) Guidance (draft 2002, final 2004), more and more companies are looking to characterize materials in the process stream.
While this approach may be somewhat new to many in the pharmaceutical industry, such assays have been common to most other industries for many years.

For example, in 1970, when I had a summer job at a paper mill, I was fascinated by the mid-range infrared monitor used to measure the moisture in a moving paper web. Not only did the device monitor the moisture content, it also adjusted the pressure on the rollers to keep the water level constant. Forty years later, much of the pharma industry is still “looking into” measuring moisture in real time.

As more and more analytical method developers look to move their analyses from the cozy lab to the harsh environment of the process line, they are looking for some guidance from either the government(s) or professional associations.
Unfortunately, most of the outlines for performing pharmaceutically significant tests were written for:
  1. the chemical quality of the API and excipients (ID, purity, heavy metals, residue on ignition),
  2. the amount of API and/or breakdown products in dosage forms (TLC, GC, HPLC), or
  3. the “performance” of the final dosage form (disintegration, dissolution, hardness, etc.). These tests were never intended to be predictors of processability.
In an effort to harmonize analytical (and other) techniques, the ICH (International Conference on Harmonization) has issued a series of consensus Guidances. Since the Conference includes agencies such as the USFDA, EMA, and others, as well as “civilian” members, it is little wonder that Guidances take years to produce. The Guidance for validating analytical methods, ICH Q2 (R1) [incorporated 2005], was therefore developed only with current techniques in mind.

ICH Q2 (R1) is Meant for Chromatographic, Not PAT, Methods
Unfortunately, “current methodology” almost always refers to chromatographic methods. The Guidance lists very laudable parameters to be met, such as accuracy, precision, repeatability, and reproducibility. However, with their minds firmly set on current chromatographic laboratory methods, the authors also included Limit of Detection (LOD) and Limit of Quantitation (LOQ). Both are valuable when performing purity tests on APIs and stability tests on dosage forms, but not at all applicable to process tests, where the API content is most often 10% or higher. [Ironically, in 1970, an HPLC method in an NDA (New Drug Application) or ANDA (Abbreviated New Drug Application) would have disqualified the filing. The FDA was dead set against LC, often saying “TLC or titrations are good enough.”]

Yesteryear’s Sea Change: From Titration to HPLC
The sea change came as the newest members of the Agency graduated with a thorough knowledge of the benefits of HPLC over titrations, especially for trace impurities. There is no denying that safety has been improved by chromatographic analysis of clinical and stability samples. However, for API synthesis and solid dosage form production, chromatographic techniques are no longer anywhere near fast enough.

This article will touch on alternatives that can be used for modern process monitoring.

API synthesis, controlled by spectroscopic methods (Raman, UV, MIR, NIR), seems to be easily accepted by regulatory agencies, since the final (bulk) product may easily be sampled and assayed by conventional chromatographic means.

Obviously, in the case of the API, trace impurities are critical to measure and control. However, once the purity has been established and the tabletting process shown not to cause degradation, all we need to monitor is the uniformity of API level from tablet (or capsule) to tablet.

Since we are not interested in measuring trace components (impurities) in production, the levels of detection and quantitation are superfluous. Most regulatory agencies would agree with the dismissal of these two parameters.

Another parameter is Range. The recommended ranges for standards are, for assay of a finished product, 80 to 120% of label claim or 70 to 130% for content uniformity. For an HPLC assay, where the standards are, in essence, simply API and excipients weighed into flasks, solvents added, and shaken, this is a logical request.

However, for a process measurement, samples of the product are not available with API content ranging between 70 and 130%. These levels may be achieved in two ways: generate deliberately out-of-specification (OOS) tablets on a commercial tablet press (triggering an FDA inspection), or generate them on a developmental press, usually in R&D or the pilot plant. Either way, the tablets will, in almost all cases, be used “as is” to develop a calibration for process analysis.

There could, however, be difficulties in using artificial tablets for calibration. If a spectroscopic method is used, specifically Near-Infrared, any physical attributes will contribute to the spectra. That is, the hardness and density of the tablets will cause spectral variations; particle sizes within the tablet will cause variations; and API spectra will be slightly distorted by tablet variations.

Chemometricians often recommend that production tablets be used for calibrations. These will best reflect what is termed the “process signature” of the production-sized press and associated equipment. However, that leads to problems with another parameter: Linearity.

The linearity of the assay is actually based on the linearity of an HPLC assay, namely a simple Beer’s law plot of concentration versus absorbance. A series of solutions is prepared, using an analytical balance and “Class A” volumetric glassware: mixtures of API and excipients are weighed into volumetric flasks and dissolved such that the final concentrations are equivalent to tablets containing between 70 and 130% (sometimes even 50 to 150%) of the label claim. The resultant solutions are claimed to be “recovered” drug and are used to build a calibration for either a UV or HPLC method. In fact, these API amounts were not actually recovered; they simply represent what would be recovered should such tablets be encountered.

The resultant absorbance (or area-under-the-curve) versus concentration graph is used as the calibration for subsequent assays. Logically and theoretically, any such curve should be a straight line, with the plot of recovered versus nominal concentration having a slope of 1.0000 (i.e., 45 degrees) and a correlation coefficient of nearly 1.0000 (QA departments often require an R-value between 0.95 and 0.99).
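The arithmetic behind such a calibration is ordinary least squares. The sketch below uses invented absorbance readings for hypothetical standards at 70 to 130% of label claim (none of these numbers come from the article) to compute the slope, intercept, and R that a Beer's law plot would report:

```python
import statistics

# Hypothetical Beer's law standards: concentration as a fraction of label
# claim, with invented absorbance readings for illustration only.
conc = [0.70, 0.85, 1.00, 1.15, 1.30]
absorbance = [0.352, 0.426, 0.498, 0.577, 0.648]

mean_x = statistics.mean(conc)
mean_y = statistics.mean(absorbance)
sxx = sum((x - mean_x) ** 2 for x in conc)
syy = sum((y - mean_y) ** 2 for y in absorbance)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, absorbance))

slope = sxy / sxx                    # least-squares slope
intercept = mean_y - slope * mean_x  # least-squares intercept
r = sxy / (sxx * syy) ** 0.5         # Pearson correlation coefficient

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, R = {r:.4f}")
```

With clean standards measured under laboratory conditions, R values above 0.999 are routine, which is precisely why the statistic says so little about a process method.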

Problems with Traditional Calibration Methods
Unfortunately, there are several problems with this approach:
  1. The API is not actually “recovered,” so the method is not an indicator of what would happen should an analyst actually encounter a 130% (of label) dosage form.
  2. The extremely high R value resulting from a controlled Beer’s law calibration does not reflect the reality of a process measurement. In that case, the results from a reference method, e.g., HPLC, are plotted against the results of the process method, e.g., NIR, each having its own error. [USP allows ±2% variation for duplicate injections in HPLC.] Achieving a near-perfect R in this case is rare, indeed.
  3. Another problem is the concept of R as a measure of linearity. Most other statistical tests are framed as negatives (a Q-test tells whether a point may be excluded, i.e., does not belong to the group; a paired t-test tells us that one method is not equivalent to the one against which it is tested), so using R as a positive test leads to a false sense of linearity. Why? Without increasing either the precision or the accuracy of an analytical method, R may be improved by simply increasing the range.
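That last point is easy to demonstrate numerically. In this sketch (illustrative numbers, not real assay data), the same fixed analytical error is applied to simulated reference-versus-process results over a narrow production range and over the 70 to 130% validation range; nothing about the method improves, yet R does:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def simulated_r(lo, hi, n=50, noise_sd=2.0):
    """Reference-method vs. process-method results over a given range
    (% of label claim), with the same fixed analytical error each time."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [x + random.gauss(0.0, noise_sd) for x in xs]
    return pearson_r(xs, ys)

random.seed(0)
r_narrow = simulated_r(95, 105)  # a realistic production range
r_wide = simulated_r(70, 130)    # the 70-130% validation range
print(f"narrow range: R = {r_narrow:.3f}")
print(f"wide range:   R = {r_wide:.3f}")
```

The wide-range R comes out far closer to 1 even though every individual measurement carries exactly the same error.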
Use Durbin-Watson Statistics, Rather than Correlation Methods
Using the synthetic data sets proposed by Francis Anscombe in 1973 [5] (Figure 1, left, and Table, above), it can be seen that the correlation coefficient is one of the least efficient means of finding non-linearities (remembering that statistics are meant to show variances, not similarities). A better method is the Durbin-Watson statistic.
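Anscombe's point is easy to verify: his four published data sets (a clean line, a smooth curve, an outlier-tilted line, and a single high-leverage point) all return essentially the same correlation coefficient. The x and y values below are the ones from his 1973 paper:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Anscombe's four data sets (1973); sets I-III share the same x values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

for i, (xs, ys) in enumerate(quartet, 1):
    print(f"set {i}: R = {pearson_r(xs, ys):.2f}")  # every set prints R = 0.82
```

Four radically different shapes, one R value: the correlation coefficient cannot tell a straight line from a parabola or an outlier.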

The Durbin-Watson statistic [6] is a test statistic used to detect the presence of autocorrelation (a relationship between values separated from each other by a given lag) in the residuals (prediction errors) from a regression analysis. In short, the residuals (differences between theoretical and found values) are plotted versus the theoretical values, arranged in increasing order.

The squared differences between adjacent residuals are summed and divided by the sum of the squared residuals. A value near 2 implies uncorrelated residuals, i.e., a line that is “not non-linear”; a value well below 2 implies positively correlated residuals, the very pattern that curvature produces, so the line may be judged “non-linear.” In keeping with “normal statistics,” this negative framing does not prove linearity; it can only detect, or fail to detect, non-linearity.
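A minimal implementation of that standard definition, fed two illustrative residual patterns (invented for this sketch, not real assay data), shows how the statistic separates a straight-line fit to curved data from a well-behaved fit:

```python
def durbin_watson(residuals):
    """Durbin-Watson d: sum of squared successive differences of the
    residuals, divided by the sum of the squared residuals. Values near 2
    suggest uncorrelated residuals; values well below 2 suggest positive
    autocorrelation, the pattern curvature leaves when residuals are
    ordered by predicted value."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Residuals of a straight line fitted to curved data: one sign at the
# ends, the other in the middle (the classic non-linearity pattern).
curved = [0.9, 0.4, -0.1, -0.5, -0.7, -0.7, -0.5, -0.1, 0.4, 0.9]

# Residuals that flip sign irregularly, as a well-behaved fit would give.
well_behaved = [0.3, -0.4, 0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4]

print(f"curved fit:       d = {durbin_watson(curved):.2f}")        # well below 2
print(f"well-behaved fit: d = {durbin_watson(well_behaved):.2f}")  # 2 or above
```

Note that d depends only on the pattern of the residuals, not on how wide a concentration range was covered, which is exactly the property the correlation coefficient lacks.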

Why is this approach better? The resultant value is not influenced by the range of the sample set, and it is far better suited to process samples, since it does not rely on synthetic samples and their inherent problems.

You might conclude that it is not worth the effort to develop an in-line analysis, given the difficulties and the lack of documentation available. However, the control of a process and the resulting product improvements make it worth the effort to perform “proper” statistical validations and to enlighten the QA personnel at your company. The FDA has been making a concerted effort to train all its inspectors in statistics and MVA (multivariate analysis) in order to allow them to fairly judge your process controls.

PAT is well worth the time it takes to master it. The Guidances will catch up, I’m sure. It just doesn’t pay to wait for them. 

References
  1. U. S. Pharmacopeia (USP 37 – NF 32)
  2. ICH Q2 (R1) Guideline:  “Validation of Analytical Procedures: Text and Methodology”
  3. ICH Q2A Guideline: “Text on Validation of Analytical Procedures”
  4. ICH Q2B Guideline: “Validation of Analytical Procedures: Methodology”
  5. Anscombe, F. J., “Graphs in Statistical Analysis,” American Statistician 27 (1): 17–21 (1973).
  6. Durbin, J. and Watson, G. S., “Testing for Serial Correlation in Least Squares Regression, I and II,” Biometrika 37: 409–428 (1950) and 38: 159–178 (1951).

Emil W. Ciurczak has advanced degrees in Chemistry from Rutgers and Seton Hall and has worked in the pharmaceutical industry since 1970 for companies that include Ciba-Geigy, Sandoz, Berlex, Merck, and Purdue Pharma, where he specialized in performing method development on most types of analytical equipment. In 1983, he introduced NIR spectroscopy to pharmaceutical applications, and is generally credited as one of the first to use process analytical technologies (PAT) in drug manufacturing and development.

