Analyze This

Errors and Waste

You always have time to get it right the second time.


By: Emil W. Ciurczak

Independent Pharmaceuticals Professional

For some time now, I have been preaching about saving time, speeding up analyses, increasing output, and needing a bigger bank vault as a result. However, this time I need to discuss errors and how, sometimes, speed can be a bad thing.

First, let’s discuss the elephant in the room: ALL measurements are estimates! Take, for example, a simple ruler. Even if the lines come right to both edges of the material we are measuring, the lines themselves have a width. A true line is one-dimensional, having no width. That means we estimate where the edge of the item begins and ends. If we have 100 students measure the same object with the same ruler, the values generated will show a Gaussian distribution, assuming no actual mistakes/errors were made in the measurements.
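To make that concrete, here is a minimal sketch (in Python) of the ruler thought experiment; the “true” length and the size of the reading error are hypothetical values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

true_length_mm = 72.40   # assumed "true" length of the object (hypothetical)
reading_sd_mm = 0.25     # assumed spread of eyeball interpolation at one edge (hypothetical)

# Each of the 100 students estimates both edges, so two independent reading errors add.
edge_errors = rng.normal(0.0, reading_sd_mm, size=(100, 2))
readings = true_length_mm + edge_errors.sum(axis=1)

print(f"mean  = {readings.mean():.2f} mm")
print(f"stdev = {readings.std(ddof=1):.2f} mm")
# The individual readings scatter in a roughly Gaussian fashion around the mean,
# even though no one made an outright mistake.
```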

This error analysis works for weight, volume, and even counting large numbers of items (tablets, capsules, birds, cattle). This occurs even when the operator/technician/chemist is scrupulously careful.

Imagine when this same person is in a hurry or just being lazy! When an Out of Specification (OOS) investigation is mandated, the actual cause of failure is not always uncovered. The rule-of-thumb is to attribute the failure to “operator error.” Not always fair, but it gives us closure.

One potential source of failure is the strictness of GMP rules. After the mandated “minimum” of three demonstration batches at commercial scale has been generated for a new drug application (NDA), the production parameters (e.g., blending time, granulation time, drying time) are strictly enumerated and expected to be followed, whether or not they are the optimal times. A few years ago, I was performing a practice inspection at a client’s location to ready the company for an upcoming one by the FDA. While I was chatting with one of the consultants, he mentioned an interesting statistic.

During a recent inspection at another company, he had reviewed several years of failing batches and saw that the majority failed when the newest operators were running production. The “obvious” conclusion would be that these newer workers had not received enough training. However, upon digging deeper, it was discovered that the older operators knew which portions of the MMF (master manufacturing formula) needed “tweaking.” They made the needed corrections but, since GMP rules do not allow for “creativity,” they could not make note of these changes. The newer operators, not being privy to this information, followed the MMF as written and generated numerous failing batches. Now do you see why I so strongly support QbD? A validated design space would have given the operators leave to correct any shortcomings in their batches.

The methods used

Another cause of error is found in the analyses we are using. I have been to smaller labs where they simply apply compendial methods (USP, ASTM) without ever formally proving that they work for their products’ formulations. At one location, an ASTM titration (for hydroxyl number) was used on a raw material where, upon inspection, I noticed that the solvents used did not totally dissolve the sample! We need to keep two things in mind:

1. Most compendial methods (e.g., ASTM) are not designed for drug products. They are general methods, designed to be simple for use in any industry. BUT they should still be validated for YOUR specific use.

2. The USP was written for compounding pharmacies, not industrial production. A pharmacist, working in his corner pharmacy, needed simple, easy-to-run tests, not sophisticated technology. Since he/she would be making a small batch of pills or capsules, there was no need to perform an assay.

Also, even when the first industrial Pharma plants were built to make drugs for the masses, the batches were small by today’s standards. So, in those days, assaying 10-20 doses from a batch was sufficient. I am not going to get into what a “statistically significant” sample size is, but I doubt you would agree that 20 tablets from a 6,000,000-tablet lot qualifies. Hence, I vote for QbD and continuous manufacturing.
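A quick back-of-the-envelope calculation shows just how little a 20-tablet sample can “see”; the 1% defect rate below is purely hypothetical:

```python
# How likely is a 20-tablet sample to catch a problem in a huge lot?
defect_rate = 0.01   # hypothetical fraction of out-of-spec tablets in the lot
n_sampled = 20

# Probability that a random sample of 20 contains at least one defective tablet
p_detect = 1.0 - (1.0 - defect_rate) ** n_sampled
print(f"Chance of catching even one bad tablet: {p_detect:.0%}")   # roughly 18%
```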

OK, now let’s look at the dissolution testing of tablets for release or stability. When I began my trek through the Pharma world, there were no “standard” dissolution methods to speak of. Early attempts were designed to mimic the uptake of the API in the body. This led to some “interesting” tools: slowly rotating bowls to observe intrinsic dissolution, tablets resting on glass beads with a laminar flow of liquid, actual tubing that mimicked the intestines, etc. Eventually, we discarded the enzymes (pepsin mimicked the stomach and chymotrypsin mimicked the intestine) and settled on clean solutions, most often distilled water at 37°C. We also narrowed the equipment to either a rotating basket or a rotating paddle, with the dosage form at the bottom of the vessel. The standard vessels, paddles, baskets, volumes, etc. were outlined in the USP. That made life simpler, sort of.

The “80% in ten minutes” limit for passing an immediate release (IR) tablet seems straightforward: equilibrate the water, rotate the paddle at the required RPM, and drop the dose into the vessel. At the specified time, withdraw a sample and assay it. But if the compendial limit is being approached (e.g., a slower release than previously seen), sampling location becomes quite critical. Figure 1 shows a concentration gradient for a typical USP paddle apparatus. For a rapid dissolution, almost any position for a pipette would suffice; for a near-failure, position is critical.


Figure 1. The concentration gradients within a USP dissolution vessel. Not major differences, but the values could affect a pass/fail test.

Larger companies with larger labs often use “sippers” that are inserted and remain in place throughout the test. These would be expected to be less susceptible to this error, but where the sippers are placed still affects the values at each time point. Even carefully inserted tubes might 1) sit at slightly different depths, seeing different concentrations, and 2) affect the flow of solvent, altering release rates. All of these effects can be accounted for in a well-designed tester. The minor differences between the actual doses and the tester itself are largely averaged out by running at least six doses simultaneously.
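To illustrate how sensitive a marginal batch is to sampling position, here is a minimal sketch; the Q limit, the “true” release, and the size of the position bias are all hypothetical numbers, and the single-measurement check below is a simplification of the actual staged USP acceptance criteria:

```python
# Hypothetical numbers: a marginal batch truly releasing 81% at the time point,
# a Q limit of 80%, and a 2% low bias when the pipette sits in a
# lower-concentration zone of the vessel (compare Figure 1).
q_limit = 80.0
true_release = 81.0

for label, position_bias in (("well-mixed zone", 0.0), ("low-concentration zone", -2.0)):
    measured = true_release + position_bias
    verdict = "pass" if measured >= q_limit else "fail"
    print(f"{label}: measured {measured:.1f}% vs. Q = {q_limit:.0f}% -> {verdict}")
```

The same batch reads 81% in one spot and 79% in another, so the pass/fail call flips on sampling position alone.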

The math used

We use math constantly in the industry, from simply weighing materials to computing the assay values. For those of you who are adding PAT-themed assays (predictions), the math gets quite complex. Beer’s law uses high school math; FT-IRs use a Fourier transform (sines and cosines), also high school math. If you begin to use multi-variate analyses (heavy statistics, Principal Components, Partial Least Squares), you are working with matrix algebra, a graduate-level math.

A Beer’s law calibration is a graph of the absorbances of a series of known solutions of a standard chemical (API). The standards are placed in a known-pathlength cell (cuvette or HPLC flow cell), and the calibration curve is built with the concentrations on the X-axis and the absorbances on the Y-axis. An unknown is treated in the same manner, its absorbance is located on the calibration curve, and the concentration is “discovered.” When more than one analyte is present, simultaneous equations (algebra, another high school math) are used to glean the individual values. The goodness of both the calibration and the analysis is seen in the correlation coefficient.
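As a concrete sketch, the example below builds a Beer’s-law calibration line, reports R and R², reads an unknown off the line, and solves the two-analyte simultaneous equations; all concentrations, absorbances, and absorptivities are made-up numbers used only for illustration:

```python
import numpy as np

# Calibration standards: concentration (mg/mL) vs. measured absorbance (AU)
conc = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
absorbance = np.array([0.101, 0.198, 0.305, 0.402, 0.497])

# Least-squares calibration line: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]          # correlation coefficient R
print(f"slope = {slope:.2f}, intercept = {intercept:.4f}, R = {r:.4f}, R^2 = {r**2:.4f}")

# "Discover" the concentration of an unknown from its absorbance
a_unknown = 0.250
c_unknown = (a_unknown - intercept) / slope
print(f"unknown concentration = {c_unknown:.4f} mg/mL")

# Two analytes measured at two wavelengths: solve the simultaneous
# Beer's-law equations A(wavelength) = e1*C1 + e2*C2 (absorptivity and
# pathlength folded together; all values are hypothetical).
e = np.array([[12.0,  3.0],    # absorptivities of analytes 1 and 2 at wavelength 1
              [ 2.5, 15.0]])   # absorptivities of analytes 1 and 2 at wavelength 2
a_mix = np.array([0.80, 1.10]) # measured absorbances of the mixture
c1, c2 = np.linalg.solve(e, a_mix)
print(f"C1 = {c1:.4f} mg/mL, C2 = {c2:.4f} mg/mL")
```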

However, should you use newer techniques (e.g., Near-Infrared or TeraHertz), you will need to use multi-variate methods. In short, you will be using NIR to “predict” the API, or whatever parameter is being sought, and you will need a compendial method for the NIR calibration. The resulting correlation coefficient is the comparison of two individual methods, each with its own errors.

In the context of simple linear regression (e.g., Beer’s law):

R = the correlation between the predictor variable, x, and the response variable, y.

R² = the proportion of the variance in the response variable that can be explained by the predictor variable in the regression model.

And in the context of multiple linear regression (MLR, PLS, PCA):

R = the correlation between the observed values of the response variable and the predicted values of the response variable made by the model.

R² = the proportion of the variance in the response variable that can be explained by the predictor variables in the regression model.
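For the multivariate case, a minimal sketch (using scikit-learn’s PLSRegression on synthetic “spectra”) shows R being computed between the reference values and the model’s predictions; the data, noise level, and number of components are purely illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(seed=3)

# Synthetic data standing in for NIR spectra and reference (compendial) assay values
n_samples, n_wavelengths = 40, 200
concentration = rng.uniform(90, 110, size=n_samples)      # reference assay values (% label claim)
pure_spectrum = rng.normal(size=n_wavelengths)             # made-up component spectrum
spectra = (np.outer(concentration, pure_spectrum)
           + rng.normal(scale=5.0, size=(n_samples, n_wavelengths)))  # added noise

# Fit a PLS model and predict the same parameter the reference method measured
model = PLSRegression(n_components=2)
model.fit(spectra, concentration)
predicted = model.predict(spectra).ravel()

# R: correlation between observed (reference method) and model-predicted values
r = np.corrcoef(concentration, predicted)[0, 1]
print(f"R = {r:.4f}, R^2 = {r**2:.4f}")
```

Note that both the reference values and the predictions carry their own errors, so this R compares two imperfect methods, not a method against the truth.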

While the correlation coefficient is useful for showing a correlation between two “happenings,” it cannot be used as proof of causation (see Figure 2) or linearity. This is why it’s called a calibration curve. Some technologies, such as ion-selective electrodes or titrations, have curved or S-shaped calibrations, not straight lines.


Figure 2. Over-dependence on the correlation coefficient can lead you down a rabbit hole.
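A simple demonstration of why a high correlation coefficient does not prove linearity: fit a straight line to a deliberately S-shaped (electrode-style) response and the coefficient still comes out near 0.97, even though the residuals show obvious curvature. The data below are synthetic and chosen only to make the point:

```python
import numpy as np

x = np.linspace(0, 10, 21)                 # hypothetical concentration scale
y = 100.0 / (1.0 + np.exp(-(x - 5.0)))     # clearly non-linear, S-shaped response

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}")                      # about 0.97 despite the obvious curvature

# The residuals from a straight-line fit reveal the systematic curvature
# that the correlation coefficient alone never shows.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
print("max |residual| =", round(float(np.max(np.abs(residuals))), 1))
```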

These are just some potential areas where care is needed for accurate analyses. 


Emil W. Ciurczak has worked in the pharmaceutical industry since 1970 for companies that include Ciba-Geigy, Sandoz, Berlex, Merck, and Purdue Pharma, where he specialized in performing method development on most types of analytical equipment.
