Expert’s Opinion

Why Pharma Needs to Go Multivariate

Pharmaceutical operations of all kinds are simply too complex to rely on univariate thinking

I was just watching a health report on television about women who drink more than two diet sodas per day. The report was based on a study that followed thousands of women over 50 for several decades. There was an apparent increase in heart disease, ovarian cancer, and other conditions in the diet-soda-drinking group, in some cases upwards of 50%, compared with women who did not drink diet soda.

No mention was made of any other potentially contributing factors (weight, smoking, diet, economic status and access to health care, etc.). This approach of isolating a single cause and (apparent) effect has been used for many, many years on many, many studies. Should it be used in pharma?

This reminded me of a scene from a Batman movie, in which the Penguin, running for mayor, notes that, while he is always in the company of law-enforcement personnel, Batman is always seen with criminals. The joke, of course, was that the Penguin was a criminal being arrested while Batman was arresting criminals. Surprisingly, simple correlation coefficients are used at all levels of the pharmaceutical industry. [Keep in mind, as my statistics professor pointed out, that a better than 0.9 correlation exists between the rise and fall of the Mississippi River and the birth rate in Paris, France. How much can we trust simple one-to-one correlations?]
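
To make the Mississippi/Paris point concrete, here is a quick sketch in Python (the numbers are simulated for illustration, not the actual river or birth records): two quantities with no connection whatsoever, each simply drifting upward over time, correlate at well above 0.9.

```python
# Hypothetical illustration of a spurious correlation: two unrelated
# series that both trend upward over time. Simulated data only.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 1960)

# Two independent series, each with its own upward trend plus noise.
river_level = 10 + 0.05 * (years - 1900) + rng.normal(0, 0.3, years.size)
birth_rate = 20 + 0.10 * (years - 1900) + rng.normal(0, 0.6, years.size)

r = np.corrcoef(river_level, birth_rate)[0, 1]
print(f"Pearson r = {r:.3f}")  # well above 0.9, with no causal link at all
```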

What is the relevance of this to a pharmaceutical manufacturer? It’s extremely relevant, especially as we move to actual process control via on-line and in-line instruments (PAT). An HPLC analysis, for example, is a “univariate” analysis, meaning you only get one piece of data: in this case, the chemical content of a single dosage form. Since you need to destroy the tablet, you no longer know its dissolution profile, etc. It also goes without saying that HPLC is hardly capable of giving real-time data for process understanding or control. [If you include sampling, labeling, transport, logging the sample, preparing the mobile phase, conditioning the column, preparing standards, preparing the sample, filtering the sample, injecting the sample, assessing the results, and reporting the results, we could take a full day for an assay.]

To follow a process in real time, a fast, non-destructive analysis technique is needed. For solid dosage forms, this almost always means spectroscopy. The chief technologies used are near-infrared (NIR), Raman, and, to a lesser extent, light-induced fluorescence (LIF). All these techniques are applied to the complex mixture that is the dosage form. These methods “see” everything in the mix: API(s), excipients, particle size(s), tablet density, etc. As a consequence, the spectrum contains far more than the chemical information needed for an API assay. The API content is buried and is often only a small contribution to the tablet spectrum.
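
This is where multivariate calibration earns its keep. As a minimal sketch (simulated spectra and scikit-learn’s PLSRegression; the bands, levels, and noise figures are invented for illustration, not taken from any validated method), partial least squares can recover the API level even when excipient and baseline effects dominate the spectrum:

```python
# Sketch of multivariate calibration: PLS regression extracts the API
# signal from spectra that also carry excipient and physical (baseline/
# scatter) contributions. All data are simulated for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_tablets, n_wavelengths = 60, 200
wl = np.linspace(0, 1, n_wavelengths)

# Pure-component "spectra" (Gaussian bands) for the API and one excipient.
api_band = np.exp(-((wl - 0.3) ** 2) / 0.002)
excip_band = np.exp(-((wl - 0.7) ** 2) / 0.004)

api = rng.uniform(0.9, 1.1, n_tablets)       # API level, fraction of label
excip = rng.uniform(0.8, 1.2, n_tablets)     # excipient level
baseline = rng.uniform(0.0, 0.2, n_tablets)  # density/scatter offset

X = (np.outer(api, api_band) + np.outer(excip, excip_band)
     + baseline[:, None] + rng.normal(0, 0.01, (n_tablets, n_wavelengths)))

# Three latent variables: roughly one each for API, excipient, baseline.
pls = PLSRegression(n_components=3).fit(X, api)
rmse = np.sqrt(np.mean((pls.predict(X).ravel() - api) ** 2))
print(f"calibration RMSE: {rmse:.4f} (API on a 0.9-1.1 scale)")
```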

Thus, if you only wish to follow the API content as the dosage forms are produced, you also need to account for the other parameters. This is often far more work than an HPLC assay; you need to measure hardness, weight, and thickness, among other parameters. The first good news is that you can now glean the amount of API in milliseconds, not hours. This means you don’t have to make a few million tablets and then find that your values are high/low or Content Uniformity is terrible. Adjustments can’t be made after the product is finished…except to throw it away (or, if you’re extremely lucky, rework it).

Of all the difficulties encountered in developing a process analysis, the hardest to overcome is your own QA department, not the FDA or EMA. While the Agencies have accepted the concepts of ICH Q8, 9, 10, and 11 (they were, after all, part of ICH), the majority of QA departments (staffed with the people voted “most resistant to change” in their high school yearbooks) tend to drag their feet. This is because they have grown up in the “HPLC for everything” era, with all references to method validation planted firmly in the 1980s.

The most egregious concept is Range. The ICH Q2(R1) guidance for validation specifies a range of 80 to 120% of label claim for assays and 70 to 130% for Content Uniformity. These ranges are fine for HPLC methods, since they don’t actually require dosage forms in this range, merely that the LC method show “linearity” (I’ll get to that fiasco later) across it. The whole calibration is done with a series of synthetically prepared solutions injected into the HPLC system. Unfortunately, to calibrate the in-process method over the same range, actual tablets at these artificial levels need to be pressed.

This is where the uni- versus multivariate approach comes to the fore. Since most chemists and, apparently, all QA people are merely looking at the amount of API, this range requirement seems like a good idea. Unfortunately, real materials in a real setting don’t follow ICH rules. To illustrate why “common sense” is usually wrong in the multivariate world (much like quantum mechanics doesn’t seem logical in the macro-world), let me propose a thought problem (like one of my heroes, whose initials were A.E.).

Suppose you make a blend that is absolutely well-blended and does not separate during processing. The only way a tablet could have 70, 80, 90, 120, or 130% of label claim would be to… wait for it… make tablets at 70 to 130% of the target weight. How likely is it that this could happen in the real world? I imagine that even the most inexpensive tablet-weighing device could catch a tablet at 130% of its target weight, no? So, to make these synthetic tablets for calibration (merely to please QA), we need to vary the ratio of API(s) to excipients while allowing the tablet weights to remain the same (really?).
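
The arithmetic behind that thought experiment is short enough to put in code (the 250 mg tablet and 20% API loading are made-up numbers):

```python
# Thought experiment in code (hypothetical numbers): in a perfectly
# blended, non-segregating granulation, the API in any tablet is a fixed
# fraction of that tablet's weight, so % of label claim tracks % of
# target weight exactly.
target_weight_mg = 250.0   # hypothetical tablet weight
api_fraction = 0.20        # perfectly blended: 20% API everywhere
label_claim_mg = target_weight_mg * api_fraction  # 50 mg

for pct_of_target_weight in (70, 90, 100, 110, 130):
    weight_mg = target_weight_mg * pct_of_target_weight / 100
    api_mg = weight_mg * api_fraction
    print(f"{pct_of_target_weight:3d}% of target weight -> "
          f"{100 * api_mg / label_claim_mg:5.1f}% of label claim")
```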

One problem with that approach is that the APIs and excipients seldom have the same density; therefore, the same volume of granulation admitted to the punches (because the tablet press is set to allow the same volume, over and over) may well not have the same mass. And in the rare case where the masses are equal, there is probably little chance that the new mixture will compress in the same manner. Why does this matter? Remember, you are now looking at all the physical and chemical properties of the tablets. Well, compression differences immediately affect the density, hardness, and particle sizes of the ingredients. Throw in weight differences and we do not, in actuality, have calibration samples that represent the product. If we use a different tablet press, the differences are even greater (1, 2).

What was found in those studies was that accuracy and precision were not improved by merely extending the range. In almost every case investigated, the only statistic improved by extending the range was the correlation coefficient. That alone should show that R or R² are “feel good” numbers that may or may not have relevance in the validation of the method. A further discussion is included in references 1 and 2.
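
That effect is easy to reproduce with simulated numbers (a sketch only, not the data from references 1 and 2): keep the measurement error identical and simply widen the range, and the correlation coefficient looks far more impressive while the actual error does not budge.

```python
# Sketch of the range effect (simulated, not the data of refs 1 and 2):
# the same analytical error gives a much higher correlation coefficient
# over a wider calibration range, while the RMSE stays the same.
import numpy as np

rng = np.random.default_rng(7)
noise_sd = 1.5  # identical analytical error, in % of label claim

for lo, hi in ((95, 105), (70, 130)):
    true = rng.uniform(lo, hi, 100)             # reference values
    pred = true + rng.normal(0, noise_sd, 100)  # same error either way
    r = np.corrcoef(true, pred)[0, 1]
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    print(f"range {lo}-{hi}%:  R = {r:.3f},  RMSE = {rmse:.2f}%")
```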

What needs to be taken from this discussion is that traditional analytical methodology is not suited for process control. A lot of pharmaceutical professionals are fearful of things that are different, for a number of reasons. While just about every Agency and process engineer has been convinced of the value of real-time analysis and control, the (traditionally) most conservative group still needs to be dragged (kicking and screaming, apparently) into the 21st century. Now, repeat after me: “Change is good, change is good, change is good…”

References
1. E.W. Ciurczak, G.E. Ritchie, R. Roller, H. Mark, C. Tso, and S. MacDonald, “Validation of a NIR Transmission Spectroscopic Procedure, Part A: Validation Protocols,” J. Pharm. Biomed. Anal. 28 (2), 251-260 (2002).
2. E.W. Ciurczak, G.E. Ritchie, R. Roller, H. Mark, C. Tso, and S. MacDonald, “Validation of a NIR Transmission Spectroscopic Procedure, Part B: Application to Alternate Content Uniformity and Release Assay Methods for Pharmaceutical Dosage Forms,” J. Pharm. Biomed. Anal. 29 (1-2), 159-171 (2002).