Analyze This

We Need to Get Back to Basics

In life, and in the lab, the calculator/computer is a blessing and a curse!


By: Emil W. Ciurczak

Independent Pharmaceuticals Professional

During one of the four Nor’easters we just experienced in three weeks’ time, I was without power for only 1.5 hours. Fortunately, I already had a nice fire in the fireplace, a fully charged phone for emergencies, my standard land line (there is always power there), battery lanterns, and some good books. When I told my grandsons about it, they found the thought of NOT having their “screens” terrifying. This got me thinking about how far we have drifted from the basics of chemistry and math.

Occasionally, my wife will ask me to do some “numbers” in my head. While she is good (tutored math in college), she is a child of the electronic age and has to use a calculator. Being a dinosaur, I was required to memorize my “times tables” in grammar school and, truth be told, my dad taught me to do long division by the time I was four. Thus, I can do a reasonable job with numbers sans machines.

The calculator/computer is simultaneously a blessing and a curse. It speeds any computation, right or wrong. The acronym GIGO (garbage in, garbage out) is ever so true: whatever you place into a calculator, it will crunch, no matter how bogus. When teaching analytical courses, I insisted that the lab report show one entire calculation in long form; the remainder could be done via calculator. The reason was simple: one merely taps digits into a calculator, without thought of any units. Writing one equation longhand ensures that the student (chemist, process engineer, etc.) addresses the concept of proper units at least once: keeping track of whether a quantity is in mg or g, mm or m, and adjusting the units along the way (e.g., 1000 mg/g), so that the answer emerges with the proper dimension. Hence the term “dimensional analysis” for the art of making the units come out correct.
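To make the idea concrete, here is a minimal sketch (in Python, with made-up numbers that are not from any particular assay) of carrying the units through a calculation so the answer emerges in the intended dimension:

```python
# Dimensional analysis sketch: made-up numbers, purely illustrative.
sample_mass_g = 0.5000           # g of composite powder weighed out
dilution_volume_mL = 100.0       # mL volumetric flask
measured_conc_mg_per_mL = 0.48   # mg/mL reported by the instrument
tablet_mass_g = 0.2000           # g, average tablet weight

# (mg/mL) * mL = mg of API recovered from the weighed sample
api_in_sample_mg = measured_conc_mg_per_mL * dilution_volume_mL

# (mg API / g powder) * (g powder / tablet) = mg API per tablet
api_per_tablet_mg = (api_in_sample_mg / sample_mass_g) * tablet_mass_g

print(f"{api_per_tablet_mg:.1f} mg API per tablet")
```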

Another “problem” seems to be understanding the implications of various words: singular versus plural (datum and data, spectrum and spectra), or whether a word names a technique or a calculation (reflection vs. reflectance; transmission vs. transmittance). More than mere grammatical slips, these are errors of manipulation and imply a lack of understanding. With the industry using more and more math and statistics (PAT/QbD requires them, as do the multivariate analyses behind NIR, Raman, etc.), using terms and manipulations correctly becomes more important than ever.

Before real-time calculators/computers started doing the math, more time was taken with our analyses. No one wanted to repeat an hour-long sample prep, so more care was taken. The results were hand-recorded in a lab book and calculated afterwards; any math errors could then be rooted out after the fact. With rapid, in-process measurements, we do not have the luxury of double-checking our readings or calculations in real time. This puts more emphasis on generating proper equations and keeping the equipment calibrated. Unfortunately, like the students who simply type numbers into the calculator, many people using modern algorithms for NIR, Raman, etc. are following rote ideas left over from lab-based analyses.

One such instance is Linearity. To quote the FDA NIR Guidance, “To evaluate linearity, analytical results for the external validation samples obtained using the NIR analytical procedure should be compared to the results obtained using the reference analytical procedure. When the values are plotted over a suitable range, a correlation coefficient close to 1.0 and a y-intercept close to 0 indicate acceptable linearity.”

Unfortunately, this is not as useful for NIRS; it is a carryover from HPLC protocols, where samples are made with analytical balances and volumetric flasks. The solutions are run in standard cuvettes and all parts of Beer’s law apply: constant, known pathlength; no interaction between analyte and matrix (solvent); no absorbance of the light by the matrix; etc. The graph of concentration versus absorbance (usually at the point of maximum absorbance, lambda max) is virtually assured of being linear, with a correlation coefficient of ~1.0, and of passing through zero (the origin). In the real, multivariate world, things are slightly different.
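For reference, the relationship being leaned on is Beer’s law, A = εbc; a small sketch (illustrative numbers only, not from the article) shows how concentration falls straight out of a measured absorbance once the pathlength and absorptivity are fixed:

```python
# Beer's law: A = epsilon * b * c, so c = A / (epsilon * b).
# Illustrative numbers only.
epsilon_L_per_mol_cm = 1.2e4   # molar absorptivity at lambda max, L/(mol*cm)
pathlength_cm = 1.0            # standard cuvette
absorbance = 0.42              # measured at lambda max

conc_mol_per_L = absorbance / (epsilon_L_per_mol_cm * pathlength_cm)
print(f"c = {conc_mol_per_L:.2e} mol/L")
```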

The “linearity” graph is based on two different methods for its axes: the NIR (or Raman) method (Y) and the reference (HPLC) method (X), each with its own sources of error. The balance and flasks are several steps removed from the HPLC value and were never present in the NIR method at all. To make matters worse, the correlation coefficient was never meant to define linearity, merely the co-occurrence of two sets of numbers (correlation is not necessarily causation).

Largely, statistical methods are meant to disprove something. A Q-test shows that an analytical result does not belong to a group; a paired t-test tells us that the results from one method are not equivalent to those of another; and so on. R or R² only tells us there is a coincidence/correlation between two sets of numbers. Plus, one can improve a low R² value merely by extending the API range within the samples, increasing the correlation without improving the accuracy (SEP). The fact that the accuracy of the method does not improve with an improved R² should be enough to dispel its importance.
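A quick simulation (my illustration, with synthetic numbers, not data from any real method) makes the point: widening the concentration range inflates R² while the prediction error (SEP) stays essentially the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def r2_and_sep(half_range):
    # reference values spread over +/- half_range (% of target)
    actual = np.linspace(100 - half_range, 100 + half_range, 40)
    # same ~1.5% prediction error regardless of range
    predicted = actual + rng.normal(0.0, 1.5, actual.size)
    sep = (predicted - actual).std(ddof=1)
    r2 = np.corrcoef(actual, predicted)[0, 1] ** 2
    return r2, sep

for half_range in (2, 10, 30):
    r2, sep = r2_and_sep(half_range)
    print(f"range +/-{half_range:>2}%:  R^2 = {r2:.3f}   SEP = {sep:.2f}")
```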

What I usually recommend is the Durbin-Watson statistic.1 Basically, the residuals (the signed differences between found and actual values) are plotted vs. the “actual” values and a best-fit line is drawn through the points. The sum of the squared differences between adjoining residuals is the numerator; the sum of the squared residuals (the distances of the points from the X-axis) is the denominator. A value near 2.0 means the residuals show no systematic pattern, so the data cannot be shown to be non-linear; a value significantly different from 2.0 indicates a pattern in the residuals, i.e., non-linearity. As with most statistics, the word “linear” does not appear. Better still, it works with very limited ranges (production samples).
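Here is a minimal sketch of the calculation (my illustration, not the author’s code): fit a line through the points, order the residuals by the reference value, and take the ratio of the summed squared successive differences to the summed squared residuals:

```python
import numpy as np

def durbin_watson(reference, predicted):
    # residuals from the least-squares line through (reference, predicted)
    slope, intercept = np.polyfit(reference, predicted, 1)
    residuals = predicted - (slope * reference + intercept)
    # order the residuals by the reference value before differencing
    residuals = residuals[np.argsort(reference)]
    # DW = sum((e_i - e_{i-1})^2) / sum(e_i^2)
    return np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)

# Illustrative data: a mildly curved response over a narrow production range
reference = np.linspace(95, 105, 25)
predicted = reference + 0.05 * (reference - 100) ** 2   # curvature, no noise
print(f"DW = {durbin_watson(reference, predicted):.2f}")  # far from 2.0: non-linear
```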

For Detection and Quantitation Limits: “If the NIR analytical procedure will be used near the limit of its detection capability, LOQ [limit of quantification] or LOD [limit of detection] should be determined. Examples include the analysis of minor components or drying end-point detection.” Rarely does anyone attempt to perform trace analysis with NIRS. For 35 years, I have known that there is a practical limit of ~1% in a solid/powder sample, and the API level in most dosage forms is well above that, so this requirement seems superfluous and an extra burden for analysts.
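For context, where an LOD or LOQ estimate is genuinely required, ICH Q2(R1) allows them to be estimated from the calibration slope and the standard deviation of the response (LOD ≈ 3.3σ/S, LOQ ≈ 10σ/S); a small sketch with illustrative numbers:

```python
# ICH Q2(R1) signal-based estimates: LOD ~ 3.3*sigma/S, LOQ ~ 10*sigma/S,
# where sigma is the standard deviation of the response (e.g., of blanks or
# of regression residuals) and S is the calibration slope.
# Illustrative numbers only.
sigma = 0.002   # standard deviation of the response
S = 0.15        # calibration slope (response units per % API)

lod = 3.3 * sigma / S
loq = 10.0 * sigma / S
print(f"LOD ~ {lod:.3f}% API, LOQ ~ {loq:.3f}% API")
```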

Another parameter is the Range. The Guidance still reflects the HPLC mindset: “As recommended in ICH Q2(R1), the appropriate range for validation studies should depend on the attribute being evaluated.” This translates into 80-120% of label claim for assay and 70-130% for content uniformity. That poses no problem for chemical extractions (titration, HPLC), since the “samples” are often merely the API and excipients weighed into a flask, solvent added, and the API then “recovered.” Actual dosage forms spanning those ranges are seldom made.

When assessing a process, making those actual tablets (or capsules) may be problematic. In one case, where we performed transmission NIR, we were only able to produce tablets at 80-120% of label claim, because making them outside that range became difficult (the formulation was 50% drug). In addition, their “process signature” (hardness, density, weight) was different, as was their chemical make-up. This required chemometric gymnastics to arrive at a working equation. The extra effort is due to what may be termed the “process signature”: the type of tablet press, the speed of compression, etc., all affect the physical structure of the tablet.

Synthetic samples not only have different chemical constituents but also different physical properties: tablet weight, density, hardness, friability. All of these contribute to the “signature.” Thus, synthetic tablets, likely made in smaller batches on different presses to satisfy the range requirements, will probably not be identical to the production lots, forcing chemometric manipulation.

Returning to the theme at the beginning, many other steps in our analyses are faulty or even wrong, while the staff becomes more and more interested in learning the software and hardware in lieu of “merely” studying the wet-chemistry part of the analysis. Some examples are as simple as de-gassing HPLC solvents. One common technique is to pull a vacuum over the flask while stirring the liquid. That is fine for pure water, since losing volume does not change its concentration (still 55.5 M); but pulling a vacuum over a buffer or salt solution will concentrate the salts. With a water/solvent mixture, the vacuum will preferentially pull off the organic portion, changing the ratio. Many SOPs do not even specify the time of vacuum treatment, leaving that decision to each technician.

In the same vein, I have observed analysts make an organic solvent/buffer mix by first mixing the water and solvent, then adjusting the pH! Any college worth its name teaches that pH, by definition, is ONLY valid in an aqueous solution and, if we want to be pedantic, only in a dilute one. Adjusting the pH of an organic/water mixture is meaningless. I’m sure many an assay has needed to be adjusted for “no apparent reason.”

In short, what I am asking is that we not forget all we learned as students the moment we get our first job. Please question everything, including “who wrote the Guidance?”

References

  1. https://en.wikipedia.org/wiki/Durbin%E2%80%93Watson_statistic


Emil W. Ciurczak
DoraMaxx Consulting

Emil W. Ciurczak has worked in the pharmaceutical industry since 1970 for companies that include Ciba-Geigy, Sandoz, Berlex, Merck, and Purdue Pharma, where he specialized in performing method development on most types of analytical equipment. In 1983, he introduced NIR spectroscopy to pharmaceutical applications, and is generally credited as one of the first to use process analytical technologies (PAT) in drug manufacturing and development.
