Emil W. Ciurczak, DoraMaxx Consulting (03.09.16)
In my last column, I addressed the importance of properly transferring analytical methods and hardware from initiator labs to CMO or secondary labs. While the methodology itself is a challenge, potential interference from supply chain differences is equally problematic. Even if the hardware, software, and SOPs transfer successfully, are they measuring what you think they are measuring? I once attended a course in time management, where we were taught the difference between efficient and effective: efficient is performing the job properly; effective is performing the proper job.
Let’s examine a very simple, specific example of why proper tests/chemistry matter. Assume a manufacturing site with a single supplier of the most commonly used excipient: lactose. Commonly used tests (e.g., USP, EP, and JP) are designed to verify that the material is the chemical entity “lactose” and has no metal or other toxic interferences, which should be sufficient, if the company has a good history with that one supplier. However, when expanding to multiple suppliers and multiple sites, these tests may have trouble categorizing the true identity of the type of lactose purchased—anhydrous, hydrous, free flowing, spray-dried, and freeze-dried—let alone the “processability” of the material.
Typical compendial tests include polarimetry to demonstrate the optical rotation of the material (although, as the test is written, there is room for error); a sieve analysis to show that "less than 1% is retained on a 100 mesh screen"; heating with a copper salt, where a red color means the material is a "reducing sugar"; and a quick mid-range infrared spectrum. None of these shows exactly which grade you have, nor how well the material will behave in the manufacturing process.
One specific problem is that the USP allows up to four percent moisture in anhydrous lactose. So, depending on the sampling technique, the time of year (humid or not), and the storage conditions, a given bag of lactose can either pass or fail as "anhydrous." Indeed, the tests themselves may contain large enough errors to call the results into question: a loss-on-drying result of 3.9% ± 0.2% water makes the decision to pass or fail a judgment call. And, if one of the major excipients is allowed up to a four percent spread (pure anhydrous lactose versus 96% lactose / 4% water), are the operating instructions specific enough? Do they even include calculations to account for the excess moisture that will be lost in the subsequent drying step? Has anyone in the company even considered that their major excipient varies from month to month?
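The borderline pass/fail situation above can be sketched as a small check. This is a hypothetical illustration, not a compendial calculation: the function name, the treatment of the uncertainty interval, and the extra example values are my own.

```python
# Hypothetical sketch: a loss-on-drying result checked against the 4%
# "anhydrous" limit, with the method's own uncertainty taken into account.

SPEC_LIMIT = 4.0  # % moisture allowed (per the column's USP example)

def lod_verdict(result_pct: float, uncertainty_pct: float) -> str:
    """Classify a loss-on-drying result against the limit.

    If the uncertainty interval straddles the limit, neither a clean
    pass nor a clean fail is defensible -- it becomes a judgment call.
    """
    low = result_pct - uncertainty_pct
    high = result_pct + uncertainty_pct
    if high <= SPEC_LIMIT:
        return "pass"
    if low > SPEC_LIMIT:
        return "fail"
    return "judgment call"

print(lod_verdict(3.9, 0.2))  # the column's example: 3.9% ± 0.2%
```

With the column's numbers, the interval runs from 3.7% to 4.1%, straddling the limit, which is exactly why the result cannot be called a clean pass.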
These are hardly moot questions. One simple example, a true story, will suffice to show how important specific, detailed instructions are to a successful analysis. I worked at a company where the SOP for an HPLC method simply stated "prepare a 50% methanol/water mobile phase." It went on to cover filtering and degassing, of course, but that first instruction was, unfortunately, ambiguous. As a consequence, the same lots of drug gave variable results. When these problems occurred, the operators simply blamed the columns, pumps, injectors, and so on. So, I investigated the problem in depth.
After observing a number of techs performing the method, I noticed three distinct interpretations of the supposedly straightforward prep instructions: 1) Some techs measured 500 mL of water into a one-liter volumetric flask and brought the mixture to volume with methanol; 2) Some measured 500 mL of methanol into the liter flask and brought it to volume with water; 3) A few (the better-trained techs) measured 500 mL of water into the flask, then measured 500 mL of methanol into the same flask, then mixed.
Thus, there were three distinctly different solutions that could, under various circumstances, be labeled "50% methanol." It does not take a physical chemist to know that a water-methanol mixture both warms and shrinks on mixing. Bringing it "to volume" introduces roughly a five percent error, and the direction of the error depends on which solvent you use to adjust the volume. This, in turn, gives very different HPLC retention times and separations, all of which were blamed on the pumps, columns, or injectors, never on the people making the solutions. A simple, detailed SOP was written (spoiler alert: procedure #3) so that all techs followed the same steps: problem fixed.
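The arithmetic behind the three preparations can be sketched as follows. This is an illustration only: the 3.7% volume contraction for a roughly 50:50 methanol/water mix is an assumed round figure, not a measured value for any particular lab, and the helper function is my own.

```python
# Hypothetical sketch of the three "50% methanol" preparations.
# Assumes ~3.7% volume contraction on mixing methanol and water near
# 50:50 (an illustrative figure, treated here as constant).

CONTRACTION = 0.037  # fractional volume lost on mixing (assumed)

def methanol_volume_fraction(v_meoh: float, v_water: float) -> float:
    """Methanol fraction by the volumes measured before mixing."""
    return v_meoh / (v_meoh + v_water)

# Prep 3: 500 mL water + 500 mL methanol, then mix -> truly 50:50.
prep3 = methanol_volume_fraction(500, 500)  # 0.50 exactly

# Prep 1: 500 mL water, brought to 1000 mL with methanol. Because the
# mixture shrinks, more than 500 mL of methanol is needed:
#   (500 + v) * (1 - CONTRACTION) ≈ 1000  ->  v ≈ 1000/(1 - CONTRACTION) - 500
v_meoh_prep1 = 1000 / (1 - CONTRACTION) - 500  # ~538 mL of methanol
prep1 = methanol_volume_fraction(v_meoh_prep1, 500)

# Prep 2 is the mirror image: the excess solvent is water instead.
v_water_prep2 = 1000 / (1 - CONTRACTION) - 500
prep2 = methanol_volume_fraction(500, v_water_prep2)

print(f"prep 1: {prep1:.3f}, prep 2: {prep2:.3f}, prep 3: {prep3:.3f}")
```

Under this assumption, prep 1 ends up methanol-rich and prep 2 methanol-poor, with the two drifting a few percent to either side of the true 50:50 of prep 3, which is more than enough to shift HPLC retention times.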
The actual physical plant of an overseas lab can also play a large part. A number of years back, we had developed several gas chromatographic methods on what was, at that time, a state-of-the-art instrument equipped with an on-board controller microchip. The plant in Germany found it unacceptable and wanted to work with a different vendor. While this was not as critical as an NIR or HPLC unit, we still wanted to know what the problems were.
After contacting our sales rep at the instrument company, we determined what had happened: the German plant was so large that it generated its own power to supplement the city-supplied electricity. It turned out that this local grid had a "floating" ground while the instrument was wired for a conventional ground, meaning the instrument wasn't actually grounded. Every time one of the larger production units was switched on or off, there was a surge that wiped the instrument's memory. This was before uninterruptible power supplies were ubiquitous.
Here, too, being a physical chemist helped. We had them build a simple Faraday cage around the instrument, far easier than rewiring the whole building, and, voilà, the instrument performed as the manufacturer claimed; they purchased four more immediately.
What is needed is, quite simply, the same due diligence used when inspecting a potential supplier's facility, assuring the Compliance or QA department that cGMPs are being observed and followed. When a method is transferred to a second facility, whether wholly owned or a CMO, the initiator needs to visit the lab (or the process, if an in-process PAT monitor is used) and check every detail, including how the operator will: 1) install the instrument, including performing an IQ/OQ; 2) use the instrument on actual samples, including any sample prep, solvent preparation, and calculations; 3) calibrate/recalibrate the instrument; 4) maintain the equipment; 5) repair the equipment; 6) replace the equipment; and 7) add a second, third, etc., unit as needed.
These are simple yet important differences in culture, whether between locations within one country or across countries. At that point, the specifics of an assay pale beside the question of whether that assay actually measures anything significant to making the dosage form. It behooves us, then, to design tests that measure the correct properties, the critical process parameters per ICH Q9 and Q10, at the "home" manufacturing site.
As companies embrace PAT, then QbD, then continuous manufacturing, they have no choice but to learn more about the properties of both the raw materials and the intermediates in a process stream. We need to understand the chemical properties (moisture, residual solvents) and the physical properties (particle size distribution, polymorphic form, crystallinity) in order to design and apply the design space for a particular product formulation. The more we understand about which parameters we need to measure and control, the better we can design the tests we use to accept or reject an API or intermediate.
Once we gain a handle on which measurements we need, we can design both test procedures and proper instrumentation. As mentioned in my last column, analytical instrument companies are striving to manufacture monitors that are reproducible and, hopefully, easily transferred from site to site with minimal re-validation effort. Even before low-cost, nearly disposable units become a reality, there are existing units that can be calibrated at the originator site and transferred to the remote (second, third, etc.) or CMO site.
The obvious conclusion to a story where the ingredients change constantly, yet we are expected to generate the same end product time after time, is to have a controlled process stream that adapts to that variability. Clearly, this is what the FDA had in mind, initially with the PAT Guidance and later with the stream of FDA, EMA, and ICH documents alluding to process-controlled products.
Step one is simply to begin to understand what effect the size, shape, polymorphic form, surface "stickiness," and the like of a raw material or intermediate have on the particular process step being studied. Since raw materials are, by nature, outside our total control (we can only specify purity and identity), we need to: 1) Measure, measure, measure: until we know which parameters are critical to our process, we need to look at everything; 2) Know which monitor/test is meaningful and which can be discarded; 3) Actually monitor our processes and begin to take control through a well-thought-out QbD program.
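The "measure everything, then keep what matters" idea can be sketched as a simple screening step: measure many candidate attributes across lots, then rank them by how strongly they track a downstream quality attribute. Everything below is invented for illustration: the attribute names, the per-lot numbers, and the use of a plain Pearson correlation as the ranking criterion (a real QbD program would use designed experiments and far more rigorous statistics).

```python
# Hypothetical sketch: rank candidate raw-material attributes by how
# strongly they correlate with a downstream quality attribute across lots.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-lot measurements: attribute -> values over six lots.
attributes = {
    "moisture_pct":       [3.1, 3.9, 2.8, 3.5, 4.0, 3.0],
    "median_particle_um": [95, 110, 90, 105, 115, 92],
    "bulk_density":       [0.58, 0.59, 0.58, 0.60, 0.57, 0.59],
}
dissolution_pct = [82, 71, 85, 76, 69, 84]  # invented downstream CQA

# Strongest association first; weakly correlated attributes sink.
ranked = sorted(attributes,
                key=lambda a: -abs(pearson(attributes[a], dissolution_pct)))
for name in ranked:
    r = pearson(attributes[name], dissolution_pct)
    print(f"{name}: r = {r:+.2f}")
```

In this made-up data set, moisture and particle size track dissolution closely while bulk density barely moves with it, the kind of signal that tells you which monitors to keep and which to discard.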
In short, it is not enough merely to transfer methods worked out on our own raw materials and intermediates with our own hardware. We need to design methods that tell us whether any raw materials and intermediates, run on any hardware, can give us the desired product. Then, and only then, will we feel confident that the tablet from "home base" has the same efficacy as those from every CMO or company location worldwide.
Emil W. Ciurczak
DoraMaxx Consulting
Emil W. Ciurczak has worked in the pharmaceutical industry since 1970 for companies that include Ciba-Geigy, Sandoz, Berlex, Merck, and Purdue Pharma, where he specialized in performing method development on most types of analytical equipment. In 1983, he introduced NIR spectroscopy to pharmaceutical applications, and is generally credited as one of the first to use process analytical technologies (PAT) in drug manufacturing and development.