Ben Locwin, Healthcare Science Advisors, 05.06.15
It used to be that finding evidence of toxins or contamination was impossible: not only were the toxin molecules themselves invisible, but the miasma theory of disease all but guaranteed that people weren’t looking for the right causes of microbial contamination. This made life a bit more dangerous; living on the edge, as it were, might mean nothing more than taking a sip of a liquid left in a cup. This invisibility of danger is also what gave the literary era of Agatha Christie one of the most seductive of all crime devices: poisoning. The “Queen of Crime” herself frequently resorted to it because it gave her plots an invisible cause of fatal consequence, and what a way to keep readers in a state of high anxiety.
Successive paradigm shifts in chemistry eventually allowed rapid and relatively easy identification of toxins, and Pasteur’s and Robert Koch’s refinement of the germ theory of disease pointed us toward the right causative agents, the microscopic organisms responsible for contamination, and toward the universal aseptic precautions derived from it. Because of this remarkable series of successes, it’s easy to draw the facile conclusion that ‘more precision is always better’ and that ‘higher-resolution measurements should always be pursued.’ This mindset comes straight from the conditioning we’ve all experienced in scientific pursuits: more quantifiable measures are regarded as ‘better,’ and imagery of smaller and smaller features of the natural world reliably grabs an audience’s interest. While I’ve been similarly seduced by measurement and technology, finer measurement doesn’t always represent a better state of affairs, especially when the limits of detection (LOD) or limits of quantification (LOQ) we’re dealing with are so small that nothing in our daily human experience offers an analogy for what the results actually mean.
The Very Large to the Very Small
What follows is a thought experiment, the kind of ‘Gedankenexperiment’ that Einstein and his contemporaries often used to mind-test their hypotheses as they worked through them, frequently without any equipment that could even show whether those hypotheses were correct. When we’re taught about the solar system, we imagine it and its eight planets (no longer nine, at least for the moment, since Pluto was demoted to dwarf-planet status).
Simply from reading this, you have unavoidably drawn up a mental abstraction of the solar system in your mind’s eye. It may be a ‘top-down’ view showing the concentric orbits of the planets with the sun in the middle, or some perspective view absorbed from a book or poster. Either way, it seems as though those planets and orbits can be corralled by the mind. They simply cannot.
For example, Pluto is so far from our sun and us that light takes roughly six hours of constant transit time to get there. There is no way to fathom this distance from our everyday experience of light, how fast it moves, or the distances we evolved to perceive in terms of walking or running. Perhaps you could envision a half-mile walk, and expect it to take about eight minutes of time and effort to reach your destination. At roughly that pace, 4 miles per hour, it would take you (setting aside the near-perfect vacuum, the absence of anything to walk on, and so forth) about 1.2 billion hours to walk to Pluto. That’s on the order of 135,000 years. And that’s still well within our solar system.
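The arithmetic behind that figure is simple division, and it can be sketched in a few lines. The distance used here is an assumption, roughly 4.7 billion miles, near Pluto’s aphelion; at Pluto’s closest approach the numbers would be about 40% smaller.

```python
# Sketch of the walk-to-Pluto arithmetic. The distance is an assumption
# (~4.7 billion miles, near aphelion); the pace is the text's 4 mph.
distance_miles = 4.7e9      # approximate Earth-to-Pluto distance (assumption)
walking_speed_mph = 4.0     # brisk walking pace

hours = distance_miles / walking_speed_mph
years = hours / (24 * 365.25)

print(f"{hours:.3g} hours, or roughly {years:,.0f} years")
```

The point of running the numbers isn’t precision; it’s that any plausible distance yields a duration far beyond anything human experience can calibrate.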
Let’s shift to the very small now. Just about at the limit of unaided human vision is the human egg cell, from which an embryo forms. Smaller still are human cells of other types, resolvable only with microscopy. Resolving individual bacteria or viruses requires high-resolution microscopes, ultimately electron microscopy, which uses electrons to provide the resolving power and approaches the fundamental limits of imaging. Bacteria are about 1 micrometer across (range ~0.5–5 μm). One estimate puts the number of bacteria in a gram of soil at about one billion;1 other estimates range from one hundred thousand to one billion. The number of organisms in a few grams of soil can thus rival the number of people on Earth. Viruses are about ten times smaller, at around 0.1 μm (range ~20–400 nm). And antibodies are about ten times smaller still.
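That ladder of “about ten times smaller” is easier to absorb as ratios against the one object on it we can actually see. The sizes below are order-of-magnitude figures taken from the paragraph above, not precise measurements.

```python
# Rough size ladder from the text; each value is an order-of-magnitude
# figure in meters, not a precise measurement.
sizes_m = {
    "human egg cell": 100e-6,  # ~0.1 mm, near the limit of unaided vision
    "bacterium":      1e-6,    # ~1 micrometer
    "virus":          100e-9,  # ~0.1 micrometer
    "antibody":       10e-9,   # ~10 nanometers
}

egg = sizes_m["human egg cell"]
for name, size in sizes_m.items():
    ratio = egg / size
    print(f"{name:>15}: {size:.0e} m  ({ratio:,.0f}x smaller than the egg cell)")
```

An antibody is some ten thousand times smaller than the smallest thing we can see unaided, which is exactly the regime where intuition stops helping.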
LOD and LOQ Are Not Necessarily Practical in Reality
The singular thesis here is that just because we can measure something doesn’t mean we have any grasp of what the dimensions of those units actually mean. As analytical equipment grows more and more precise, and GC-MS or ICP can measure parts per billion, parts per trillion, or even parts per quadrillion, we obtain results that carry the alert-inducing flair hard, concrete numbers seem to generate. Femtogram sensitivity has been achieved. You may see a number, perhaps 64, and think, “There’s a result there.” That’s how the internal dialogue goes, and the units of measure, with their conveyor belt of zeroes after the decimal point, fail to bring us back to the pragmatic reality that this may be 64 ppb, ppt, ppq, or less. Specification limits can act as an arbiter of what should matter, but what about cases where there is no specification for what has been measured?
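To make “64 ppt” concrete, it helps to convert a parts-per-X reading into an absolute mass. The sketch below assumes mass/mass units and a 1-gram sample, both assumptions for illustration.

```python
# Converting a parts-per-X reading into an absolute mass of analyte.
# Assumes mass/mass units and a 1-gram sample (illustrative assumptions).
PARTS = {"ppm": 1e6, "ppb": 1e9, "ppt": 1e12, "ppq": 1e15}

def analyte_mass_g(reading, unit, sample_mass_g=1.0):
    """Mass of analyte implied by a reading like 64 ppt in a 1 g sample."""
    return reading * sample_mass_g / PARTS[unit]

for unit in ("ppb", "ppt", "ppq"):
    print(f"64 {unit} in a 1 g sample = {analyte_mass_g(64, unit):.1e} g")
```

A reading of 64 ppq in a gram of sample works out to 64 femtograms of analyte, which is precisely the kind of quantity the instrument can report but no human intuition can weigh.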
The Dose Makes the Poison
These words are attributed to Paracelsus, circa 1530, and accurately reflect the toxicological reality that anything can be lethal at a sufficiently high dose. The chart I put together here lists the lethal dose (LD50) for a few common compounds, based on oral ingestion by rats and scaled to a 65 kg body mass.3
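The scaling behind a chart like that is a straightforward multiplication. The LD50 values below are approximate oral-rat figures from the toxicology literature, used here only for illustration, and linear mg/kg scaling across species is itself a simplification.

```python
# Scaling oral-rat LD50 values (mg per kg body mass) to a 65 kg person.
# The LD50 figures are approximate literature values for illustration;
# linear cross-species scaling is a simplification, not a prediction.
ld50_mg_per_kg = {
    "caffeine":   192,    # approximate oral-rat LD50
    "table salt": 3000,
    "ethanol":    7060,
}

body_mass_kg = 65
for compound, ld50 in ld50_mg_per_kg.items():
    grams = ld50 * body_mass_kg / 1000  # total dose in grams
    print(f"{compound:>10}: ~{grams:,.0f} g at {body_mass_kg} kg")
```

Even ordinary table salt comes out to a lethal dose on the order of a couple hundred grams, which is the whole point: toxicity is a question of dose, not of mere presence.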
We are all breathing trace mercury in the outside air (estimated at about 2.5 ng/m3),4 and inhaling and swallowing innumerable organisms with each breath and bite of food. In our zeal to measure everything down to whatever limit we can, we risk missing the point of why measurement is important.
There’s a reason certain hospitals and clinics use particular laboratories to assess tissue biopsies: some analytical sites are risk-averse, and some are more risk-tolerant. The problem with unchecked risk aversion is that erring on the side of finding things swings the balance toward false positives. Telling thousands of people they have cancer when they actually don’t has led to unnecessary follow-up biopsies, surgeries, soft-tissue scans, suicides, and untold fear and psychological trauma. Deppen et al.5 showed that of 25,362 patients who had surgery for presumed lung cancer, 2,312 (9.1%) turned out to have benign disease. And the risks of that surgery aren’t benign: the in-hospital mortality rate was 2.3%. The precision to measure ever more finely isn’t the whole answer.
Similarly, chasing down investigatory paths for a production QC result that sits within these orders of magnitude of irrelevance misses the entire point of why such process inspections are in place. It swings the pendulum in the wrong direction: to entertain the idea that a particular test result ‘should be zero,’ without any knowledge of what it could be in practice, is simply unscientific. This is why toxicology has terms such as acceptable risk. And remember, whenever treatments are used with patients, the prescribing physician strikes a risk/benefit balance, because no therapy is free of side effects or adverse events. The question is: how tolerable are the risks relative to the benefits the majority of patients will experience?
References
1. Schloss, P.D., Handelsman, J. (2006). Toward a census of bacteria in soil. PLoS Computational Biology, 2(7).
2. Thompson, K. (2012). Understanding the physiology of healthcare pathogens for environmental disinfection. Infection Control Today. http://www.infectioncontroltoday.com/articles/2012/02/understanding-the-physiology-of-healthcare-pathogens-for-environmental-disinfection.aspx
3. Trautmann, N.M. (2001). Assessing Toxic Risk, pp. 3-12. http://ei.cornell.edu/teacher/pdf/ATR/ATR_Chapter1_X.pdf
4. EMEP (1999). Monitoring and modelling of lead, cadmium and mercury transboundary transport in the atmosphere of Europe. Joint report of EMEP Centres MSC-E and CCC. EMEP Report 3/99, July 1999.
5. Deppen, S.A. et al. (2013). Benign disease prevalence after surgical lung resection. American Association for Cancer Research Annual Meeting, Washington, DC, April 10, 2013. http://www.medpagetoday.com/MeetingCoverage/AACR/38387
Ben Locwin
Healthcare Science Advisors
Ben Locwin, PhD, MBA, MS writes the Clinically Speaking column for Contract Pharma and is an author of a wide variety of scientific articles for books and magazines, as well as an acclaimed speaker. He also provides advisement to many organizations and boards for a range of healthcare, clinical, and patient concerns.