Lowe Down

Beware of Pharma’s Perverse Incentives

What cannot be measured cannot be managed, but let’s measure the right things, correctly, or we’ll only encourage the counterproductive.

By: Derek Lowe

Contributing Editor

Scientific R&D is one of the biggest things that the U.S. and other industrialized countries have going for them. The engines of the economy in the old days were supplies of natural resources, cheap labor, the opening of new land, or new ways of moving goods around (think canals and railroads). Although these are still important, as the 20th century went on, an ever-larger part of the picture became the things that came out of the labs and workshops.

But just because the whole enterprise has been as successful as it has doesn’t mean that there aren’t some parts of it with real problems. Scientific progress seems like such a built-in feature of the modern world (because it made the modern world) that it’s easy to forget that it’s a very recent development in human history.

There’s nothing that says that we’ve figured out the best way to keep it going, either. In fact, there are several steps with perverse incentives (or outright moral hazards) that have been widely recognized as trouble spots, but have proven difficult to do anything about.

I define a perverse incentive as a situation in which something counterproductive is actually encouraged. These incentives can apply to both academia and industry, albeit in different ways. Unfortunately, a good number of them bear on drug discovery and development.

For example, academic research has the distortions that come with the funding process. For some time now, getting a grant (or especially getting a grant renewed) has meant publishing in prestigious journals. It’s understandable: You have to show that you’ve accomplished something, and you have to show that others think that it was worthwhile. What could demonstrate that more clearly than a high-profile publication? Better yet, a whole string of them. But that’s led to an arms race in scientific publishing, with the high-end publishers fighting to hold on to their positions (and thus making the competition to get into their pages ever more cut-throat), as all sorts of other operators jump into the game. There are, I think it’s safe to say, an insane number of scientific journals being published, and a significant driver of that expansion has been the need for people to publish whatever they can to justify their existence.

The other side of the arms race belongs to the scientists themselves. The perverse incentive for the really groundbreaking academic work is to get it into print as quickly as possible, and that’s led to a (seemingly endless) series of retractions, corrections, and scandals. Cutting-edge stuff is bound to be less reproducible than journeyman work, at least at first, but a lot of seriously sloppy work has been rushed out (along with some outright fraud, sad to say). Even the sound results, though, are too often salami-fied into smaller publishable units, which does no readers any favors. (The authors get a longer publication list, of course, and the publishers, well, they’re fine with it, too.)

A problem of a different sort is the incentive to turn out grad students, rain or shine, jobs or no jobs. There are, to be sure, fields where this has reached horrific proportions (the sorts of degrees where the main possibility of a job is teaching other degree candidates the same material). But the employment situation in chemistry and biology is rough enough that many people are starting to wonder about the coupling, if there is one, between scientific academia and the wider world.

It’s not like industry doesn’t have its own disconnects, of course. A big one is to be found among the smaller companies that are trying to come up with enough good clinical data to initiate some sort of partnership deal (or to be bought outright). The problem is the temptation to run the smallest, most perfectly groomed Phase II trial possible, in the hopes of generating attention-getting data with the money available. When you think about it, the whole point of running clinical trials under controlled, double-blinded conditions is to keep exactly this sort of flattering, self-selected result from slipping through. Running a “greenhoused” trial is an attempt to have it both ways. And while it’s admittedly the job of any potential partner to do due diligence on such results, it would be better if there weren’t such an incentive to run them this way. Wishful thinking we will always have with us, but trolling for investors is a bit further down the scale.
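
If you want to see why this matters, here’s a toy simulation (my numbers are invented, and this comes from no real program, but the statistics are the statistics): give a drug a modest true effect, run a stack of small trials, and show around only the best-looking one.

```python
# Toy sketch: why the best-looking small trial overstates a drug's effect.
# The effect size, arm sizes, and trial count are all invented for illustration.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2      # modest true benefit, in standard-deviation units
PATIENTS_PER_ARM = 30  # a deliberately small trial
N_TRIALS = 20          # candidate trials (or subgroup cuts) to choose among

def run_small_trial():
    """Observed effect size from one small, two-arm trial."""
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(PATIENTS_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(PATIENTS_PER_ARM)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [run_small_trial() for _ in range(N_TRIALS)]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"average observed effect: {statistics.mean(effects):.2f}")
print(f"best-looking trial:      {max(effects):.2f}")  # the one that gets pitched
```

In runs like this, the best of twenty noisy readouts typically lands at two or three times the true effect. Blinding and pre-specification exist precisely to take that choice away.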

Companies will even do this to themselves, which has always seemed to me an extremely silly way to spend time and money. I’m talking about the drives to meet various targets (for number of projects started, for number of programs recommended to the clinic, etc.). Some places take these more seriously than others. But if you’re not willing to bend on them when circumstances force you to, then you’re going to get what you deserve: the satisfaction of having cheated yourself in a game of solitaire. Pushing substandard programs over some artificial finish line is a poor way to occupy one’s days, even if one’s bonus is tied to it.

Back in the chemistry labs, here’s one that I’ve written about here before: if you’re so foolish as to spread the word that you’re counting compounds as a way to rank people, you’ll get to see the clearest example you’ll ever wish for of a perverse incentive in action. You will indeed get compounds, plenty of compounds. They probably won’t do you much good, but if you ask for them, they will come. I hope you like sulfonamides.
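
And the compound-counting case reduces to arithmetic so simple it barely needs the sketch below (the chemists and the per-compound values here are invented for illustration): rank people by the count, and the count is all you’ll optimize.

```python
# Toy sketch of metric-gaming: one hypothetical chemist cranks out easy
# analogs, another makes fewer but more informative compounds. All values
# below are invented for illustration.
chemists = {
    "cranks out easy analogs": {"compounds": 120, "value_each": 0.1},
    "designs hard compounds":  {"compounds": 15,  "value_each": 1.0},
}

for profile in chemists.values():
    # What the project actually got out of each person's output.
    profile["delivered"] = profile["compounds"] * profile["value_each"]

by_count = max(chemists, key=lambda name: chemists[name]["compounds"])
by_value = max(chemists, key=lambda name: chemists[name]["delivered"])

print(f"winner by compound count:  {by_count}")  # the perverse ranking
print(f"winner by value delivered: {by_value}")  # what you actually wanted
```

The moment the count becomes the score, the easy-analog strategy wins on paper while losing on substance.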

Since I’ve spent my career in early-stage R&D, I won’t even get into the wrong turns later on, in marketing and promotion. But they’re well known. You have the possible benefits of off-label promotion, weighed against the fines if it gets too blatant. And there are sales targets, just like there are development targets earlier on. They can bring on the famous attitude of “You make your numbers or I’ll get someone who can,” and we all know what that tends to lead to. The incentives here are monetary, not scientific, and those are both a different category and a more widely recognized one. The scientific mistakes, though, are perhaps less well known.

What, then, can be done about any of these? The short answer for many of them is also an unpopular one, and can be summarized as “Do it the hard way.” Try to find a way to evaluate young faculty (or their grant applications) without resorting to things like “impact factors” for their journal publications. Try to find a way to evaluate chemists in a discovery department without looking at exactly how many compounds they made. And try to find a way to keep the research moving along without resorting to numerical targets.

Easier said! If there’s a common thread here, though, it might be that we have to beware of any schemes to evaluate research that depend on just counting up some simple number. Be suspicious of shortcuts. Things don’t work that way, and you’re setting yourself up for unexpected consequences if you pretend that they do. “Anything that can be measured can be managed,” goes the old saying, but maybe that should be amended to “Anything that can be measured had better not be managed—at least, not by just measuring it and calling it a day.”


Derek B. Lowe
Contributing Editor

Derek B. Lowe has been employed since 1989 in pharmaceutical drug discovery in several therapeutic areas. His blog, In the Pipeline, is located at www.corante.com/pipeline and is an awfully good read. He can be reached at derekb.lowe@gmail.com.
