March 7, 2014
By: Derek Lowe
Contributing Editor
Let’s take inventory. The FDA approved 37 drugs in 2012, which made everyone happy, since it was the highest total in years. But then in 2013, the agency only approved 27, which occasioned quite a few columns and blog posts wondering if 2012 was an anomaly. Did that year’s total represent some compounds backed up in the queue that all came out in a bunch? Was 2013 an example of good old regression to the mean? Or, taking the optimistic view, was it the outlier in what was still going to be a new and better world?

I’m tempted to declare all of that speculation to be wasted electrons, because here’s the problem: our sample size is just too small. You can definitely try to spot trends in a few decades’ worth of approval data — although good luck with that, because the criteria for approval have certainly changed over the years, so you’re sort of comparing apples to pomegranates. At least there are enough numbers to try to draw some conclusions, even if they don’t mean as much as we’d like them to. But a single year’s total? No chance. If we’re lucky, in another 10 or 20 years we’ll be able to put 2012 and 2013 into some sort of context, but we just don’t have the ability yet. We’ll know more about what these numbers mean about the time they cease to be able to do us any good. Imagine how much worse it is to try to draw conclusions from approval numbers inside a single company, where the data set is that much smaller.

This problem can be generalized, unfortunately. For an industry that lives and dies on reproducible, meaningful data, we end up making a lot of (non-clinical) decisions based on some pretty paltry numbers. Look at the situation inside any individual company, when someone comes along and reorganizes the place. The re-org is supposed to improve productivity — well, you’d hope that’s what it’s supposed to do, otherwise you get uncomfortable visions of the pilots in that old “Far Side” strip, announcing turbulence ahead while they jerk the airplane around and high-five each other.

How do you measure whether the new company structure has helped? It’s for sure that the next year’s worth of projects were already in the hopper, in some form, before the re-org was announced. The year after that will show effects of the old regime as well, and those will probably extend even a bit further. How long do you have to go before everything you see is a result of the new system? If you have a figure in mind for that, then how does that number compare with the number of years the new system is actually going to be in place? Someone else will, at some point, have a bright idea of how to rearrange things, or someone higher up will evaluate the new system anyway, ready or not. My guess is that a significant number of these re-org ideas never really get completely underway, and that’s only counting the ones where a real change took place (as opposed to the ones where a bunch of posters go up, training meetings are attended, but everything sort of ends up back the way it started).

There’s an even larger problem to consider. Every organization is unique, given the history behind it, its institutional memory, and the people that are staffing it. So in that sense, every change that’s made is going to be an N of 1, something that might have come out differently in a different time or place. Now, in the labs we’d avoid collecting data under these conditions, wouldn’t we? The cell assays had better be within range of the last runs, or something is assumed to have gone wrong with the cells. The in vivo studies should repeat, or there’s big trouble. If you do two Phase II trials on the same population, they should both work, and Phase III had better recapitulate the efficacy seen in Phase II.

But we can’t mess with our own organizations this way — there’s not enough time and money in the world to do it. Here’s a thought experiment: imagine if a drug company announced that they were going to split their R&D organization into two parts, run on different lines with a different organizational structure, but with similar numbers of chemists, biologists, and other staff. And what’s more, both of these new divisions would work on the same targets and the same drugs, just to find out if one setup worked better than another. It’s a weird dream, but that’s what you’d have to do to really figure any of this out. Don’t look for it any time soon.

Maybe, though, this experiment actually has been run a few times under rather less controlled conditions. After all, companies large and small do end up working on the same targets, more or less simultaneously. And each of them has their own style, their own criteria, and their own culture. In medicinal chemistry, we often try to compare “matched molecular pairs” to see what the effects of single changes are. Has anyone tried to go back over the history of drug development to compare “matched targets”? The problem is that you’d have to know what the real workings of each company might have been at the time, which might be impossible. Companies themselves don’t always understand how they actually work, as opposed to what it says on the org chart. If we found that Company X actually outperformed Company Y on a given target, we’d still be faced with explaining why that happened. And we’d still be faced with the same small data set problem that I mentioned before.

So this leaves us in a tough position when it comes to judging the big CEO-level decisions. There aren’t enough data points; there probably never will be. There are lower-level things that can actually be measured, but they’re subject to well-known observer effects. For example, let it be known in the med-chem labs that you’re counting compounds, and lo, compounds will appear. You may not find them appealing, but by gosh they’ll be there for you. The same thing will happen if you get hard-core about, say, the number of clinical candidates nominated. Those aren’t necessarily the ones you’re actually spending money to develop, mind you, and the wider the split between those numbers, the more worried you should be. But if you say that you’re going to nominate X number of compounds per year, then yeah, you probably will, for all the good that will do you.

But the number of drugs you get on the market, that’s a figure, small though it may be, that cannot be manure-ified. So let me advance a hypothesis: there is an inverse relationship between how easy it is to get drug discovery metrics and how important they are. I wish that weren’t true, but I’m afraid that it is.
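To put a rough number on that opening point about 37 versus 27 approvals, here’s a minimal sketch. It assumes, purely for illustration, that annual approvals behave like Poisson counts drawn from a constant underlying rate of about 32 per year; nothing about the FDA pipeline actually guarantees that, and the rate is just the midpoint of the two years in question.

import numpy as np

rng = np.random.default_rng(0)
RATE = 32                                      # assumed constant "true" approvals per year
pairs = rng.poisson(RATE, size=(100_000, 2))   # simulated pairs of consecutive years
swings = np.abs(pairs[:, 0] - pairs[:, 1])     # year-to-year change in the count
print(f"Year-pairs differing by 10 or more approvals: {(swings >= 10).mean():.1%}")

Under that admittedly crude model, something like a quarter of consecutive year-pairs differ by ten or more approvals, so a 37-then-27 sequence is entirely consistent with nothing underneath having changed at all.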