As a senior business consultant mandated by my client and project sponsor,1 I had to evaluate the market landscape of highly specific toxicology assessment offerings, ranging from in vitro and in vivo to in silico studies on preclinical samples. To do so, I collaborated with a company2 that had developed several proprietary methodologies, tools and models that have become a standard for the evaluation and selection of suppliers in the information technology (IT) market. We co-developed an evaluation model that helped ascertain how well preclinical contract research organization (CRO) service providers performed. This article describes how we adapted their fact-based methodology and graphical treatment, and how we applied it to selectively evaluate specific parts of the CRO market.
The first challenge was to filter among the many CRO players, most of them very well established in general toxicology for preclinical studies, but very few specialized in the emerging offerings that are becoming increasingly attractive to biopharmaceutical companies. At first, I screened marketed business insight reports to access consolidated data about the preclinical CRO market and its key players. However, the information contained within these reports appeared to be mostly obsolete and incomplete, as the CRO market is evolving very quickly, together with clients’ needs.
So I had to scan the market “manually,” starting from service providers known by my client, and extending to companies found at partnering events, on the internet and on LinkedIn. The difficulty was to find those that were proposing emerging capabilities for preclinical samples, such as:
Omics. Looking beyond genomics, transcriptomics and proteomics, for lipidomics and metabolomics capabilities;
Toxicology. Looking beyond general toxicology on rodents, for in vitro models and non-rodent in vivo models used in inhalation/respiratory toxicology;
Bioinformatics. Looking beyond clinical biostatistics for computational biology/causal biological network modeling to simulate disease pathways; and
Crowdsourcing platforms. To innovate and complement internal know-how, or even platforms that offer verification/in-depth scientific review of study findings.
The second challenge was to define and apply a structured methodology to better evaluate and segment the CROs, thus allowing us to compare and position offerings along a number of dimensions and criteria. The main sources of information were their websites and some high-level corporate brochures, where details were hard to find, or described differently from one CRO to another. Do they have proprietary technologies? What are their accreditations and certifications? Which companies are GLP compliant? Are they located in the EU, U.S., or Asia? Do they have the expertise to provide a toxicology package for an IND filing? Do they have bio-banking capacities?
Company information gathering
I finally identified 44 CROs focusing their service portfolios on the preclinical areas of interest. The compilation of company information, grouped in different categories (Table 1), was a key step of the process, needed to populate the future evaluation model for a rational evaluation and scoring of vendors. I therefore leveraged multiple channels to extract publicly available facts about these 44 companies and collect them in one integrated table. The channels I used were mostly company websites, company profiles on LinkedIn, downloadable commercial brochures, press releases announcing strategic partnering deals, financial reports and IP databases. To ensure the robustness of the facts, I double-checked them across sources whenever possible. However, I did not directly interview the companies or their customers.
First level of CRO selection
Together with my project partner,2 we co-defined two dimensions of CRO capabilities as follows:
- Commercial capabilities. The solidity of the company and its ability to define and articulate its current and future strategy; and
- Technical capabilities. The services delivered and their market adoption.
Only the 26 companies scoring higher than 5 on both dimensions were retained for further scoring and mapping, as they would be the best positioned to provide a quality service suited to the needs of my client.
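As a minimal sketch of this first-level filter (the company names and dimension scores below are hypothetical illustrations, not actual evaluation results):

```python
# Hypothetical (commercial, technical) dimension scores per company
companies = {"CRO A": (7.2, 6.1), "CRO B": (4.8, 8.0), "CRO C": (6.0, 5.5)}

# Retain only companies scoring higher than 5 on BOTH dimensions
retained = [name for name, (commercial, technical) in companies.items()
            if commercial > 5 and technical > 5]
print(retained)  # ['CRO A', 'CRO C'] — CRO B fails the commercial threshold
```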
Second level of CRO selection
We then defined a set of criteria (Table 2) that would ensure a deeper, reliable and fact-based evaluation of both technical and commercial capabilities of the vendors.
I segmented the 26 selected suppliers into four different clusters of pre-clinical services that would lead to four graphical representations, so that companies within one cluster were better comparable: 1. Biobanking/Omics/Bioinformatics; 2. Crowdsourcing platforms/Science Verification; 3. General Toxicology; and 4. Inhalation/Respiratory Toxicology.
Leveraging the suppliers’ company information gathered according to Table 1, I assigned a score on a scale of 1 to 10 for each criterion described in Table 2 to each of the 26 suppliers. As an example, for the Offering criterion (Commercial dimension), the scores referred to the following meanings on a scale of 1 to 10:
10: Structured offering with modular approach, limited needs of customizations (available upon request);
5: Core offering with some ad-hoc customization; and
1: One size fits all with no customization.
As another example, the References criterion scores (Technical dimension) referred to:
10: Multiple successful references of worldwide major deals from top clients;
5: At least one successful reference from top client (contactable/verifiable/publicly known/repeated); and
1: New service, no recognized reference.
In addition, in order to obtain an aggregate score for each dimension, I also defined and assigned a weight to each criterion (Table 2). The aggregate score corresponded to the weighted average of the criterion scores according to their relative weights.
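The aggregate scoring can be sketched as a simple weighted average. The criterion names, weights and scores below are hypothetical examples, not the actual values from Table 2:

```python
def aggregate_score(scores, weights):
    """Weighted average of criterion scores; weights sum to 100."""
    assert sum(weights.values()) == 100
    return sum(scores[c] * weights[c] for c in weights) / 100

# Hypothetical Commercial criteria, weights, and one vendor's scores
commercial_weights = {"Offering": 40, "Pricing Model": 30, "Sales Effectiveness": 30}
vendor_scores = {"Offering": 8, "Pricing Model": 5, "Sales Effectiveness": 6}
print(aggregate_score(vendor_scores, commercial_weights))  # 6.5
```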
The two-dimensional maps were built for each predefined cluster of CROs (Figure 2). The graphical treatment allowed us to visualize the most interesting vendors to partner with: the ones that appeared in the top right corner of the graph, to be considered as having a leadership position within the corresponding business area. The power of the model lay in distilling large volumes of company information into clear, precise, actionable insight and advice in order to formulate plans or make difficult business decisions with higher levels of confidence, even in a quickly evolving business landscape.
Robustness of the model
Scoring. I was the only person performing the scoring. Ideally, the evaluation should be replicated based on scores performed by a significant number of people.
Weighting. We performed an additional analysis to assess the impact of the criteria weighting. A random number between -10% and 10% was added to each weight while constraining the sum of the Technical (respectively, Commercial) weights to be 100%. The scores for the randomly shifted weights were then recomputed. By repeating this procedure 1,000 times, the probability density was estimated and represented on the graph. Most of the time, two densities did not overlap with each other, so it could be concluded that the difference in positioning between two CROs was likely independent of the choice of weights, up to +/- 10% (Figure 3).
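This weight-robustness check can be sketched as follows; the criterion scores and weights are assumed placeholders, not the actual Table 2 values:

```python
import random

def perturbed_scores(scores, weights, n_trials=1000, shift=10.0):
    """Recompute the weighted-average aggregate score under randomly
    shifted weights: each weight is moved by up to +/- `shift` points,
    then all weights are renormalized so they again sum to 100."""
    results = []
    for _ in range(n_trials):
        w = [max(wi + random.uniform(-shift, shift), 0.0) for wi in weights]
        total = sum(w)  # positive as long as weights exceed the shift
        w = [wi * 100.0 / total for wi in w]
        results.append(sum(s * wi for s, wi in zip(scores, w)) / 100.0)
    return results

# Hypothetical Technical criterion scores and weights for one vendor
scores = [8, 6, 7, 5]
weights = [30, 30, 20, 20]
samples = perturbed_scores(scores, weights)
# The spread of `samples` approximates the density shown in Figure 3
```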
Availability of company information. As some company information was not readily available, some criteria were tedious to score. In particular, Pricing Model and Sales Effectiveness were the most uncertain ones. In order to quantify the effect of the scoring for those two commercial criteria, random scores were generated using the three following rules:
1. Scores were sampled proportionally to the score frequencies for all other criteria across all CROs (the probability of each score was 1: 0, 2: 0.04, 3: 0.1, 4: 0.15, 5: 0.18, 6: 0.13, 7: 0.14, 8: 0.16, 9: 0.07, 10: 0.03);
2. The generated scores for Pricing Model differed from the Offering score by no more than four points; and
3. The generated score for Sales Effectiveness differed from the average of Marketing Effectiveness and Customer Focus by no more than three points.
The Commercial scores were then recomputed over 5,000 repetitions of this random procedure, and the range of Commercial scores was represented as a segment for each company, indicating the effect of the uncertain scoring of those two criteria (Figure 4).
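The three sampling rules above can be sketched with simple rejection sampling; the per-vendor Offering, Marketing Effectiveness and Customer Focus scores used in the example are hypothetical:

```python
import random

# Rule 1: empirical score frequencies across all other criteria
FREQ = {1: 0.0, 2: 0.04, 3: 0.10, 4: 0.15, 5: 0.18,
        6: 0.13, 7: 0.14, 8: 0.16, 9: 0.07, 10: 0.03}
VALUES, PROBS = list(FREQ), list(FREQ.values())

def sample_uncertain_scores(offering, marketing, customer_focus):
    """Draw random Pricing Model and Sales Effectiveness scores by
    rejection sampling under the two closeness rules."""
    while True:  # Rule 2: Pricing Model within 4 points of Offering
        pricing = random.choices(VALUES, weights=PROBS)[0]
        if abs(pricing - offering) <= 4:
            break
    target = (marketing + customer_focus) / 2
    while True:  # Rule 3: Sales Effectiveness within 3 points of the average
        sales = random.choices(VALUES, weights=PROBS)[0]
        if abs(sales - target) <= 3:
            break
    return pricing, sales

# Hypothetical vendor: Offering = 7, Marketing = 6, Customer Focus = 8;
# 5,000 draws feed the per-company score range shown in Figure 4
draws = [sample_uncertain_scores(7, 6, 8) for _ in range(5000)]
```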
- Philip Morris Products S.A. (Project Sponsor);
- Gartner (Project Partner).
Anne Gimalac is a senior business consultant at Aston Life Sciences in Switzerland. She specializes in R&D innovation and diversification strategies, mainly for biopharma, nutrition science and tobacco industries.