
Dequantify Yourself

Are all those system metrics your friend or foe?

By Rick Piazza, Medidata

Published November 13, 2013
The relentless pressures to increase productivity and reduce costs, the widespread adoption of electronic systems for data gathering and management, and advances in technology have all contributed to the heightened demand for sophisticated analytics in the clinical development environment. Nearly all electronic clinical trial systems — e.g., electronic data capture (EDC) and clinical trial management systems (CTMS) — are deployed with standard reports, as well as the ability to add custom ones. These reports allow those involved in clinical research to review important metrics, such as enrollment status, EDC page status, data query status, payment information and various cycle times, to name a few.

As operational management tools, reports can provide an indication of the overall health of a project or components of that project. Such analyses may focus attention on potential problem areas — outliers or trends — at the study level, but these reports seldom provide enough information to drive decisions that may have material impact at an organizational level. In short, while useful and purposeful, reports are generally more informative than actionable.

This is not to say that the reports typically generated through, for example, EDC or CTMS systems do not fill a real need for those individuals managing and monitoring clinical trials. They absolutely add value and, in fact, are a “must have” for those who depend on the information provided. However, the context in which these metrics reside often prevents them from exposing their full value and making an impact at an organizational level.

We shall explore how the maximum value of operational metrics can be unlocked. The examples discussed below aim to demonstrate how analytics can be leveraged to support important organizational goals, including efficiency, cost reduction and overall process improvement.

Making Data Exponentially More Valuable
The encouraging reality is that the operational metrics described above, delivered in standard reports and data listings, become exponentially more valuable and actionable when served up in the context of other data. The “other” data may be a different variable or metric, or may be the same metric drawn from another source or sources. The relationship of the data may be correlative or may simply show the position of one data element relative to others.

Thinking of operational metrics in this way is not dissimilar to how clinical decisions are made and how clinical data is delivered to us every day.  For example, a single blood chemistry lab value would be of little value if it were not presented in the context of a range representative of a large population of individuals (i.e., the “reference range” for that blood chemistry parameter).  Knowing where that lab test result sits within the framework of “normal” may be a critical piece of data in the clinician’s decision-making process. 

In another example, a patient presenting with fatigue, a slow heartbeat and weak pulse will lead the physician in any number of directions in search of the cause. However, that same information in the context of, or correlated with, other data — an elevated potassium level in this case — is instantly meaningful and actionable to the physician. The same holds true for operational metrics. When enriched with supporting data, metrics become more valuable than when standing on their own.
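
To make the idea concrete, here is a minimal sketch of placing a single operational metric in the context of a peer distribution, much like a lab value against a reference range. All of the figures and the peer pool below are hypothetical, invented purely for illustration.

```python
from statistics import median

# Hypothetical visit-to-data-entry cycle times (days) from a pool of peer
# studies; in practice these would come from an EDC/CTMS reporting layer.
peer_cycle_times = [3.5, 4.0, 4.2, 5.1, 6.0, 6.8, 7.5, 8.0, 9.3, 11.5]

def percentile_rank(value, reference):
    """Share of reference observations at or below the given value."""
    return 100.0 * sum(1 for r in reference if r <= value) / len(reference)

my_study = 8.0  # days from subject visit to EDC entry for "my" study

print(f"Peer median: {median(peer_cycle_times):.1f} days")
print(f"My study sits at the {percentile_rank(my_study, peer_cycle_times):.0f}th percentile")
# A metric near the top of the peer distribution flags a problem that the
# isolated number (8.0 days) never would.
```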

Carefully Considering Context — Example 1

Within the gastrointestinal therapeutic area (TA) group at a global pharma company, the study team sees that approximately eight days elapse from the date of a subject visit to the input of data into the organization’s EDC system.  At a team meeting, Kelly, the study project manager, asks why entry can’t be done sooner after the subject visit. A clinical research associate (CRA) on the study responds, “Eight days isn’t bad; these sites are busy!” 

Later that week, Kelly runs into Aaron, another study project manager, whose study is within the same TA. In conversation, she mentions that she’s not seeing site data until eight days after the subject visit. Her colleague responds, “That’s great! In my study we usually don’t see data till 11-12 days after the subject visit. How do you get them to input data so quickly?” Kelly leaves the conversation feeling better about her own study’s performance, validating the CRA’s comments.

At the bi-weekly TA team meeting, Maggie, the TA director, kicks off the session with her routine review of each study’s status and performance metrics. “We’re doing pretty well on enrollment on the two studies we launched last month,” she begins. “I think the aggressive marketing and advertising campaign helped us.”

Maggie continues, “I noticed that in some of the ongoing trials, however, there seems to be a real lag in getting data in. We need to be proactive in retrieving data as soon after a subject visit as possible so that we can clean and query the data quickly and monitor safety closely.”

Maggie turns to Kelly and Aaron and says, “Your studies are the worst offenders at approximately eight and 12 days. What’s going on? All other studies in our TA are doing much better on this metric, and other TAs in our company don’t seem to be struggling quite as much as we are. Across the industry, the median is more like four days!”

Similar to a single isolated laboratory value, the isolated, single-study metric in the example above holds limited value until it is placed in the context of other data. When held up against Aaron’s study, Kelly was satisfied with her study’s performance. However, in comparison to other studies in the same TA within her organization, her study’s performance was not very good.

Kelly only realized her study’s performance was poor and required improvement when Maggie informed her of the industry benchmark for the metric. As a senior manager, Maggie was armed with enough information to know there was room for improvement and that she was in a position to effect change not only within her TA, but across the organization.

Factoring in Variables — Example 2

William, the senior director of ClinOps at a global biotech company, relies on performance metrics to focus his attention on areas across the clinical organization that need the most improvement. Given his role, William has insight into all clinical trials across the company’s six TAs. The company has been using various technology solutions to run and manage its clinical development processes, and all teams design their own EDC studies.

Today, William decides to review the time it takes to develop an electronic case report form (eCRF) for a study. He notices some variance in the amount of time it takes to build each study within the Endocrine TA. Still, overall, the performance within this TA is comparable to the industry benchmark.

As William works his way through each TA, his findings are not as comforting. Endocrine, he discovers, is the best performing TA team. The other TAs have an individual study or two with stellar results, but performance scores in the remaining studies are all below the industry average.

When looking at study development time across all of the TAs, William is disappointed to learn that his organization’s eCRF development time runs 30% above the industry median. He decides to dig deeper and shifts his attention to another metric, eCRF design complexity. William learns that eCRF designs across his organization are more complex than those of the company’s industry competitors. Could this be the reason for the extended design times?

At this point, William doesn’t know enough to pinpoint the reason for the long design times. Many different factors could play a role in influencing the eCRF design time metric: training and experience of the design staff, review process and cycle time, volume and complexity of edit checks, re-use of eCRF elements — such as edit checks and derivations — and more.

In this scenario, the metrics alone cannot reveal the root cause of the poor performance. However, the ability to conduct multivariate analyses across a wide range of factors could prove to be useful when it comes to remediation and prevention. William, therefore, decides to continue his investigation and later propose process improvements that he could measure and track over time.
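
As a rough sketch of what such a multivariate look might involve, the snippet below fits a simple least-squares model of eCRF development time against a few candidate factors. The studies, factor values and numbers are hypothetical; a real analysis would draw on far more data and more rigorous methods.

```python
import numpy as np

# Hypothetical study-level operational data (one row per study):
# columns = eCRF complexity score, number of edit checks, share of re-used elements
X = np.array([
    [3.1, 120, 0.60],
    [4.5, 210, 0.30],
    [2.8,  95, 0.75],
    [5.2, 260, 0.20],
    [3.9, 180, 0.50],
    [4.8, 230, 0.25],
])
# Observed eCRF development time (days) for each study
y = np.array([38, 62, 31, 74, 50, 68])

# Add an intercept column and fit ordinary least squares; the coefficients
# estimate how much each factor moves development time.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, c in zip(["intercept", "complexity", "edit checks", "re-use %"], coef):
    print(f"{name:>12}: {c:+.2f}")
```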

Whether large or small, most companies are generally focused on improving processes. Sometimes processes are clearly broken and easily fixed. Other times, they are hidden because a “reference range” is lacking. And at times, it isn’t a problem associated with a process that is hidden but rather an unrecognized success. Looking through multiple layers of aggregation (across a TA, an organization or the industry), one can uncover exceptional performance. Organizations can learn valuable lessons from these high performers to set the bar even higher.

The Stepping Stone to Predictive Analytics
Both examples demonstrate how the story told by a single metric may change in the company of wider views of that metric — that is, when additional context is layered on. In the business of clinical trials, we’re accustomed to looking for correlations in clinical data, seeking out cause-and-effect relationships and exploring links not visible to the naked eye but uncovered by statistical analysis.

This same level of examination is seldom applied to operational data, though it does exist (to a limited extent). The ability to correlate multiple variables not only gives insight into what has transpired retrospectively, but is the stepping stone to predictive analytics. 

A good example of practical operational data correlations that lend themselves to useful predictive analytics is enrollment management. Unquestionably, study enrollment is and always has been an extremely hot topic. Great effort and resources have been dedicated to optimizing the study enrollment process. Many factors are considered when planning — and then managing — enrollment and recruitment. A sampling of these includes TA, phase, indication, target number of subjects, number of sites, historical site performance, claimed patient population at sites, competition for subjects, geographical location of sites, enrollment period, study duration and protocol complexity.

Today, various combinations of these variables and others are used to improve chances of enrollment success, as well as for real-time guidance on decisions. For example, in the planning stages, historical data can be used to model outcomes based on adjustments in total enrollment period, the number of sites, geography or enrollment rate.
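
As a toy illustration of that planning-stage modeling, the sketch below projects time-to-target enrollment from a historical per-site rate and shows how adjusting the number of sites moves the projection. The rate, start-up lag and target are invented for illustration.

```python
# A toy enrollment-planning model, assuming a constant per-site enrollment
# rate estimated from historical data. All numbers are illustrative.
historical_rate = 0.8    # subjects per site per month (historical median)
target_subjects = 300
site_startup_lag = 2     # months before a typical site starts enrolling

def months_to_target(n_sites, rate=historical_rate, lag=site_startup_lag):
    """Months of enrollment needed for n_sites to reach the target."""
    return lag + target_subjects / (n_sites * rate)

for n_sites in (25, 40, 60):
    print(f"{n_sites} sites -> ~{months_to_target(n_sites):.1f} months "
          f"to {target_subjects} subjects")
```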

As more data streams in from clinical systems, we will have the opportunity to glean actionable information using predictive models across many operational metrics. These models can help us answer important questions such as the following (a brief sketch of one such check appears after the list):
  • Can we leverage a relationship between eCRF edit checks and data changes to determine the point of diminishing returns?
  • Is there a correlation between country and “non-enrolling sites” or between site grant and time-to-close queries?
  • Is the number of studies-per-coordinator a predictor of the volume of data entry errors? 
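
As a first-pass check of the kind implied by the last question, the snippet below computes a simple Pearson correlation between coordinator workload and data-entry error rates. The site-level data is hypothetical, and a single correlation is only a starting point, not proof of causation.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical site-level data: studies handled per coordinator vs.
# data-entry errors per 100 eCRF pages.
studies_per_coordinator = [1, 2, 2, 3, 4, 4, 5, 6]
errors_per_100_pages    = [0.8, 1.1, 0.9, 1.6, 2.0, 2.4, 2.9, 3.5]

r = correlation(studies_per_coordinator, errors_per_100_pages)
print(f"Pearson r = {r:.2f}")
# A strong positive r would suggest coordinator workload is a candidate
# predictor of data-entry errors, worth testing on real operational data.
```
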
The Importance of High-Quality Data
Predictive analytics will increasingly give clinical trial planners and managers the tools they need to develop and run trials more efficiently than ever before possible. At the same time, it is important to remember that the ability to take full advantage of metrics, as discussed above, is predicated on the availability of high-quality data.

Care should be taken to cultivate and rely upon only top-quality metrics. Making choices based on anything less could lead to incorrect assumptions and faulty decision-making. To extract the full value of operational metrics, the following criteria should be met:
  • Metrics should be objective, fully data-driven and capable of being standardized across studies, indications and TAs. In other words, data cannot be left to recollection, cannot be estimated and should not be open to multiple definitions.
  • Ideally, metrics should be generated as a by-product of a system’s normal workflow.
  • The dataset must be large enough to expose significant differences, similarities and trends across the data.
  • The dataset must be diverse and broad enough to be representative of our industry in terms of sponsor type (pharmaceutical, biotech or medical device company; small, medium or large organization), as well as study characteristics (TA, phase, etc.).
  • Metrics should be timely, although the importance of timeliness can vary by specific metric.
Whether we are interested in a metric versus a benchmark of that metric or a metric relative to a different metric, the premise is the same. A data point in the context of other data tells a very different story than that data point alone. Organizations should not let a wealth of operational information slip by them unnoticed. On the contrary, they should proactively seek ways to harness the power of predictive analytics. Competitive advantage will be enhanced for those companies that implement best practices to unleash the potential of predictive analytics and, in doing so, identify metrics that help run businesses better.


Rick Piazza, Pharm.D., is vice president, Process & Validation at Medidata Solutions. He can be reached at rpiazza@mdsol.com.

