Not All Data Is Equal: How to Evaluate the Source of Your Clinical Evidence

(This is the fifth installment of a six-part series examining the role of clinical evidence in capital planning.)

It’s important to understand the objectivity, validity, comprehensiveness, and usefulness of any clinical evidence being relied upon during the capital evaluation process.

Objectivity: Papers based on research studies that have been cherry-picked to support a particular position can’t be trusted. The result could be catastrophic for both your bottom line and your patients. Always ask: Are the researchers sponsored by manufacturers, or have they worked as freelance consultants for equipment manufacturers in the past? Has every study been assessed and graded for its risk of bias?

Validity: When it comes to clinical evidence, the old adage applies: “Garbage in, garbage out.” Are the studies scientifically sound? Study designs vary in their ability to reliably support definitive conclusions. Case studies, for example, are vulnerable to selection bias and provide far weaker evidence than a randomized controlled clinical trial.

Comprehensiveness: The recency and frequency of the clinical evidence are also important. How many in-depth articles, clinical studies, or healthcare technology assessments does the evidence source publish on an annual basis? And how recent are the papers being cited? Are these findings replicated by other researchers? Are there long-term outcome data available? For example, if “pain control” is a critical outcome, have there been placebo-controlled and blinded studies conducted? Has there been a methodical, dedicated approach on the part of your evidence supplier to analyze the clinical effectiveness of the technology under consideration? Are conclusions and recommendations based on a comprehensive assessment of all the best available, peer-reviewed clinical evidence?

Usefulness: Evidence without interpretation is of little use to capital planning teams. After the technology has been reviewed by your research provider, has it been scored for use within a relevant patient demographic? Does the assessment identify comparable technologies to consider? Were any patient selection criteria identified, and are there particular patient demographics for which the technology or procedure is contraindicated? Does effectiveness vary according to treatment parameters? A strategic synthesis of the data will evaluate the quality of all evidence against your key questions and resulting outcomes, providing different scores for different applications. This gives your capital planning team the critical decision-support information needed to make an informed choice.

A Note on Comparative Effectiveness: Comparative effectiveness research includes primary clinical studies specifically designed to generate evidence about comparative outcomes. Comparative effectiveness reviews (CERs) analyze and synthesize the results of these studies with those from other relevant studies that lacked a comparative design. To be useful in your strategic capital planning process, the research should focus on real-world effectiveness, rather than efficacy and safety in a controlled environment.

To access the white paper from which this post was extracted, please visit our website.

Bill Donato, General Manager, MD Buyline

Healthcare veteran Bill Donato joined TractManager early in 2017.