Studies published recently in JAMA Internal Medicine and JAMA Dermatology question the quality and consistency of care provided by virtual health (telemedicine) providers. Both studies include several limitations and design flaws, which mean the reported outcomes are neither robust nor reliable and should not be extrapolated to all virtual health providers.
Unfortunately, these are not isolated examples.
Very few quality studies about virtual health outcomes have been published in clinical literature. That needs to change. And that change can and should begin with the virtual health industry itself.
Why did these studies fall short? Both JAMA analyses included study limitations and design flaws that affect the potential veracity, replicability and generalizability of the reported outcomes:
- Both studies relied on actors posing as patients, rather than tracking real outcomes in real patients. It would be difficult, if not impossible, to point to any studies focused on quality and consistency in a brick-and-mortar setting that evaluated the care provided to actors posing as patients, rather than actual patients. Real-world data mean real-world applicability.
- Neither study compared virtual health results to the quality of in-office care. Both studies implied that in-office care is less variable and of higher quality, yet neither used established benchmarks or clinical studies to support that assertion.
- Both studies focused on telemedicine companies that primarily use outsourced clinicians rather than a healthcare provider’s own clinicians. Not all virtual health services are created equal. While the companies included in these studies rely mostly on models in which the clinicians delivering care could be located anywhere in the country (or in the world, in the case of the dermatology study), more innovative models exist. Some virtual care platforms support continuity of care by providers from the patient’s community health system, with access to electronic medical records in the health system and the ability to escalate care as needed, order lab work, self-schedule a live visit and more.
- Most importantly, the telemedicine companies included in the studies rely primarily on outsourced, episodic telemedicine services, not a fully integrated, evidence-based virtual care platform. Both studies failed to include virtual health companies with technology that incorporates the systematic use of best practice-driven, adaptive patient interviews and structured clinician pathways to ensure the consistent delivery of guideline-adherent care. This type of virtual care platform is often fueled by asynchronous (or “store and forward”) technology rather than the synchronous (live) interaction used by direct-to-video virtual visits.
As with in-person care, diagnoses and treatment plans are only as good — and consistent and guideline-adherent — as the processes that are in place for clinicians to follow.
These observations are not intended as a critique of the JAMA network, nor of any other peer-reviewed journals that publish telemedicine studies. Yes, telemedicine studies in general appear to be held to lower standards than research about in-person clinical outcomes. But very little reliable, vetted data about virtual health outcomes exists, so editors and reviewers are left to choose between using no data at all and publishing the results of studies that lack the scientific rigor typically associated with well-respected peer-reviewed journals.
Understandably, with telemedicine poised to play a critical role in today’s emerging value-based care environment, healthcare providers, payers and stakeholders are eager for outcomes data about quality and consistency of care. However, weak data may be worse than no data at all, especially if decisions about care models are made based on that information.
A powerful solution is available to address this challenge, and it starts with the telemedicine industry itself. The industry needs to drive higher-quality studies in this area not only by promoting and expecting more reliable and robust studies, but by participating in that research.
This can and should be done by partnering with major health systems, including those in academic settings, to report and analyze virtual health data as it relates to patient outcomes and the quality and consistency of care delivery. To do that, we need to ensure that our systems are built to make outcomes reporting easier and faster. And we need to prioritize sharing the data in outlets such as clinical meetings and journal articles, knowing that the information will be vetted and scrutinized prior to acceptance.
In many ways, with the right analytics, this type of data can be more effectively tracked in a virtual care setting than in a face-to-face environment. For example, antibiotic stewardship can be monitored within a virtual health platform by evaluating whether care delivery was guideline-adherent and by measuring the rate at which individual providers prescribed antibiotics for specific conditions. It would be extremely challenging to track the same outcomes and prescribing data for in-person patient encounters across a health system, or even within a single medical practice.
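As a minimal sketch of the kind of stewardship analytics described above, the snippet below computes each provider's antibiotic prescribing rate for a given diagnosis from structured encounter records. The record fields (`provider`, `diagnosis`, `antibiotic_prescribed`) and the flagging threshold are illustrative assumptions, not any platform's actual schema or policy.

```python
from collections import defaultdict

def prescribing_rates(encounters, condition):
    """Return each provider's antibiotic prescribing rate for one condition."""
    totals = defaultdict(int)      # encounters per provider for this condition
    prescribed = defaultdict(int)  # encounters where an antibiotic was prescribed
    for e in encounters:
        if e["diagnosis"] != condition:
            continue
        totals[e["provider"]] += 1
        if e["antibiotic_prescribed"]:
            prescribed[e["provider"]] += 1
    return {p: prescribed[p] / totals[p] for p in totals}

# Hypothetical encounter records, as a virtual platform might log them.
encounters = [
    {"provider": "A", "diagnosis": "acute bronchitis", "antibiotic_prescribed": True},
    {"provider": "A", "diagnosis": "acute bronchitis", "antibiotic_prescribed": False},
    {"provider": "B", "diagnosis": "acute bronchitis", "antibiotic_prescribed": True},
    {"provider": "B", "diagnosis": "sinusitis", "antibiotic_prescribed": True},
]

# Flag providers whose rate for acute bronchitis (where guidelines generally
# discourage antibiotics) exceeds an assumed stewardship threshold of 50%.
rates = prescribing_rates(encounters, "acute bronchitis")
outliers = [p for p, r in rates.items() if r > 0.5]
```

Because every virtual encounter is captured in structured form, this kind of per-provider, per-condition rollup is a simple aggregation rather than a chart-review exercise.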
There are virtual health companies that fail to provide the level of care they advertise. These companies need to be weeded out so that patients can benefit from convenient, guideline-adherent care delivered virtually, with the potential to save hundreds of millions of healthcare dollars in our emerging value-based care environment.
It’s time for the industry to join together to make everyone accountable, to continue to monitor and report real-world data, and to separate the virtual care wheat from the chaff.
Jon Pearce is the CEO and co-founder of Zipnosis.