
Apps & Software News

mHealth Lends an Ear to Speech Recognition Technology

Speech recognition tools embedded in mHealth technology are helping with clinical documentation, real-time translation and medication reconciliation.

By Eric Wicklund

The longest journey in healthcare may be the distance from the doctor’s mouth to the medical record. And that’s where speech recognition technology may do the most good.

It’s an mHealth tool just now coming into its own, says Dr. Michael Docktor, director of clinical mobile solutions and clinical director of innovation at Boston Children’s Hospital and an instructor in pediatrics at Harvard Medical School. With all those titles after his name, Docktor is a busy man; the less time he spends in front of a computer screen and the more time he spends in front of patients, the better for both him and his patients.

“Doctors spend far too much time entering data,” he says. “And if you’re manually tied to a keyboard, you aren’t doing what you should be doing, which is sitting in front of and talking to people.”

Today’s mHealth platforms enable Docktor to speak into his smartphone or tablet and have his words documented as part of the medical record. Back-end clinical documentation technology from Nuance handles the transcription. He’s also using a HIPAA-compliant, third-party text messaging app to connect quickly with colleagues.

Jonathon Dreyer, director of cloud and mobile solutions marketing for Nuance, says the technology has matured enough that clinicians can quickly and easily get their words into the EMR, with little to no disruption in workflow. At health systems like Florida Hospital and CHRISTUS Health, the platform is expected to save millions of dollars by reducing the time it takes to go from documentation to treatment, thus improving outcomes and boosting reimbursements.

Florida Hospital, which implemented Nuance’s CDI solutions with its Cerner EMR in 2015, anticipates saving more than $72 million over a two-year span. Among the savings: fewer gaps in treatment caused by inefficient or inaccurate documentation, resulting in fewer adverse clinical outcomes and a lower mortality rate.

“We focused on improving the integrity of our documentation and making sure everything was as timely, thorough, and accurate as possible,” said Jeff Hurst, the hospital’s senior vice president and senior finance officer, in a release supplied by Nuance. “Our physicians deliver great care and we focused conversations with them on ensuring that they get measured and rewarded for it, a process that plays an important role in a health system’s publicly-reported outcomes.”

All because a doctor can speak into a smartphone.

“There’s a consistency now when you go (from the doctor’s mouth) into the EMR and then out beyond the EMR to other platforms,” says Dreyer. “That’s the workflow doctors are looking for.”

Docktor agrees. Before this, he might type his notes at a workstation after a patient encounter or dictate them into his mobile device, then wait days for the information to be transcribed into the EMR, sometimes not seeing it for a week or two. If there were any errors, he’d have to correct them manually, wasting more valuable time.

Speech recognition technology is also showing up in other areas of the healthcare setting. At the University of California-Davis, researchers are testing the technology as a translation tool in behavioral health settings, to be used by clinicians when they’re talking to non-English-speaking patients.

“A lot of … information will be lost if (doctors) don’t understand the same language as (patients) do,” Steven R. Chan, MD, MBA, senior resident physician at UC Davis, said during a presentation at the recent American Telemedicine Association conference in Minneapolis.

Chan says machine-learning tools embedded in language processing technology could create an on-site translation tool that’s better than translation apps, online services, live interpreters or family members pressed into duty.

“Our ultimate goal is to analyze language as data,” he said.

Another test of the technology is taking place at George Washington University in Washington, D.C. Researchers there want to know if the platform can be used by emergency department staff to sort through a patient’s medication history.

“In the ED we don’t do a very good job of medication reconciliation at all,” said Neil Sikka, MD, an emergency physician at GWU, recalling times when disoriented patients would show up with shopping bags filled with medication bottles and no idea of what they were taking or when.

Using NLP technology, he said, a clinician could conceivably recite the name of a medication into his or her mobile device and have that information synced with the patient’s medical record and medication history.

Dig Deeper:

Does Speech Recognition Aid Clinical Documentation Improvement?

Does Mobile Health Affect Clinical Documentation Improvement?

 
