WHO Unveils Standards for Global mHealth Projects

The 16-part document will help researchers and healthcare providers develop and implement better, more standardized studies.

By Eric Wicklund

mHealth experts often wonder whether a successful smartphone-based public health program in Africa might translate to Asia or South America, or if a population health project in India can affect the same outcomes in the Caribbean.

Those questions could be answered more easily if future mHealth projects adhered to a 16-part checklist recently introduced by the World Health Organization.

The “mobile health evidence reporting and assessment” (mERA) checklist, unveiled in the latest edition of the British Medical Journal, seeks to standardize mHealth projects launched around the world. It was developed by the WHO’s mHealth Technical Evidence Review Group and the Johns Hopkins Global mHealth Initiative, among others.

“The mERA checklist was borne from the recognition of a lack of adequate, systematic and useful reporting of mHealth interventions and associated research studies,” WHO officials said in the BMJ article. “The tool was developed to promote clarity and completeness in reporting of research involving the use of mobile tools in healthcare, irrespective of the format or channel of such reporting.”

“Currently, many mHealth studies are descriptive, with a growing number assuming more rigorous experimental designs,” the article continues. “The mERA checklist aims to be agnostic to study design, and applied in conjunction with the existing tools that support transparent reporting of the study designs used. Adoption of the mERA checklist by journal editors and authors in a standardized manner is anticipated to improve the transparency and rigor in reporting, while highlighting issues of bias and generalizability, and ultimately temper criticisms of overenthusiastic reporting in mHealth.”

The checklist consists of:

  1. Infrastructure - the infrastructure required to enable the operation of the mHealth program.
  2. Technology platform - the software and hardware used in the program’s implementation.
  3. Interoperability - how, if at all, the mHealth strategy connects to and interacts with national or regional health information systems (HIS) and other programs.
  4. Intervention delivery - the mode, frequency, and intensity of the mHealth intervention.
  5. Intervention content - how the content was developed, identified and customized.
  6. Usability testing - how program participants were engaged in the development of the intervention.
  7. User feedback - user feedback about the intervention or user satisfaction with the intervention.
  8. Access of individual participants - barriers or facilitators to the adoption of the intervention among study participants.
  9. Cost assessment - basic costs of the mHealth intervention.
  10. Adoption inputs/program entry - how people are informed about the program or steps taken to support adoption.
  11. Limitations for delivery at scale - expected challenges for scaling up the intervention.
  12. Contextual adaptability - the intervention's appropriateness to its context and any possible adaptations.
  13. Replicability - technical and content detail to support replicability.
  14. Data security - security and confidentiality protocols.
  15. Compliance - with national guidelines or regulatory statutes.
  16. Fidelity - the extent to which the program's adherence to its original deployment plan has been assessed.

The WHO first tackled the project in 2012, convening a working group to examine existing projects – some 500 mHealth studies were launched in 2011 alone, according to the World Bank – and find common ground for standards. By 2014 a list of guidelines had been drafted and put to the test in three pilot projects.

Aside from creating a framework for future programs and studies, the guidelines might also help researchers avoid the pitfalls that have doomed a number of past projects – and which led to one of the most entertaining education sessions at the 2014 mHealth Summit.

At that event’s Global mHealth Forum, researchers were invited on stage to share their failures, and told stories of projects that failed to account for time zone differences or subtleties in translations, or that produced unexpected and unwanted results.

