
The Role of Evaluation in Global eHealth

By Hamish Fraser, MBChB, MRCP, MSc Moderator Emeritus | 20 Oct, 2011

Large numbers of eHealth projects have been funded by development agencies over the last decade (here the definition of eHealth includes mHealth and telehealth). We estimate that the total for the major agencies, including PEPFAR, USAID, the World Bank, the Global Fund, and other bilateral aid agencies, is between three and five billion dollars, though getting access to the information can be very difficult. The key questions of whether these eHealth systems function well and whether they have a beneficial impact on health are rarely answered.

In September, a group of 19 experts in eHealth, development, and evaluation met at the Rockefeller Foundation Bellagio Center in Italy to address the question of why so few evaluations are performed and how to increase the quality and quantity of evaluations. We are targeting all levels of the health system and all stages, from new systems under development or being piloted up to major national roll-outs and long-term use of systems.

The attached "Call to Action" is the first deliverable from the meeting and lays out the key issues identified, including a list of key action items. We feel that evaluation is critical in shaping the field of eHealth, improving the quality of systems, and answering the question of why agencies should fund eHealth rather than other urgent priorities. The meeting was co-led by WHO and included Joaquin Blaya, one of the GHDonline moderators; the full list of participants is in the document.

We look forward to feedback and ideas from GHDonline for strengthening and improving this initiative, and we will be reporting more activities on this list.

Hamish Fraser
Christopher Bailey
Garrett Mehl
Chaitali Sinha

Attached resource:
  • Call to Action on Global eHealth Evaluation (download, 83.8 KB)

    Source: Partners In Health - PIH

    Keywords: Conferences & Meetings, Lab Systems, Mobile Devices, OpenMRS, Pharmacy Information System

Replies

Richard Lester Replied at 1:06 AM, 21 Oct 2011

Although I would strongly agree that evidence is critical to guiding the way forward in eHealth, it is hard to take seriously yet another organization that claims to highlight this importance yet ignores the existing evidence. So far, two large randomized clinical trials have proven that strategic low-cost cell phone text messaging (mHealth) interventions improve HIV treatment adherence in Kenya. This attains the highest level of evidence (level I), and one of these trials even demonstrated an improvement in a critical HIV/AIDS control outcome: suppression of viral load. Yet organizations such as the Rockefeller Foundation and the mHealth Alliance conveniently keep lobbying for resources for themselves to fund 'promising' pilots rather than implementing proven interventions for real health impact. In fact, the new standard of care (best practice) for HIV treatment support, against which future comparative studies should be measured, should be weekly text message support, as anything less would be of questionable ethics.
See:
Lancet. 2010 Nov 27;376(9755):1838-45.
AIDS. 2011 Mar 27;25(6):825-34.
Neither of these trials was even referenced in your posted article. Was a basic literature review done?

A/Prof. Terry HANNAN Moderator Replied at 1:34 AM, 21 Oct 2011

Richard, your points are very valid and confirm the recent statement (possibly by Bill Tierney) that "we know e-health works but we need to learn how to make it work more widely". So from my perspective the Bellagio output is important, and your perspectives confirm the current difficulties in accessing the existing knowledge so that people can be made aware of its existence and how it may be used. Is there a role here for individuals and groups to consider this aspect of the Bellagio initiative?

john mbithi Replied at 1:48 AM, 21 Oct 2011

I strongly agree that evidence is vital for guiding the way forward in the fight against HIV/AIDS. I am currently developing a mobile solution in Kenya for reminding patients to adhere to their medication and instructing them in how to take it. I would really want the solution to give quality information to those who rely on it. Additionally, I would also want health officials to use the information to chart a way forward for enhancing patient adherence to ARVs.
I'm looking for organizations that would use this mobile solution as a pilot... if anyone has suggestions, kindly let me know.

John Mbithi

Richard Lester Replied at 1:52 AM, 21 Oct 2011

The policy makers and development funders could focus on scaling up proven interventions and leave new innovations and pilots to research funding streams. This would force new 'ideas' to go through more rigorous evaluation before large-scale investment, especially when such investments compete with other priorities. And the investments that are made would be more likely to have health impacts.

Rodrigo Cargua Rivadeneira Replied at 10:28 AM, 21 Oct 2011

Health IT

I find it very interesting that electronic health systems can be evaluated, but while evaluation continues, it is very important to keep educating health and informatics professionals so that they see how useful electronic health information is; there is still a great deal of unawareness in Latin America. This type of evaluation will help optimize both human and technological resources. Many systems have been developed in health, but none has been developed so as to integrate with the others; each has its own architecture, and essentially 100% of electronic health systems lack the standards needed to interoperate. These evaluations should therefore be able to recommend how useful it is to adopt interoperability standards.
I am glad to contribute in whatever way I can.

Steven Wanyee Macharia Replied at 7:10 AM, 22 Oct 2011

John, you may want to read the work that Rich and his group did before you go too far with your development work. References:
Lancet. 2010 Nov 27;376(9755):1838-45.
AIDS. 2011 Mar 27;25(6):825-34.

Hamish Fraser, MBChB, MRCP, MSc Moderator Emeritus Replied at 10:11 PM, 22 Oct 2011

Hi Richard
The goal of this initial call to action is to get people thinking about the lack of evaluation studies and why that is. There will be other documents reviewing important studies and approaches. I personally presented your paper at the meeting, along with the one from Pop-Eleches that you list in AIDS. The problem, as I am sure you would agree, is that we could find only two other medium-sized or large RCTs of eHealth interventions in resource-poor environments: a paper on improving health care workers' compliance with malaria treatment guidelines using text messages (Dejan Zurovac et al., DOI:10.1016/S0140-6736(11)60783-6) and a paper that Joaquin Blaya, myself, and colleagues in Peru published last year on the impact of a TB laboratory reporting system (Int J Tuberc Lung Dis. 2010 Aug;14(8):1009-15).

So virtually all the other eHealth initiatives, including EMRs, pharmacy systems, district- and national-level health information systems, and other mobile solutions, lack a solid evidence base at both the formative and optimization stage and the impact stage. The four studies listed above have come out in the year since our systematic review (Health Aff (Millwood) 2010 Feb;29(2):244-51), which is somewhat encouraging. Also, we are not saying that RCTs are the only way to measure the impact of eHealth interventions, though they clearly have a key role; qualitative studies are important, as are other controlled study designs, as we work to assemble evidence that is both rigorous and generalizable to very heterogeneous environments.

Not wishing to detract from your important work, but you would surely agree there is a great deal more to learn even about text messaging. Are your results equivalent to or better than the state of the art for organizations like Partners In Health - the use of community health workers for directly observed therapy? Are the two approaches complementary? When should projects use each approach? How do you deal with the situation in communities like Northern Ghana, with many local languages, where voice has to be used (as in a project by MOTECH)? Is that equivalent to texting, or better?

One of the most important issues discussed in Italy was why so few evaluations have been done despite the very large investment in eHealth by development agencies. In part, we believe that many people assume they need to start with the most rigorous designs, rather than starting with simpler designs and building up to them. We intend for this initiative to get people to incorporate more and better evaluation in their projects, small or large, and for funders to also require this, so that investments are better directed and systems perform better. We are hoping to initiate a broad discussion on GHDonline on the challenges of doing evaluations and good examples of how to deal with them, and welcome your contributions.
Regards

Hamish

Richard Lester Replied at 3:00 AM, 23 Oct 2011

Hi Hamish,
Thank you for your comments, and I appreciate that the report from Bellagio speaks to broader aspects of eHealth than health promotion interventions. I did not mean to detract from the important principles outlined within the report. The report speaks, correctly, to the need for an integrated evidence component in existing and future eHealth (and mHealth) projects. However, an additional issue remains: the importance (and ethics) of what to do with the results when research does demonstrate positive health benefits, especially ones of public health importance. This is partly an issue of implementation science, which could be discussed separately. I also look forward to other opinions on your report.
Best regards,
Richard

Joaquin Blaya, PhD Moderator Replied at 3:51 PM, 26 Oct 2011

One of the things we talked about in the group was what the barriers are to having more evaluations, or why more evaluations aren't happening right now. An initial list that we came up with was:
1. Not enough funding for evaluations
2. Not enough expertise locally to carry them out
3. Decision makers place a low priority on evaluations and more importance on implementations, so when they have a choice, they invest funds in systems and not evaluations

It'd be great to hear from people if there are other barriers that they know of, or what they think of these. The objective here might be to define the groups at which the outcomes of this meeting should be targeted, so that we can increase the quantity and quality of evaluations.

Joaquín
___________________________________________________________________
Development Manager, eHealth Systems
Research Fellow, Harvard Medical School
Moderator, GHDOnline.org

Alvin Marcelo, MD Replied at 4:22 PM, 26 Oct 2011

Joaquin,

One more reason for the lack of evaluation studies is that those who are ripe for evaluation refuse to undergo it.

eHealth projects (as with any project) tend to die unless there is strong government policy (ergo funding) behind them (the project receives funds even if it is not very well executed, and even if it cannot be connected to outcomes). These same not-very-well-executed-but-well-funded projects would probably refuse evaluation because they might lose funding if fundamental problems were discovered with their implementation.

Let's pretend an evaluation was made. When the result of the evaluation is to overhaul the whole design, that places the decision makers in a state of inertia/paralysis -- should they discard the old proprietary system (no data export possible) to get a new, improved one (with full data exportability) but start from nothing again? At that point, they probably won't risk the change...

I think this is why evaluation should be done early, to ensure that key design decisions are correctly made before further investments are poured into the next phase... one of those decisions should be full data exportability...

alvin

--
Alvin B. Marcelo, MD, FPCS www.alvinmarcelo.com
Voicemail: +1-301-534-0795 GPG 0x99CBC54C
Master of Science in Health Informatics: http://one.telehealth.ph:8081/NTHC/masters-of-science

Leo Anthony Celi Replied at 11:53 PM, 26 Oct 2011

Hi,
Another problem is that evaluation is not as simple as it sounds. Outcome studies have a long turn-around time. More importantly, the science of case-mix adjustment is far from perfect (and can be gamed). Process metrics are much easier to evaluate, and they are valuable as long as the link between process and outcome is robust. But the strength of that link between process and outcome likely varies across geographic locations and clinical contexts. Still, perfect should not be the enemy of the good. Demonstration of real, and not just perceived, value is necessary for an innovation to scale. There is a finite source of funding for healthcare, and in resource-poor settings one would be competing with projects that have a proven track record in the public health arena, e.g. immunizations, acute care. My two centavos.
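
To make the case-mix point concrete, here is a minimal, hypothetical Python sketch of indirect standardization, one common form of case-mix adjustment: a site's observed outcomes are compared with the outcomes expected given each patient's predicted risk. All records and risk values below are invented for illustration.

    # Case-mix adjustment by indirect standardization (illustrative only).
    # Each record: (site, predicted risk of a bad outcome, observed outcome).
    patients = [
        ("site_A", 0.50, 1), ("site_A", 0.60, 0), ("site_A", 0.40, 0),
        ("site_B", 0.05, 0), ("site_B", 0.10, 1), ("site_B", 0.05, 0),
    ]

    totals = {}
    for site, risk, outcome in patients:
        observed, expected = totals.get(site, (0.0, 0.0))
        totals[site] = (observed + outcome, expected + risk)

    for site, (observed, expected) in sorted(totals.items()):
        # O/E > 1 means worse than expected for this case mix; < 1, better.
        print(f"{site}: observed={observed:.0f} expected={expected:.2f} "
              f"O/E={observed / expected:.2f}")

Both sites here have the same crude event rate (1 in 3), yet site_A does better than expected for its sicker patients (O/E 0.67) while site_B does far worse (O/E 5.00). That gap is what adjustment is meant to surface, and the risk model behind the expected values is exactly the part that can be gamed.
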
Cheers,
Leo

Raymond Besiga Replied at 2:55 AM, 27 Oct 2011

Hello Community,

mHealth is changing the way health workers are trained, especially in my country, Uganda. My colleague Allyson Krupar and I have had our proposal, "Proposed SMS Follow Up Tool for Healthcare Education", accepted to the International Telecommunication Union's Young Innovators Competition. Watch the pitch, presented at the ITU Telecom World Conference 2011 in Geneva, Switzerland, about how SMS can transform the way health workers get training without leaving their health centres and patients behind. Our case study focuses on the new WHO PMTCT guidelines. The video can be found here: http://www.youtube.com/watch?v=6sXB6g7-NXs&sns=fb

Jessica Shull Replied at 3:27 AM, 27 Oct 2011

In response to Joaquin:
My experience in mHealth suggests that, certainly, funding is a major reason evaluations are not carried out more frequently. And I agree with Leo and Alvin.
Additional reasons can include:
1. Within many organizations it is permissible to evaluate their own systems, but not external programs.
2. Where a system is to be evaluated, it is not universally clear what a 'successful' system means, as there is no recognized set of criteria or measurements. We attempted to address this issue at a special session of the last mHealth Summit (and afterward), but as far as I know no conclusions were drawn. There should also be discussion about the difference between evaluating an application and evaluating an intervention or program.
3. My understanding is that the resources are there and the programs willing to be evaluated are there; it's a matter of matching and allocating real funding to the willing programs.

Joaquin Blaya, PhD Moderator Replied at 11:55 AM, 2 Nov 2011

Terry Hannan asked me to add this paper from Biondich and Mamlin at AMIA in 2006 as a source of criteria for evaluation.

Attached resource:

Isabel Cristina Lobos Medina Replied at 10:23 AM, 4 Nov 2011

I'm very interested in the evaluation of eHealth systems and projects... thanks for this opportunity...

A/Prof. Terry HANNAN Moderator Replied at 5:14 PM, 26 Jan 2012

Bellagio eHealth Evaluation Principles: the nine (9) recommendations in this document should be posted on developers' and implementers' walls. Note the occurrences of "data" (clinical) and "evaluation": they appear in all nine recommendations. As Bill Tierney states, to "improve care we have to measure what we do and you cannot do this without the appropriate accurate data". This component is often missing from e-health projects, whether in developing or developed economies. Terry Hannan

Anna E. Schmaus Replied at 11:27 AM, 9 Mar 2012

An eHealth project has existed in Mongolia since 2008. Many doctors from various disciplines are working with the web-based telemedicine platform CampusMedicus in Mongolia. Around 14,000 babies were screened (hips) during the last two years. Diagnoses are exchanged between pathologists and surgeons. It is used for teaching cervical cancer diagnosis. Now my question is, can anybody tell me how to evaluate this project in Mongolia? It is not a question of not enough traffic. It is a question of "how to do it". We will be very happy to do the evaluation, and we will be happy to share the results with you.

Joaquin Blaya, PhD Moderator Replied at 4:40 PM, 10 Mar 2012

Hi Anna,
That's a really complex question, because the answer depends on what you want to measure and how. I'm attaching a paper about different evaluation methodologies, and I've asked other members whether there's more material available.

Joaquín

Attached resource:

A/Prof. Terry HANNAN Moderator Replied at 6:54 PM, 10 Mar 2012

This discussion provides interesting 'food for thought'. It reminds me of the established principle (I think from Bill Tierney) that "to improve care you have to be able to measure it". This is independent of the care environment(s) in which the care is delivered (developing or developed economies). The data is best captured AT the care interface, and this then provides (with the appropriate HIT infrastructure model) the data for the measurement of direct care and allied care (administration, research, government planning). This conceptual and practical model was learnt in the early years of e-Health development and was documented to be a valid method for care (health) evaluation. The 'maturation' of these HIT systems, confirming that this is the most appropriate way to evaluate health, was shown in the full issue of IJMI Vol 54, 1999 and by B.I. Blum in "Clinical Information Systems" (Springer Verlag, 1991). Recent documentation from the Dartmouth Hitchcock project, e.g. on variation in care [James Wennberg], also provides evidence for this core principle. I also believe that this is the approach we took in the early days of AMPATH (after MMRS) in Kenya, when we had very limited resources. Terry Hannan

Shelly Batra, MD Replied at 11:01 AM, 11 Mar 2012

An important reason for evaluation not being done is that the truth will be revealed. It is easier to talk of processes rather than outcome metrics, and also better to show donors where the money has gone. Very few non-profits believe in transparency, and the reason is that their overheads are high, and administrative and development costs have far exceeded stipulated norms. But health is one area where we can have a measurable impact. Finally, one needs randomised controlled trials of best practices (which is what is being done in my organisation, Operation ASHA, by the MIT Poverty Action Lab), and then best practices need to be disseminated all over the world for best impact.

Hamish Fraser, MBChB, MRCP, MSc Moderator Emeritus Replied at 10:10 PM, 11 Mar 2012

Hi Anna
It is exciting to hear this type of large project being proposed for evaluation.

There are several recommendations I would make in terms of designing evaluations for this.

Firstly, you don't have to start big; evaluating any aspect of the system or its use can be helpful. It is very important to understand what is being done, how the system is used, and how well it works in terms of the design and specifications. That data can be of immediate help in improving the system. It is also valuable in designing and interpreting evaluations, including impact studies, allowing the recognition of sites that use the system regularly and effectively, where positive impact is most likely, and vice versa.
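
As a small illustration of that kind of use monitoring, here is a hedged Python sketch that flags sites whose weekly form entry falls below a target; the log format, site names, and threshold are assumptions invented for this example rather than features of any system described above.

    # Sketch: flag sites whose weekly data entry falls below a target,
    # a simple proxy for regular and effective system use (illustrative).
    from collections import defaultdict
    from datetime import date

    # (site, date a form was entered) -- a stand-in for a real audit log
    log = [
        ("clinic_1", date(2011, 10, 3)), ("clinic_1", date(2011, 10, 4)),
        ("clinic_2", date(2011, 10, 3)),
    ]

    entries = defaultdict(int)
    for site, day in log:
        week = day.isocalendar()[1]      # ISO week number
        entries[(site, week)] += 1

    TARGET_PER_WEEK = 2                  # assumed service target
    for (site, week), n in sorted(entries.items()):
        status = "ok" if n >= TARGET_PER_WEEK else "LOW - follow up"
        print(f"{site}, week {week}: {n} forms ({status})")

Run against a real audit log, the same idea identifies the regular, effective users before an impact estimate is interpreted, and flags struggling sites for support.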

Secondly, measuring clinical impact requires the creation or identification of indicators that are likely to enable measurement of the system's benefits. Those could be accuracy of diagnosis, recommendations for treatment plans, or, in larger and more advanced studies, clinical outcomes like reduced morbidity and mortality. The main issue in this process of indicator selection is careful analysis of current healthcare delivery and quality, and recognition of gaps and weaknesses where telemedicine may be helpful. For example, what is the accuracy of diagnosis of cervical cancer, how variable is it, and which sites have the greatest problems and gaps? That requires assessing the true diagnostic accuracy (gold standard) by follow-up and perhaps on-site clinical assessment by the experts. For impact studies, RCTs are the design most likely to help answer the question. Stepped-wedge designs, with phased rollout to sites, can be an effective approach. I would look at Friedman and Wyatt for advice on study designs, potential biases, and how to interpret the results.
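
For readers unfamiliar with the stepped-wedge design mentioned above, the Python sketch below prints a hypothetical rollout schedule in which every site starts as a control (C) and crosses over to the intervention (I) in a randomized wave; the site names and number of periods are invented for illustration.

    # Stepped-wedge schedule: sites cross from control (C) to
    # intervention (I) in randomized waves, one wave per period
    # after a shared baseline period. Illustrative only.
    import random

    sites = ["site_A", "site_B", "site_C", "site_D"]
    random.seed(42)        # fixed seed so the example is reproducible
    random.shuffle(sites)  # randomize the order in which sites cross over

    n_periods = len(sites) + 1  # baseline plus one crossover step per site
    print("period: ", *range(n_periods))
    for wave, site in enumerate(sites, start=1):
        # The wave-k site is a control before period k, intervention after.
        row = ["C" if period < wave else "I" for period in range(n_periods)]
        print(f"{site}:", *row)

Because every site eventually receives the system, this design is often more acceptable to implementers than a parallel-arm trial, while the staggered crossover still provides contemporaneous controls.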

Thirdly, it is valuable to record the costs of setting up and running the systems, including hardware, software, connectivity, staffing, training, and, in the case of telemedicine, the time and travel requirements of patients, local clinicians, and distant specialists. There is a real risk in these types of projects that the pilot works well, with goodwill, enthusiasm, and new systems, but as scale-up occurs and it becomes a routine task, it gets more difficult to sustain a good service.
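
As a trivial worked example of that cost bookkeeping, the Python sketch below totals invented setup and recurring costs for a telemedicine service and derives a cost per consultation; every figure and category here is an assumption for illustration, not a real estimate.

    # Sketch: total cost of ownership and cost per consultation for a
    # hypothetical telemedicine service. All figures are invented.
    setup_costs = {"hardware": 20000, "software": 5000, "training": 8000}
    annual_costs = {"connectivity": 3600, "staffing": 12000,
                    "maintenance": 2000}

    YEARS = 3              # assumed evaluation horizon
    consultations = 4500   # assumed consultations over that horizon

    total = sum(setup_costs.values()) + YEARS * sum(annual_costs.values())
    print(f"total cost over {YEARS} years: ${total:,}")
    print(f"cost per consultation: ${total / consultations:,.2f}")

Tracking patient and clinician travel time in the same way makes it possible to see whether the pilot's economics survive routine, scaled-up use.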

I think it is true that many organizations are reluctant to be evaluated for fear of problems being identified, but often the real obstacles are a lack of knowledge about available techniques, a lack of recognition internally of the value of the process, and, to some extent, a lack of funds. Driving evaluation designs from the needs of local stakeholders is important. Also, building in formative evaluation and monitoring of use early on can catch problems at the beginning, before they scale and become really embarrassing :- )
Regards

Hamish

This Community is Archived.

This community is no longer active as of December 2018. Thanks to those who posted here and made this information available to others visiting the site.