Evaluation of electronic decision support systems

The National Electronic Decision Support Taskforce report "Electronic Decision Support for Australia's Health Sector" (published January 2003, available at http://www.ahic.org.au/downloads/nedsrept.pdf) identified the need for evaluation of electronic decision support systems (EDSS). In particular, it stressed the importance of promoting evaluation of the efficacy and effectiveness of EDSS as a matter of course, using rigorous and validated methodologies.

It would be difficult to propose a single evaluation methodology that meets the diverse needs of the EDSS community. Different user groups have different evaluation tasks and objectives, depending on factors such as the stage of system development and the intended goals of the system.

This web site provides an initial set of evaluation guidelines as a first step in promoting the evaluation of EDSS. Over time, further guidelines will be added, and it is hoped the set will become an evolving resource for the EDSS community.

Guideline development

The topics of these guidelines are based upon typical EDSS evaluation questions. These questions were identified during focus groups and individual interviews with those involved in the development, purchase and evaluation of EDSS. The Centre for Health Informatics, University of New South Wales, authored these guidelines based on experience, literature reviews and consultations with local and international experts in the field.

The intended audience for these guidelines is novices at EDSS evaluation, rather than experts. The aim of the guidelines is to raise understanding of each topic area, with pointers to useful journal references, books and web sites for those seeking more information. They are not intended to cover every aspect of each topic, but to stimulate thinking around key techniques and to foster an appreciation of the importance of evaluation.

Evaluation - an ongoing process

Øvretveit (1998) provides the following definition of evaluation:

"Evaluation is making a comparative assessment of the value of the evaluated or intervention, using systematically collected and analysed data, in order to decide how to act. Evaluation is attributing value to an intervention by gathering reliable and valid information about it in a systematic way, and by making comparisons, for the purposes of making more informed decisions or understanding causal mechanisms or general principles".

Ammenwerth et al. (2004) use the concept in the following sense:

"Evaluation is the act of measuring or exploring properties of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context".

While evaluation of an operational system is important, evaluation during system development is also a priority (Ammenwerth et al. 2004; Brender 1998; Miller 1996; Moehr 2002). Evaluation of EDSS is recognised as a global priority within the medical informatics community. Iterative cycles of design and evaluation at each stage in the development of an EDSS, with refinement based on the results of each evaluation, will lead to improvements in the quality and safety of these systems.

Miller (1996) points out that evaluation should be a core activity, undertaken not only when a system is developed, trialled in laboratory and then clinical settings, or implemented, but also as part of the ongoing maintenance of the system. Ideally, system evaluation should be an ongoing, strategically planned process, not a single event or a small number of episodes. Such a process ensures that if changes are made to the system (such as modification of a knowledge base, or an upgrade to the system's software), their impact is evaluated.

Using these guidelines

People, technologies (such as EDSS) and conversations interact as a complex "system" within the specific context in which health care occurs (Coiera, 2004). To design and evaluate an EDSS, all three components of this system must be understood. To design systems that take into account both social and technical influences, we must understand the interactions between these three key areas. These guidelines are intended to provide an understanding of how aspects of each of these areas can be evaluated. Each guideline provides an overview of its topic, with references to related guidelines. Guidelines on specific evaluation techniques (such as how to conduct a focus group) are included in the resources section. A glossary of terms is also provided.

Feedback

As with any guidelines, and particularly in an emerging area such as medical informatics, it is expected that refinement of evaluation techniques will occur over time.

We welcome your feedback on this set of guidelines and suggestions for topics for future guidelines. Please send your feedback or suggestions to [email protected].

References

Ammenwerth, E., Brender, J., Nykänen, P., Prokosch, H.-U., Rigby, M., and Talmon, J. (2004) Visions and strategies to improve evaluation of health information systems. Reflections and lessons learned based on the HIS-EVAL workshop in Innsbruck. International Journal of Medical Informatics, 73, 479-491.

Brender, J. (1998) Trends in assessment of IT-based solutions in healthcare and recommendations for the future. International Journal of Medical Informatics, 52, 217-227.

Coiera, E. (2004) Four rules for the reinvention of health care. British Medical Journal, 328, 1197-1199.

Miller, R.A. (1996) Evaluating evaluations of medical diagnostic systems. Journal of the American Medical Informatics Association, 3, 429-431.

Moehr, J.R. (2002) Evaluation: salvation or nemesis of medical informatics? Computers in Biology and Medicine, 32, 113-125.

Øvretveit, J. (1998) Evaluating Health Interventions. London: Open University Press, pp. 158-180.