This article presents guiding principles for technology evaluations to assist in identifying and defining key study metrics, facilitating communication within an interdisciplinary research team, and understanding the interaction between users, technology, and information.
The approach posited here can also enable researchers to better assess factors that may facilitate or degrade the operational impact of a technology and to answer fundamental questions about whether the technology works as intended, at what level, and at what cost. Evaluations are routinely conducted by government agencies and research organizations to assess the effectiveness of technology in criminal justice, and interdisciplinary research methods are salient to this effort. Technology evaluations face a number of challenges, including (1) the need to facilitate effective communication among social science researchers, technology specialists, and practitioners; (2) the need to better understand the procedural and contextual aspects of a given technology; and (3) the need to generate findings that can be readily used for decision making and policy recommendations. Process and outcome evaluations of technology can be enhanced by integrating concepts from human factors engineering and information processing. This systemic approach, which focuses on the interaction between humans, technology, and information, enables researchers to better assess how a given technology is used in practice. Examples are drawn from complex technologies currently deployed within the criminal justice system, where traditional evaluations have focused primarily on outcome metrics. Although this evidence-based approach has significant value, it may fail to fully account for the human and structural complexities that shape technology operations. (Publisher abstract modified)