NCJ Number
236811
Journal
Polygraph Volume: 40 Issue: 3 Dated: 2011 Pages: 172-179
Date Published
2011
Length
8 pages
Annotation
Using archival data, this study investigated the criterion accuracy of ESS scores with the USAF-MGQT format, which is commonly used for multiple-facet diagnostic PDD testing and multiple-issue screening exams.
Abstract
Two inexperienced examiners and one experienced examiner completed blind scoring tasks on an archival sample of confirmed field cases from the Department of Defense confirmed case archive. Sample cases were also scored with an automated version of the ESS and with the OSS-3 computer algorithm. Overall, unweighted decision accuracy for manual ESS scores was 88.2 percent, with 18.3 percent inconclusive results. Decision accuracy for the automated ESS model was 89.7 percent, with 15.4 percent inconclusive results. The OSS-3 computer algorithm produced 90.2 percent correct decisions, with 1.0 percent inconclusive results. Pearson correlations were strong among the scores of the study participants (r = .931) and between the manual ESS and automated ESS scores (r = .938). Pair-wise decision agreement among the examiners was 80.4 percent when inconclusives were included and perfect when they were excluded. Pair-wise agreement between the ESS and OSS-3 models was also perfect in this small-scale study. Multivariate analysis showed no significant main effects and no significant interaction effects between the mean total scores of the manual and automated ESS models. The authors recommend continued interest in the USAF-MGQT format, the ESS in both manual and automated models, and the OSS-3 algorithm. (Published Abstract)