
Abstract

HONTS, C. R. and DEVITT, M. K. Bootstrap decision making for polygraph examinations. August 1992, Report No. DoDPI92-R-0002. Department of Defense Polygraph Institute, Ft. McClellan, AL 36205.

This study examined human numerical evaluation, discriminant analysis, and a bootstrap approach to decision making in the psychophysiological detection of deception with the control question test. The data for these analyses were obtained from the Utah Cooperative Working Group Database and consisted of 100 innocent and 100 guilty subjects from mock crime experiments. We found statistically equivalent performance for the three approaches. However, it should be noted that the human evaluators used in this study were not representative of the average polygraph examiner, and the human evaluation data reported here are likely to have substantially overestimated the accuracy of human numerical evaluation in the field. Taken in that context, the performance of the statistical classifiers should be viewed very favorably. In absolute terms, the bootstrap approach outperformed the other two approaches. Compared to discriminant analysis, the bootstrap has much to recommend it: it avoids the restrictive mathematical assumptions of discriminant analysis, and because it is not tied to any empirical standardization sample, it is likely to be widely generalizable. It was concluded that statistical decision making has come of age in the detection of deception and should see universal application in the field in the near future.

Key-words: psychophysiological detection of deception (PDD), control question test (CQT), human numerical evaluation, discriminant analysis, bootstrap
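To illustrate the kind of decision rule the abstract describes, the following Python sketch shows one plausible way a bootstrap classifier for a control question test could be structured. It is a minimal sketch under stated assumptions: the per-question difference scores, the resampling scheme, the decision thresholds, and all names (e.g., bootstrap_decision, diff_scores) are illustrative and do not come from the report itself.

```python
# Hypothetical sketch of a bootstrap decision rule for a CQT examination.
# Assumed input: per-question difference scores for one subject, where
# positive values indicate larger reactions to control questions.
# Thresholds and variable names are illustrative, not from the report.
import numpy as np

def bootstrap_decision(diff_scores, n_boot=1000, lower=0.30, upper=0.70, seed=0):
    """Resample the subject's difference scores with replacement and
    classify by the proportion of bootstrap means that fall above zero.

    Returns "truthful", "deceptive", or "inconclusive".
    """
    rng = np.random.default_rng(seed)
    diff_scores = np.asarray(diff_scores, dtype=float)
    n = diff_scores.size
    # Draw n_boot bootstrap samples, each the same size as the original data.
    samples = rng.choice(diff_scores, size=(n_boot, n), replace=True)
    boot_means = samples.mean(axis=1)
    # Proportion of bootstrap means indicating larger control-question reactions.
    p_truthful = np.mean(boot_means > 0.0)
    if p_truthful >= upper:
        return "truthful"
    if p_truthful <= lower:
        return "deceptive"
    return "inconclusive"

# Example: difference scores from a single hypothetical examination.
print(bootstrap_decision([1.2, 0.4, -0.3, 0.9, 0.6, 1.1]))
```

Because the rule is built from the subject's own data rather than from a standardization sample, a sketch like this mirrors the generalizability argument made in the abstract, though the actual procedure used in the study may differ in its details.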

Director's Foreword

Many approaches to computerized statistical analysis of polygraph charts have been explored, including discriminant analysis, logistic regression, artificial neural networks, and decision trees. In their review of the literature, Honts and Devitt found that all the statistical approaches produced about the same accuracy and were about as accurate as numerical scoring by humans.

This study examined the effectiveness of another statistical approach known as bootstrapping and compared it to discriminant analysis and numerical field scoring by humans. The findings are consistent with the previous literature: all three methods produced statistically equivalent accuracy. Despite this finding, the authors advocate, on theoretical grounds, the immediate and widespread adoption by the Federal government of their computerized bootstrapping method for polygraph decision making. The research findings do not support that undertaking.

Michael H. Capps

Director