NCJ Number
70982
Journal
Evaluation Review Volume: 4 Issue: 4 Dated: (August 1980) Pages: 481-506
Date Published
1980
Length
26 pages
Annotation
Ways of analyzing data for magnitude of effects, attribution of causality, and statistical reliability to increase the precision of evaluation are explored, using as an example the evaluation of the Career Intern Program (CIP) for high-risk, low-income youth.
Abstract
Although the evaluation showed the CIP to be successful, further examination of the results found that some differences between CIP students and the control group were not statistically significant, others were too large to be credible, and still others were significant but troubling. Therefore, five questions concerning the magnitude of effects and statistical reliability were applied to the data. For example, the 'p' value for reading achievement was only marginally significant (.05), despite the program's strong emphasis on remedial reading and math. However, observation of students' behavior during pretesting and posttesting and analysis of the test results showed that at posttesting 80 percent of the CIP students did not complete the test on time, compared with 50 percent of the control group; almost all of the CIP students' answers were correct, while the control students' papers suggested guessing. With a correction for guessing, the CIP students' scores increased from pretest to posttest, while the control students' scores decreased. Corroborating a change in response style was the CIP students' performance on the Raven's test, a measure of basic reasoning ability in which accuracy, rather than speed, counts. Examples of interpretation failure, tabular data, and eight references are included.
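The abstract does not specify which correction for guessing the evaluators applied. As a minimal sketch, assuming the standard formula-scoring correction R - W/(k-1) for k-option multiple-choice items (the function name and the example figures below are hypothetical, not from the article), the adjustment penalizes wrong answers so that random guessing yields an expected gain of zero:

def corrected_score(num_right: int, num_wrong: int, num_options: int) -> float:
    # Standard formula-scoring correction: R - W/(k-1).
    # Omitted (blank) items are neither rewarded nor penalized.
    return num_right - num_wrong / (num_options - 1)

# Hypothetical example with 4-option items:
# a slow but accurate paper -- few attempts, nearly all correct
print(corrected_score(30, 2, 4))   # 29.33
# a completed paper with heavy guessing -- many attempts, many wrong
print(corrected_score(32, 18, 4))  # 26.0

Under such a correction, a paper left partly blank but answered accurately (the CIP pattern described above) loses little, while a fully completed paper containing many guessed, wrong answers (the control-group pattern) is marked down.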