Although in 2001 overall student proficiency on the NCAAP was based on the number of domains scored 3 or 4, in 2002 overall proficiency required a total of at least 17 of the 32 possible points.
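The two scoring rules can be sketched as simple threshold checks. This is a hypothetical illustration, not the actual NCAAP scoring procedure: the text gives only the 2002 threshold (17 of 32 points), so the number of domains (8, scored 1-4, which would yield 32 possible points) and the 2001 required number of proficient domains are assumptions.

```python
# Hypothetical sketch of the two proficiency rules described above.
# Assumptions not stated in the text: 8 domains, each scored 1-4
# (8 x 4 = 32 possible points); the number of domains required under
# the 2001 rule is left as a parameter.

def proficient_2001(domain_scores, required_domains):
    """2001 rule: proficiency based on how many domains scored 3 or 4."""
    return sum(1 for s in domain_scores if s >= 3) >= required_domains

def proficient_2002(domain_scores):
    """2002 rule: proficiency requires at least 17 of the 32 possible points."""
    return sum(domain_scores) >= 17

scores = [3, 2, 4, 1, 3, 2, 2, 2]  # 8 hypothetical domain scores (total = 19)
print(proficient_2001(scores, required_domains=5))  # only 3 domains at 3+ -> False
print(proficient_2002(scores))                      # 19 >= 17 -> True
```

Note how the same student can be proficient under one rule but not the other, which is why the 2001 and 2002 results are reported separately.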
Two critical issues to be addressed in using the NCAAP scores as the primary dependent variable were (a) whether the data the teachers collected for the portfolios were reliable, and (b) whether the mastery criteria the teachers set had been set low enough to make a proficient score easy to attain.
Because no data on the technical quality of the NCAAP were available for the project years, these behavioral observation data were also used as criterion-related validity evidence for the NCAAP scores.
The final measure was used to determine whether teachers and parents considered the gains made on the IEP objectives selected for NCAAP documentation to be important.
Prior to inferential analyses, descriptive statistics were computed to examine differences in NCAAP scores before and after the intervention.
Table 3 provides the statistics for domain proficiency on the NCAAP for 2001 and 2002.
Although implementation records were not sufficiently detailed to allow a sophisticated analysis of the relationship between varying degrees of teacher implementation and student outcomes, implementation of the components (coded as a dichotomous variable) was compared with students' NCAAP proficiency ratings.
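The dichotomous comparison described above amounts to a simple cross-tabulation of implementation status against proficiency. The sketch below uses hypothetical records, not the study's actual counts.

```python
# Hypothetical sketch of the dichotomous comparison: cross-tabulating
# full vs. partial implementation with NCAAP proficiency.
# The records below are illustrative only, not the study's data.

from collections import Counter

# Each record: (implemented_all_components, scored_proficient)
records = [
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]

table = Counter(records)
for implemented in (True, False):
    proficient = table[(implemented, True)]
    total = proficient + table[(implemented, False)]
    label = "full" if implemented else "partial"
    print(f"{label} implementation: {proficient}/{total} proficient "
          f"({proficient / total:.0%})")
```

With richer implementation records, the same layout would support an inferential test of association; here it yields only the descriptive contrast the authors report.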
The students who had negative growth rates did not score as proficient on the NCAAP.
As an indicator of the trustworthiness of NCAAP scores for students in the experimental group, percentage of growth on IEP goals was used as criterion-related validity evidence.
Substantially more of the students in the experimental group scored as proficient or above on the NCAAP at the end of the intervention year as compared to the prior year.
This study was conducted when the requirements, contents, and scoring methods of the NCAAP were being revised to improve its technical quality and meet NCLB requirements.