In SCED, it is not as straightforward as in group designs to choose one universally accepted effect size measure, given that there is currently no consensus on the matter (Kratochwill et al., 2010; Smith, 2012).
We stick with NAP in our illustrations, as it is a commonly used and respected measure in the SCED context, which has also been shown to perform well in certain conditions (Manolov, Solanas, Sierra, & Evans, 2011).
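The Nonoverlap of All Pairs measure referenced here can be illustrated with a minimal sketch: NAP is the proportion of all baseline/intervention data-point pairs in which the intervention point exceeds the baseline point, with ties counted as half. The phase data below are hypothetical, not taken from any cited study.

```python
def nap(baseline, intervention):
    """Nonoverlap of All Pairs: proportion of (baseline, intervention)
    pairs in which the intervention point is higher, ties counted as 0.5.
    Returns a value in [0, 1]."""
    pairs = [(a, b) for a in baseline for b in intervention]
    improved = sum(1.0 if b > a else 0.5 if b == a else 0.0
                   for a, b in pairs)
    return improved / len(pairs)

# Hypothetical AB-design data: one score per session.
A = [2, 3, 2, 4]        # baseline phase
B = [5, 6, 4, 7, 6]     # intervention phase
print(nap(A, B))
```

With complete separation of the two phases NAP equals 1.0, with complete overlap it tends toward 0.5 (chance-level nonoverlap), and values below 0.5 indicate deterioration.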
However, if these benchmarks are to be used in the SCED context, methodologists may need to justify their appropriateness considering data characteristics such as likely nonnormality and serial dependence.
In that sense, following the across studies approach for 131 school psychology SCED studies, Solomon, Howard, and Stein (2015) provided interpretative benchmarks for several SCED analytical techniques on the basis of quartiles (1), obtaining Tau-U quartiles/benchmarks that are 0.2 lower than the quartiles reported by Parker et al.
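The scale difference between Tau-U and NAP benchmarks noted above can be made concrete with the simple, non-trend-corrected Tau nonoverlap index (Tau-U proper additionally corrects for baseline trend, which this sketch deliberately omits): the difference between improving and deteriorating pairs over all pairs. For this simple variant, Tau = 2·NAP − 1, so Tau values naturally sit lower on their scale than the corresponding NAP values. Data are hypothetical.

```python
def tau(baseline, intervention):
    """Simple Tau nonoverlap index (no baseline-trend correction):
    (improving pairs - deteriorating pairs) / all pairs, in [-1, 1]."""
    pos = sum(b > a for a in baseline for b in intervention)
    neg = sum(b < a for a in baseline for b in intervention)
    return (pos - neg) / (len(baseline) * len(intervention))

# Hypothetical AB-design data.
A = [2, 3, 2, 4]
B = [5, 6, 4, 7, 6]
print(tau(A, B))   # equals 2 * NAP - 1 for the same data
```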
When an analytical technique is developed or adapted for SCED data analysis, its proponents may, in certain cases, suggest interpretative benchmarks.
Moreover, SCED researchers are used to performing this kind of analysis (Parker & Brossart, 2003) and some even use it as a gold standard (Petersen-Brown et al., 2012; Wolery, Busick, Reichow, & Barton, 2010).
This in-depth knowledge is one of the strengths of SCED and should not be omitted from the assessment of intervention effectiveness.
Once such large-scale data are available, the raw scores (or the summary measure) obtained by the participant(s) in a SCED can be compared to the cut-off points.
(2011); 2) consult the effectiveness categories: we performed a preliminary study (3), following the steps for methodologists presented above, on all 38 participants from the SCED studies included in Jamieson et al.'s (2014) meta-analysis, using the NAP values they computed, and we obtained the following values for percentiles 25, 50, and 75 for the different effectiveness categories: no effect (.50, .59, .67), small effect (.75, .81, .87), moderate effect (.81, .97, 1.00), large effect (.84, .91, 1.00).
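The benchmark-building step described above can be sketched as follows: derive the 25th, 50th, and 75th percentiles from a pool of NAP values across studies, then locate a new participant's NAP value among those cut-off points. The pool below is hypothetical and the quartile-band labels are illustrative, not the effectiveness categories reported in the excerpt.

```python
from statistics import quantiles

def benchmarks(nap_values):
    """Quartile cut-offs (P25, P50, P75) from a pool of NAP values."""
    return quantiles(nap_values, n=4, method="inclusive")

def interpret(nap_value, cuts):
    """Label a participant's NAP value relative to the quartile cut-offs."""
    p25, p50, p75 = cuts
    if nap_value < p25:
        return "below P25"
    if nap_value < p50:
        return "P25-P50"
    if nap_value < p75:
        return "P50-P75"
    return "P75 or above"

# Hypothetical pool of NAP values gathered across published SCED studies.
pool = [0.50, 0.59, 0.67, 0.75, 0.81, 0.87, 0.91, 0.97, 1.00]
cuts = benchmarks(pool)
print(cuts, interpret(0.84, cuts))
```

`method="inclusive"` treats the pool as the full population of interest; with a sample from a larger population of studies, the default exclusive method may be preferable.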
Nevertheless, despite the fact that multiple criteria plus client and context information may not always be used when establishing effectiveness categories, at least a combination of visual and statistical tools is apparently common among SCED researchers (Perdices & Tate, 2010).
This SCED included two participants with EBD and examined the effects of a schema instruction package on their problem-solving performance.
These effects are consistent with the relative effects of SCED studies implementing schema instruction with populations of students with disabilities (Rockwell, Griffin, & Jones, 2011; Xin, 2008).