In design of experiments, single-subject design or single-case research design is a research design most often used in applied fields of psychology, education, and human behaviour in which the subject serves as their own control, rather than being compared with another individual or group. Researchers use single-subject designs because they are sensitive to individual organism differences, whereas group designs are sensitive to group averages. The logic behind single-subject designs rests on three elements: 1) prediction, 2) verification, and 3) replication. The baseline data predict behaviour by affirming the consequent. Verification refers to demonstrating that baseline responding would have continued had no intervention been implemented. Replication occurs when a previously observed behaviour change is reproduced.[1] A study using single-subject design can include a large number of subjects; because each subject serves as their own control, it remains a single-subject design.[2] These designs are used primarily to evaluate the effect of a variety of interventions in applied research.[3]
Design standards
Effect size
Although there are no standards specifying which statistic must be used for effect size calculation, it is best practice to include an effect size estimate.[4]
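Though no particular statistic is mandated, one commonly reported nonoverlap-based effect size for single-case data is the Nonoverlap of All Pairs (NAP): the proportion of all baseline-intervention data-point pairs in which the intervention point shows improvement. A minimal sketch, using hypothetical session data (the phase values below are invented for illustration):

```python
from itertools import product

def nap(baseline, intervention, increase_is_improvement=True):
    """Nonoverlap of All Pairs (NAP): proportion of all baseline-intervention
    pairs in which the intervention point improves on the baseline point;
    ties count as half."""
    pairs = list(product(baseline, intervention))
    if increase_is_improvement:
        better = sum(b < t for b, t in pairs)
    else:
        better = sum(b > t for b, t in pairs)
    ties = sum(b == t for b, t in pairs)
    return (better + 0.5 * ties) / len(pairs)

# Hypothetical responses per session in each phase.
baseline = [2, 3, 2, 4]
treatment = [5, 6, 7, 6]
print(nap(baseline, treatment))  # 1.0: every treatment point exceeds every baseline point
```

A NAP of 1.0 indicates complete nonoverlap between phases; values near 0.5 indicate chance-level separation.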
Reporting standards
When reporting findings obtained through single-subject designs, specific guidelines, such as the Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE), are used for standardization and to ensure completeness and transparency.[5]
Types of single-subject designs
Reversal design
Reversal design involves repeated measurement of behaviour in a given setting during three consecutive phases (ABA): baseline, intervention, and return to baseline. Variations include extending the ABA design with repeated reversals (ABAB) and including multiple treatments (ABCABC). AB designs, which omit the return to baseline, are not considered experimental: functional control cannot be demonstrated because there is no replication.[1]
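The prediction-verification-replication logic of an ABAB reversal can be illustrated by comparing mean levels across phases: intervention effects appear as level changes in B1, verification as a return toward baseline in A2, and replication as a second change in B2. A minimal sketch with hypothetical session counts:

```python
# Hypothetical ABAB session data, listed in order of occurrence.
phases = [
    ("A1", [3, 4, 3, 4]),   # baseline
    ("B1", [8, 9, 9, 10]),  # intervention
    ("A2", [4, 3, 4, 3]),   # return to baseline (verification)
    ("B2", [9, 10, 9, 9]),  # reintroduction (replication)
]

for name, data in phases:
    mean = sum(data) / len(data)
    print(f"{name}: mean level = {mean:.2f}")
```

With these invented values, both intervention phases show elevated levels relative to both baseline phases, the pattern that supports an inference of functional control.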
Alternating treatments design
Alternating treatments design (ATD) compares the effects of two or more independent variables on the dependent variable. Variations include a no-treatment control condition and a final best-treatment verification phase.[1]
Multiple baseline design
Multiple baseline design involves beginning simultaneous baseline measurement on two or more behaviours, settings, or participants. The independent variable (IV) is then implemented for one behaviour, setting, or participant while baseline measurement continues for all others. Variations include the multiple probe design and the delayed multiple baseline design.[1]
Changing criterion design
Changing criterion designs are used to evaluate the effects of an IV on the gradual improvement of a behavior already in the participant's repertoire.[1]
Interpretation of data
In order to determine the effect of the independent variable on the dependent variable, the researcher graphs the data collected and visually inspects the differences between phases. If responding differs clearly between baseline and intervention, and returns to baseline trends and levels during reversal, a functional relation between the variables is inferred.[6] Sometimes, visual inspection of the data demonstrates results that statistical tests fail to find.[7][8] Features assessed during visual analysis include:[9]
- Level. The overall average (mean) of the outcome measures within a phase.
- Trend. The slope of the best-fitting straight line for the outcome measures within a phase.
- Variability. The range, variance, or standard deviation of the outcome measures about the best-fitting line.
- Immediacy of Effect. The change in level between the last three data points in one phase and the first three data points of the next.
- Overlap. The proportion of data from one phase that overlaps with data from the previous phase.
- Consistency of Data Patterns. The extent to which there is consistency in the data patterns from phases with the same conditions.
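Most of the features above can also be computed numerically to supplement visual inspection. A minimal Python sketch using only the standard library, with hypothetical phase data (the `A1` and `B` values are invented for illustration):

```python
import statistics

def _ols(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def phase_summary(phase):
    """Level (mean), trend (slope of best-fitting line), and variability
    (standard deviation) of the outcome measures within one phase."""
    x = list(range(len(phase)))
    level = statistics.mean(phase)
    slope, _ = _ols(x, phase)
    variability = statistics.pstdev(phase)
    return level, slope, variability

def immediacy(prev_phase, next_phase):
    """Change in level between the last three data points of one phase
    and the first three data points of the next."""
    return statistics.mean(next_phase[:3]) - statistics.mean(prev_phase[-3:])

def overlap(prev_phase, next_phase):
    """Proportion of next-phase points falling within the range of the
    previous phase."""
    lo, hi = min(prev_phase), max(prev_phase)
    return sum(lo <= p <= hi for p in next_phase) / len(next_phase)

# Hypothetical responses per session: baseline (A1) and intervention (B).
A1 = [2, 3, 2, 4, 3]
B = [6, 7, 8, 8, 9]
print(phase_summary(A1))  # (2.8, 0.3, 0.748...): level, trend, variability
print(immediacy(A1, B))   # 4.0: jump in level at the phase change
print(overlap(A1, B))     # 0.0: no intervention point falls in the baseline range
```

Consistency of data patterns has no single agreed formula; in practice it is judged by comparing these per-phase summaries across phases run under the same condition.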
Limitations
Research designs are traditionally preplanned, so that most details about to whom and when the intervention will be introduced are decided before the study begins. In single-subject designs, however, these decisions are often made as the data are collected.[10] In addition, there are no widely agreed-upon rules for altering phases, so disagreements can arise over how a single-subject experiment should be conducted.
The major criticisms of single-subject designs are:
- Carry-over effects: results from the previous phase carry over into the next phase.
- Order effects: the order (sequence) in which interventions or treatments are introduced affects the results.
- Irreversibility: in some withdrawal designs, once the independent variable has changed the dependent variable, the change cannot be undone by simply removing the independent variable.
- Ethical problems: Withdrawal of treatment in the withdrawal design can at times present ethical and feasibility problems.
History
Historically, single-subject designs have been closely tied to the experimental analysis of behavior and applied behavior analysis.[11]
See also
References
- ^ a b c d e Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Columbus, OH: Merrill Prentice Hall.
- ^ Cooper, J.O.; Heron, T.E.; Heward, W.L. (2007). Applied Behavior Analysis (2nd ed.). Prentice Hall. ISBN 978-0-13-142113-4.
- ^ Kazdin, p. 191
- ^ Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_scd.pdf.
- ^ Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W., . . . Vohra, S. (2016). The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and elaboration. Archives of Scientific Psychology, 4(1), 10-31. doi:10.1037/arc0000027
- ^ Backman, C.L. & Harris, S.R. (1999). Case Studies, Single-Subject Research, and N of 1 Randomized Trials. Comparisons and Contrasts. American Journal of Physical Medicine & Rehabilitation, 78(2), 170–6.
- ^ Bobrovitz, C.D. & Ottenbacher, K.J. (1998). Comparison of Visual Inspection and Statistical Analysis of Single-Subject Data in Rehabilitation Research. Journal of Engineering and Applied Science, 77(2), 94–102.
- ^ Nishith, P.; Hearst, D.E.; Mueser, K.T. & Foa, E. (1995). PTSD and Major Depression: Methodological and Treatment Considerations in a Single-Case Design. Behavior Therapy, 26(2), 297–9
- ^ Horner, R., Carr, E., Halle, J., McGee, G., Odom, S. L., & Wolery, M. (2005). The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education. Exceptional Children, 71, 165–179. doi:10.1177/001440290507100203
- ^ Kazdin, p. 284
- ^ Kazdin, p. 291
Further reading
- Kazdin, Alan (1982). Single-Case Research Designs. New York: Oxford University Press. ISBN 0-19-503021-4.
- Ledford, Jennifer R. & Gast, David L. (2018). Single Subject Research Methodology in Behavioral Sciences: Applications in Special Education and Behavioral Sciences. Routledge.