
ReaderBench Model 1

General Description

Model 1 has been replaced by the greatly simplified Model 2, which better handles multi-paragraph compositions. Model 2 is recommended for current use.

ReaderBench Model 1 is an ensemble (formed by averaging predicted quality scores) of the following six sub-models:

Full details of each sub-model are available in the links above.
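
The ensemble prediction is simply the mean of the six sub-models' predicted quality scores for each writing sample. The sketch below illustrates that averaging step only; the sub-model names and score values are hypothetical placeholders, not actual writeAlizer output.

```python
import numpy as np

# Hypothetical predicted quality scores from six sub-models for three
# writing samples (placeholder values, not real writeAlizer output).
sub_model_predictions = {
    "sub_model_1": np.array([512.3, 478.9, 601.2]),
    "sub_model_2": np.array([508.7, 482.1, 598.4]),
    "sub_model_3": np.array([515.0, 475.6, 605.9]),
    "sub_model_4": np.array([510.2, 480.3, 600.1]),
    "sub_model_5": np.array([507.8, 479.5, 602.7]),
    "sub_model_6": np.array([513.4, 481.0, 599.6]),
}

# Ensemble score for each sample = mean of the six sub-model predictions.
ensemble_scores = np.mean(list(sub_model_predictions.values()), axis=0)
print(ensemble_scores)
```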

All of the sub-models used ReaderBench scores from 7-minute narrative writing samples ("I once had a magic pencil and ...") collected from students in the fall, winter, and spring of Grades 2-5 (Mercer et al., 2019) to predict holistic writing quality on the samples (Elo ratings calculated from paired comparisons).
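
The Elo-based quality criterion can be illustrated with a standard Elo update applied to paired comparisons of writing samples. The sketch below is a generic illustration only; the K-factor, starting ratings, and comparison data are hypothetical and are not the values or procedure reported in Mercer et al. (2019).

```python
def expected_win_prob(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for sample A when compared against sample B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Update ratings in place after one paired comparison (winner judged higher quality)."""
    exp_win = expected_win_prob(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - exp_win)
    ratings[loser] -= k * (1.0 - exp_win)

# Hypothetical paired-comparison judgments among three writing samples.
ratings = {"sample_1": 1000.0, "sample_2": 1000.0, "sample_3": 1000.0}
judgments = [("sample_1", "sample_2"), ("sample_3", "sample_2"), ("sample_1", "sample_3")]
for winner, loser in judgments:
    update_elo(ratings, winner, loser)
print(ratings)
```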

More details on the sample are available in Mercer et al. (2019).

Mercer, S. H., Keller-Margulis, M. A., Faith, E. L., Reid, E. K., & Ochs, S. (2019). The potential for automated text evaluation to improve the technical adequacy of written expression curriculum-based measurement. Learning Disability Quarterly, 42, 117-128. https://doi.org/10.1177/0731948718803296

This scoring model was evaluated in the following publications:

Matta, M., Mercer, S. H., & Keller-Margulis, M. A. (2022). Evaluating validity and bias for hand-calculated and automated written expression curriculum-based measurement scores. Assessment in Education: Principles, Policy & Practice, 29, 200-218. https://doi.org/10.1080/0969594X.2022.2043240

Mercer, S. H., & Cannon, J. E. (2022). Validity of automated learning progress assessment in English written expression for students with learning difficulties. Journal for Educational Research Online, 14, 39-60. https://doi.org/10.31244/jero.2022.01.03

Keller-Margulis, M. A., Mercer, S. H., & Matta, M. (2021). Validity of automated text evaluation tools for written-expression curriculum-based measurement: A comparison study. Reading and Writing: An Interdisciplinary Journal, 34, 2461-2480. https://doi.org/10.1007/s11145-021-10153-6
Link to pre-print of accepted article