Improving Written-Expression Curriculum-Based Measurement Feasibility with Automated Text Evaluation Programs (OS048)

  • PMD: 1
  • Skill Level: Intermediate
  • Session 6: Recent Developments in Assessment and Research
  • Live Chat:
    (Posters available on demand at any time during the convention week and for sixty days thereafter.)

Learner Objectives

This session will help participants…

  1. identify the critical characteristics of universal screening for writing in data-based decision making, as outlined in the NASP Practice Model.
  2. use new, more feasible methods for scoring written production in the classroom setting.
  3. recognize the influence of several factors (e.g., genre, duration, grade) on scoring models and the calculation of performance scores.

Description

In this study, we examined the use of automated text evaluation to predict Written-Expression Curriculum-Based Measurement (WE-CBM) scores. A sample of 145 elementary students completed nine 7-minute WE-CBM tasks. Writing samples were hand-scored for WE-CBM metrics and processed through automated text evaluation programs. Results demonstrated strong criterion validity for predicted correct word sequences (CWS) and moderate criterion validity for predicted correct minus incorrect word sequences (CIWS). Researchers and practitioners will gain insight into the use of automated programs for universal screening of writing.

_____________________________

Presenter(s)

Michael Matta, University of Houston
Milena A. Keller-Margulis, University of Houston
Sterett H. Mercer, The University of British Columbia
Katherine L. Zopatti, University of Houston

Letter to Supervisor

Looking for a concise way to pitch the convention to your employer? This letter and its corresponding talking points are here to help!

Preliminary Brochure

Share our NASP 2021 preliminary brochure with your supervisor, along with the letter above.