A new study suggests there is little, if any, difference in reliability and accuracy between computer-based scoring of student essays and scoring by trained human readers.
That may be good news for those who believe essays are an essential component of state testing systems: the potential cost savings could encourage more states to adopt such test items as a counterweight to multiple-choice questions.
"The demonstration showed conclusively that automated essay-scoring systems are fast, accurate, and cost-effective," said Tom Vander Ark, the chief executive officer of Open Education Solutions, and a co-director of the study, in a press release. (Vander Ark is also a former top education official at the Bill & Melinda Gates Foundation.)
Essential to Common Core Assessments
The study comes as two state testing consortia are working to develop new assessment systems pegged to the Common Core State Standards in reading and mathematics. In fact, the two consortia, the Partnership for Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium, are supporting the Hewlett Foundation-sponsored effort, and three PARCC states and three SMARTER Balanced states supplied student essays for the current study.
"The results demonstrated that overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre," says the study, co-authored by Mark Shermis, the dean of the University of Akron's college of education, and Ben Hammer of Kaggle, a private firm that provides a platform for predictive modeling and analytics competitions.