e-Learning Ecologies MOOC’s Updates

Optional Update #4: Automated Writing Evaluation

Automated Writing Evaluation (AWE) involves the use of computer technology to automatically evaluate and score written work. Development of this technology is “informed by educational measurement, computer science, and linguistics, as well as cognitive science and pedagogy”; accordingly, “psychometric evaluations of reliability and validity, considerations of intelligent operational systems and their functionality, and models that reflect thought processes and factors considered to be most beneficial for learners” have all contributed to its development (Cotos, 2014, p. 40).

 

Early AWE technologies evolved from automated essay scoring (AES) systems, which “employ a suite of techniques that underpin automated analysis of writing, generally combining statistics, natural language processing (NLP), artificial intelligence (AI), and machine learning” in order to assess written responses by examining “grammar, syntactic complexity, mechanics, style, topical content, content development, deviance, and so on” (Cotos, 2014, pp. 40-41).
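To make the idea concrete, here is a minimal, purely illustrative sketch of the statistical side of AES: extracting simple surface features of an essay and combining them in a linear score. The features, weights, and clipping to a 1-6 scale are hypothetical and do not reflect any real scoring engine.

```python
import re

def extract_features(essay: str) -> dict:
    """Compute simple surface features of the kind cited in AES literature."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Type-token ratio as a crude proxy for vocabulary variety.
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def score(essay: str, weights: dict, bias: float = 1.0) -> float:
    """Linear combination of features, clipped to a hypothetical 1-6 holistic scale."""
    feats = extract_features(essay)
    raw = bias + sum(weights.get(name, 0.0) * value for name, value in feats.items())
    return max(1.0, min(6.0, raw))
```

In a real system the weights would be learned from a corpus of human-scored essays, and the feature set would include far richer NLP-derived measures of syntax and content.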

 

Later, such technologies could grade essays on subjects such as science, social studies, history, and business, providing feedback on “the form-related aspects of grammar, style, and mechanics” and were able “to detect plagiarism and deviance in essays” (Cotos, 2014, p. 41).

 

AWE systems such as the widely used Criterion employ e-rater and Critique applications to automatically score students’ work holistically on a 6-point scale and to provide instant feedback in each of five categories: grammar, organization & development, usage, mechanics, and style (Lee, 2020).

 

The benefit of this technology is that a large number of students can be assessed very quickly, and the immediate feedback “allows them to revise their essay while it is still fresh in their mind” (Matsumura et al., 2020). This is attractive to teachers who want to provide formative feedback throughout the course but do not have the time to read several drafts and give individualized feedback to all students. Teachers reported that use of this technology would enable them to increase the number of assignments that require revising across drafts, an important process in developing writing skills (Matsumura et al., 2020). Additionally, the ability of AWE technologies to detect plagiarism makes identifying these issues much more efficient than if a teacher had to look up suspect passages manually.
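One simple basis for the kind of plagiarism screening mentioned above is word n-gram overlap between a submission and a source text. The sketch below is an assumption-laden toy, not how any production detector works; real systems use much larger source indexes and more robust matching.

```python
def word_ngrams(text: str, n: int = 3) -> set:
    """Return the set of lowercased word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 3) -> float:
    """Fraction of the submission's n-grams that also appear in the source.

    A score near 1.0 suggests heavy reuse of the source's wording; a teacher
    (or system) would then inspect the flagged passages directly.
    """
    sub_grams = word_ngrams(submission, n)
    src_grams = word_ngrams(source, n)
    return len(sub_grams & src_grams) / max(len(sub_grams), 1)
```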

 

Teachers asked to give feedback on the tool eRevise reported that they would like the automated messages in eRevise’s feedback to align with state- and district-mandated writing standards; that the AWE be compatible with multiple source texts, drawn from a wider selection; and that it ideally be able to assess students’ ability to synthesize information across multiple texts (Matsumura et al., 2020). Teachers also expressed a need for the AWE to point out to students where in the essay a given automated feedback response applies, so that students could better understand and act on the feedback: “[p]roviding localized guidance to students also would more closely mirror how teachers comment on students’ essays, for example, circling or underlining parts of essays that need improvement….” (Matsumura et al., 2020).
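Localizing feedback amounts to attaching each automated message to the span of text it applies to, much as a teacher underlines a passage. The sketch below shows one hypothetical way to represent this; the passive-voice rule and message text are invented for illustration and are not taken from eRevise or any other system.

```python
import re
from dataclasses import dataclass

@dataclass
class LocalizedFeedback:
    start: int    # character offset where the flagged span begins
    end: int      # character offset where the flagged span ends
    message: str  # the automated feedback message for that span

def flag_passive_marker(essay: str) -> list:
    """Flag a crude passive-voice cue ('was' + a word ending in 'ed')."""
    feedback = []
    for match in re.finditer(r"\bwas \w+ed\b", essay):
        feedback.append(LocalizedFeedback(
            match.start(), match.end(),
            "Consider rewriting this passive construction in the active voice."))
    return feedback
```

A front end could then highlight `essay[item.start:item.end]` for each returned item, placing the message next to the exact passage it concerns.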

 

Teachers also reported that they would prefer to “partner with AWE systems rather than fully off-load the task of commenting on students’ essays”; ideally, the teacher should have the option to add their own comments to the automated feedback, particularly if a student needs help understanding the meaning or purpose of an automated response (Matsumura et al., 2020).

 

To conclude, AWE systems are useful and increasingly powerful tools that can help teachers offer more opportunities for formative feedback throughout the semester than they could provide on their own. Further capabilities, such as the option for instructors to add personalized comments, support for more source texts and for evaluating students’ ability to synthesize information across texts, and localized automated feedback within students’ work, would improve the quality of automated formative assessment.

 

References:
Cotos, E. (2014). Automated Writing Evaluation. In: Genre-Based Automated Writing Evaluation for L2 Research Writing. Palgrave Macmillan, London. https://doi.org/10.1057/9781137333377_3

 

McNamara, D. & Kendeou, P. (2022). The early automated writing evaluation (eAWE) framework, Assessment in Education: Principles, Policy & Practice, 29:2, 150-182, DOI: 10.1080/0969594X.2022.2037509

 

Lee, Y. J. (2020). The Long-Term Effect of Automated Writing Evaluation Feedback on Writing Development. English Teaching, 75(1), 67-92, https://doi.org/10.15858/engtea.75.1.202003.67

 

Matsumura, L. C., Wang, E., Correnti, R., & Litman, D. (2020, July 22). What do teachers want to see in automated writing evaluation systems? eSchool News.

https://www.eschoolnews.com/2020/07/22/what-do-teachers-want-to-see-in-automated-writing-evaluation-systems/2/

Fahad alHarth