
Automated writing evaluation (AWE)

Automated evaluation of writing is defined as ‘the ability of computer technology to evaluate and score written prose’ (Cotos, 2014). It is a computer-based assessment of essays that can produce scores automatically. The technology combines automated techniques such as statistics, natural language processing (NLP), artificial intelligence (AI), latent semantic analysis, and machine learning to assess an online written response. Responses are evaluated on grammar, syntactic complexity, mechanics, style, topical content, content development, deviance, and so on. These techniques are usually packaged as software, and such systems have been developed since as early as 1966.
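To make the scoring idea concrete, here is a minimal sketch in Python of a feature-based scoring pipeline. It is not a description of any real AWE product: the features are deliberately crude surface measures, and the weights are hypothetical placeholders rather than values fitted to human-rated essays, which is how actual systems are trained.

```python
import re
from statistics import mean

def extract_features(essay: str) -> dict:
    """Surface features of the kind simple AWE scoring models rely on."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    sent_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "word_count": len(words),
        "avg_sentence_length": mean(sent_lengths) if sent_lengths else 0.0,
        "avg_word_length": mean(len(w) for w in words) if words else 0.0,
        "type_token_ratio": len({w.lower() for w in words}) / len(words) if words else 0.0,
    }

def score_essay(essay: str, weights: dict, bias: float = 0.0) -> float:
    """Weighted sum of features -> holistic score, as in a linear scoring model."""
    features = extract_features(essay)
    return bias + sum(weights[name] * features[name] for name in weights)

# Hypothetical weights for illustration only; a real system would fit
# these against a corpus of human-rated essays.
WEIGHTS = {
    "word_count": 0.01,
    "avg_sentence_length": 0.05,
    "avg_word_length": 0.20,
    "type_token_ratio": 1.50,
}

essay = "Automated scoring is fast. It is also consistent across large cohorts."
print(f"Holistic score: {score_essay(essay, WEIGHTS, bias=1.0):.2f}")
```

The sketch shows why these systems scale so well: once the feature extractors and weights exist, every additional essay is scored at essentially no cost and with perfect consistency.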

Many AWE systems also incorporate written feedback on various aspects of writing, so they can give prompt, customized tutorial feedback to the user on grammar, style, and content alongside a score. They can also detect plagiarism and deviance in essays.
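One simple way a system might flag possible plagiarism, shown here purely as a sketch (real detectors use far more sophisticated matching against large source databases), is to compare a submission to a known source with a bag-of-words cosine similarity; the texts and the threshold below are illustrative assumptions, not taken from any real detector.

```python
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Term-frequency vector over lowercased word tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine between two term-frequency vectors (0 = unrelated, 1 = identical)."""
    dot = sum(a[term] * b[term] for term in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

submission = "Automated writing evaluation uses computers to score essays."
source_doc = "Computers score essays in automated writing evaluation systems."

similarity = cosine_similarity(bag_of_words(submission), bag_of_words(source_doc))
THRESHOLD = 0.8  # illustrative cut-off, not from any real system
if similarity >= THRESHOLD:
    print(f"Flag for review (similarity = {similarity:.2f})")
else:
    print(f"No flag (similarity = {similarity:.2f})")
```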

However, most AWE systems model only a relatively small part of the writing construct; many are concerned with structure (e.g., topic sentences and paragraph transitions), phrasing (e.g., vocabulary and sentence length), and transcribing (e.g., spelling and mechanics) (Stevenson, 2016). AWE was originally developed to generate summative scores for assessment purposes and is currently used, in combination with human evaluation, in high-stakes tests such as the Test of English as a Foreign Language (TOEFL) and the Graduate Management Admission Test (GMAT).

However, the use of AWE feedback as an instructional tool in writing classrooms is increasing, especially in school and college classrooms in the United States. Students in the 21st century expect instant feedback on their artefacts. Indeed, the importance of recursive feedback has been emphasized in this MOOC (Cope & Kalantzis, 2017). AWE can provide students with feedback on which they can act, continuing to improve their work until it is as good as expected.

There are some controversies around the use of AWE, particularly in high-stakes testing situations. The controversy has centred on doubts about the accuracy of scoring and feedback and on fears about the effects of writing for a non-human audience. However, a recent study has shown that AWE corrective feedback seems to encourage students to engage in writing practices (Li et al., 2015). The authors found that AWE corrective feedback helped English language learners improve linguistic accuracy, although the instructor still plays an important role in teaching.

Several books and articles have been written on this topic. The URLs are available here:

https://link.springer.com/chapter/10.1057%2F9781137333377_3#citeas

https://www.igi-global.com/article/learner-fit-in-scaling-up-automated-writing-evaluation/86064

References:

1. Cotos, E. (2014). Automated writing evaluation. In Genre-based automated writing evaluation for L2 research writing. Palgrave Macmillan, London. DOI: https://doi.org/10.1057/9781137333377_3

2. Stevenson, M. (2016). A critical interpretative synthesis: The integration of automated writing evaluation into classroom writing instruction. Computers and Composition, 42, 1–16. Available online at www.sciencedirect.com

3. Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27, 1–18. Available online at www.sciencedirect.com