e-Learning Ecologies MOOC’s Updates

Recursive Feedback Concept: Computer Adaptive Testing

Virtual learning has given learners the opportunity to engage in the educational process from virtually anywhere. It has also made learners more active participants, since they can seek out the content they want or need to learn. With content readily available, the role of teachers has changed accordingly: instead of delivering content, teachers go a step further by nurturing students’ cognition, showing them how to apply knowledge, guiding them in selecting sources, and, especially, managing interpersonal relations. This new social arrangement, together with innovations in technology and in learning management systems, has pushed education beyond the stage of mere knowledge acquisition.

As Prof. Bill also points out, one profound implication of new technology-mediated environments is that they transform testing and widen the range of assessment options, thereby changing the conventional orientation of assessment. This new orientation is recursive feedback: feedback from multiple sources and perspectives (peers, self, instructors, experts), plus “feedback on feedback,” which greatly enhances learning opportunities. This differs from, say, conventional end-of-term assessment, which tests only the learner’s ability to memorize, not the learner’s progress.

Computerized Adaptive Testing (CAT) is a concept dating back to the 1970s (Reckase, 1974) and has been used in educational psychology ever since. Owing to progress in psychometric research and the expanding capabilities of technical platforms, the conceptual design and technical development of suitable testing environments remains an active topic of engineering research today (Kröhne & Frey, 2011).

In recent years, interest has grown in using CAT in online learning processes, either to adapt the difficulty level of proposed learning materials (Salcedo, Pinninghoff, & Contreras, 2005) or to perform a summative evaluation of learning outcomes (Guzmán & Conejo, 2005). It is also seen as a potentially important component of Massive Open Online Courses (MOOCs) (Meyer & Zhu, 2013).

CAT aims to execute a testing process that adjusts to an examinee’s skill level by dynamically selecting appropriate test items (Moosbrugger & Kelava, 2012). The difficulty of the next item depends on all previously answered items: the next item is chosen so that it provides the most information about the currently estimated skill level (Linacre, 2000).
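The select-and-update loop just described can be sketched in a few lines of Python. Everything below is an illustrative assumption on my part, not a specific published CAT algorithm: I use the Rasch (one-parameter logistic) model for response probabilities, a tiny hand-made item pool, and a crude step-based ability update in place of a proper maximum-likelihood estimate.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that an examinee of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of an item at ability theta: I = p * (1 - p).
    It peaks when difficulty matches ability (p = 0.5)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta_hat, pool, used):
    """Pick the unused item that is most informative at the current estimate."""
    candidates = [i for i in range(len(pool)) if i not in used]
    return max(candidates, key=lambda i: item_information(theta_hat, pool[i]))

def update_theta(theta_hat, b, answered_correctly, step=0.7):
    """Crude illustrative update: nudge the estimate by the response residual."""
    residual = (1.0 if answered_correctly else 0.0) - p_correct(theta_hat, b)
    return theta_hat + step * residual

# Toy run: invented pool of item difficulties; the "examinee" deterministically
# answers correctly whenever the item difficulty is at most 1.0.
pool = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta_hat, used = 0.0, set()
for _ in range(3):
    i = next_item(theta_hat, pool, used)
    used.add(i)
    correct = pool[i] <= 1.0  # deterministic stand-in for a real examinee
    theta_hat = update_theta(theta_hat, pool[i], correct)
print(f"estimated ability after 3 items: {theta_hat:.2f}")
```

Starting from an estimate of 0, the loop first administers the medium item, then climbs toward harder items as the examinee keeps answering correctly, mirroring the behaviour described above.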

Items are drawn from an item pool that contains all items available for testing. The items for a specific test are selected from the pool based on the examinee’s estimated skill level (Veldkamp & Matteucci, 2013). To enable this selection, the items must first be calibrated: calibration is the process of determining, for each item, the parameters that are needed for skill estimation during testing (Krass & Williams, 2003).
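Calibration can be illustrated with a toy sketch. Assuming the Rasch model again and (unrealistically) treating the examinees’ abilities as already known, an item’s difficulty parameter can be estimated by maximizing the response log-likelihood over a grid. The abilities, responses, and grid bounds below are invented for illustration; real calibration relies on more sophisticated estimation, such as marginal maximum likelihood.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that an examinee of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(b, thetas, responses):
    """Log-likelihood of one item's difficulty b, given each examinee's
    (assumed known) ability and their 0/1 response to the item."""
    total = 0.0
    for theta, x in zip(thetas, responses):
        p = p_correct(theta, b)
        total += math.log(p) if x else math.log(1.0 - p)
    return total

def calibrate_item(thetas, responses):
    """Estimate an item's difficulty by grid search over [-4, 4]."""
    grid = [i / 100.0 for i in range(-400, 401)]
    return max(grid, key=lambda b: log_likelihood(b, thetas, responses))

# Invented data: five examinees with known abilities; the three weakest
# answered the item incorrectly, the two strongest correctly.
abilities = [-2.0, -1.0, 0.0, 1.0, 2.0]
responses = [0, 0, 0, 1, 1]
b_hat = calibrate_item(abilities, responses)
print(f"estimated item difficulty: {b_hat:.2f}")
```

The estimate lands a little above the midpoint of the ability scale, which matches the intuition that an item failed by three of five examinees is moderately hard.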

Here is a short video demonstrating what the CAT process looks like:

https://www.youtube.com/watch?v=ZvFNwR8ABo4