New Learning’s Updates

The Test is Dead. Cope and Kalantzis Redefine ‘Mastery Learning’

Common Ground Scholar, an assess-as-you-go learning platform, gives learners more agency

College of Education scholars Bill Cope and Mary Kalantzis anticipate that the way students learn will change dramatically as data collection and assessment become increasingly sophisticated.

So each week they gather with colleagues across campus to discuss important questions around teaching, learning, and assessment: the pragmatics of multimodal knowledge representations, peer-knowledge collaborations in computer-mediated environments, and ambitious studies surrounding artificial intelligence.

Cope and Kalantzis are leading these conversations with research collaborators ChengXiang Zhai of the Department of Computer Science; Duncan Ferguson of the College of Veterinary Medicine; Duane Searsmith, a senior software developer in the College of Education; and a range of graduate students and visiting professors from around the world.

Together they are working on the National Science Foundation-funded grant “Assessing ‘Complex Epistemic Performance’ in Online Learning Environments,” a project that has led to a range of joint publications in the domains of computer science, education, and bioscience.

Their work builds upon NSF-funded research and development that has taken place since 2009 in the Common Ground Scholar environment. CGScholar is a cutting-edge, peer-to-peer “social knowledge” technology for learning communities. According to Cope, the platform is valuable because it generates massive amounts of highly usable data for varied purposes, including student feedback. The software also records the progress of learners and offers course data that identifies areas of strength and weakness from a teaching point of view.

Cope said the latest iteration of the CGScholar Analytics tool collects information about learning processes in three domains: knowledge, focus or effort, and collaboration or help. The data are made available to students and teachers in real time, so they have a direct impact on the learning process.
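To make the three-domain model concrete, here is a minimal Python sketch of what one analytics event might look like. The class and field names are illustrative assumptions, not CGScholar’s actual schema:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical sketch of one analytics event in the three domains named
    # above; names are illustrative assumptions, not CGScholar's real schema.
    @dataclass
    class LearningEvent:
        student_id: str
        domain: str       # "knowledge", "focus", or "help"
        measure: str      # e.g. "survey_score", "time_on_task_minutes"
        value: float      # the data point itself
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    # Each tiny data point (a peer comment, a survey answer, a revision)
    # becomes one timestamped event, so it can be fed back in real time.
    events = [
        LearningEvent("s001", "knowledge", "survey_score", 0.85),
        LearningEvent("s001", "focus", "time_on_task_minutes", 42.0),
        LearningEvent("s001", "help", "peer_reviews_given", 3.0),
    ]

Recording every interaction as a small, timestamped event is what allows feedback to be aggregated continuously rather than waiting for an end-of-course test.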

Professors Cope and Kalantzis on how CGScholar works:

How could the use of CGScholar affect traditional tests?

Our slogan is “The test is dead—long live assessment,” says Kalantzis. What we mean by that is that conventional tests—small samplings of individual memory of facts and correct application of procedures, followed by blanket, retrospective judgments—cannot collect data on everything students do while they’re learning, nor give them steady, incremental feedback as they go. The big-data part of this is the sheer number of tiny data points, such as feedback in a peer interaction or an intelligent machine response. The data-collection process is also a feedback and learning process. Traditional tests, we predict, will go away because they are less accurate than continuous assessment and, more importantly, not very helpful to learning.

Expand on how CGScholar improves learning.

With CGScholar, we created an environment that scaffolds the learning process around responses to instructor inputs, classroom discussions, peer-reviewed projects, and knowledge and information surveys, Cope explains. All of this provides continuous feedback to learners. Professor Kalantzis and I taught an eight-week course with 79 students. By the final week of the course, students had received 7,714 pieces of actionable feedback, nearly a hundred per student, drawn from more than a million data points. Every one of those pieces of feedback—technically, we refer to this as “formative assessment”—has improved the students’ work and added to their learning in a way that was never possible before.

How does this type of learning expand students’ agency, given that they can see their progress incrementally throughout the class and better understand expectations?

Student progress is measured in an “aster plot,” a colorful, flower-like graphic that opens outward as students complete their work, Kalantzis explains. Each “petal” is a measure of learning, and the instructor’s expectations are clearly spelled out when students hover over each one. The petals fall into three groups: knowledge, which shows demonstrated learning achievements; focus, which shows the amount of effort; and help, which shows the extent and quality of peer collaborations. An instructor might decide, for instance, that an overall score of 80 counts as an “A,” giving students more control over their outcomes than in a typical class, where they get a grade only at the end of the semester. Everyone starts a course at zero, and gradually, as students work through the class, they see the colored petals and their overall score grow.
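As a rough illustration of how petal measures could roll up into the overall score described above, here is a short Python sketch. The grouping into knowledge, focus, and help follows the article, but the petal names, the equal weighting, and the 0–100 scale are assumptions, not CGScholar’s actual scoring rules:

    # Hypothetical aggregation of petal measures into an overall score;
    # the three groups mirror the article, the weights are assumptions.
    PETAL_GROUPS = {
        "knowledge": ["project", "survey", "update"],  # demonstrated achievement
        "focus":     ["logins", "time_on_task"],       # amount of effort
        "help":      ["reviews_given", "comments"],    # peer collaboration
    }

    def overall_score(petals: dict) -> float:
        """Average each group's petals (each on a 0-100 scale), then
        weight the three groups equally into one 0-100 score."""
        group_means = []
        for measures in PETAL_GROUPS.values():
            values = [petals.get(m, 0.0) for m in measures]  # missing petals start at zero
            group_means.append(sum(values) / len(values))
        return sum(group_means) / len(group_means)

    # A student partway through the course: petals open outward from zero.
    score = overall_score({
        "project": 90, "survey": 70, "update": 80,
        "logins": 85, "time_on_task": 75,
        "reviews_given": 95, "comments": 65,
    })
    print(round(score, 1))  # 80.0, a level an instructor might treat as an "A"

Because every petal starts at zero and only grows, a student can always see how much of the remaining distance to the target score is still within reach.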

How could CGScholar simultaneously achieve the Bush-era goal of “no child left behind” and the Obama-era goal of “every student succeeds”?

In my opinion, says Cope, these were brand-name slogans that rang hollow because educators never took them seriously. They never seemed believable—and indeed, they never could be believable, because the statistics used in the tests at the end of a course always “normalized” learners across a curve of smart, mediocre, and dumb. So if we educators are to take these words at face value, our starting point in every class has to be that if a student has been admitted, he or she can succeed. Some students may take longer, and some may need help from peers and the teacher, but every learner can succeed—that’s why they were allowed into the class in the first place. So if we show them their incremental progress, all they need to do is keep working at it until they succeed. This is an old idea, “mastery learning,” a term coined by Benjamin Bloom in the 1960s. Until now, it was hard to achieve in practice. Our argument is that with today’s learning analytics, at last it is possible.

Interview by Sal Nudo
