New Learning’s Updates
Scholar’s New Analytics App: Towards Mastery Learning
We are pleased to announce the launch of Scholar’s new Analytics area, created by our research team at the University of Illinois, with the support of funding from the National Science Foundation.[1] This post is a brief conceptual orientation to the tool, now online at http://cgscholar.com.
The Theory and Practice of Mastery Learning
Nearly 50 years ago, renowned educator Benjamin Bloom suggested an approach to teaching and learning that he called “Mastery Learning” (Bloom 1968). His underlying proposition was that when students are normally distributed by aptitude and they experience uniform teaching, achievement will also be normally distributed. However, “given sufficient time (and appropriate types of help) 95 percent of students … can learn a subject up to a high level of mastery, … [but] it will take some students more effort, time, and help to achieve this level than it will others” (p. 4). Fig. 1 illustrates the contrast between classical uniform instruction and the hypothesized outcomes of mastery learning (Block 1971: 6-7).
This idea today underlies the idea of standards-based approaches to teaching and learning, where it is assumed that most students at a particular grade level have the potential to reach the prescribed standard (Kalantzis and Cope 2012: chapter 10).
Bloom made a number of suggestions for optimal instruction, including: regular, formative assessment that directs teachers to intervene in the case of students who are falling behind, allowing students to work at their own pace, group work, and intensive tutoring. However, these approaches have consistently been frustrated by the logistical challenges of differentiating instruction in traditional classrooms and the continued emphasis on summative rather than formative assessment.
A new generation of technologies grouped under the concept of “learning analytics” (Behrens and DiCerbo 2013, Cope and Kalantzis 2015, Cope and Kalantzis 2016, Cope and Kalantzis 2017, Siemens and Baker 2013) may at last make Bloom’s aspirations a practical possibility, offering integral formative assessment that supports and tracks progress toward mastery, providing structured supports for group or peer-to-peer learning, and making it easier for the teacher to manage the logistics of differentiated instruction. Building on research at the University of Illinois in the area of learning analytics, this is what we now offer in the Scholar: Analytics tool.
Scholar: Analytics is a learning visualization tool, where in any unit of work a student can see their progress towards mastery, and a teacher can see the comparative progress towards mastery of all members of the class, identifying which students may require more time or special attention to achieve mastery.
The intended outcomes of Scholar: Analytics are:
- To increase learner responsibility for learning progress and growth of self-efficacy by making learning expectations explicit, along with an always-available data visualization of progress towards meeting these expectations.
- To support a range of types of continuous formative assessment: item-based assessment; rubric-based reviews (peers, self, teacher); online discussion contributions etc.
- To support adaptive and personalized instruction, by offering the possibility of re-taking quizzes, revising work, extending contributions to online discussions etc., until mastery is attained.
- To improve on-time mastery with clear time objectives and “focus” credits which indicate the degree of effort so far expended by the student, compared to the degree of effort expected.
- To encourage, via “help” credits, collaborative or peer learning, originally expressed by Bloom et al. as group discussions and peer tutoring.
- To support teachers by providing them with clear visualizations of activity per student, as well as class progress towards mastery. It is possible to “drill down” into all the constituent datapoints. Because the analysis is based on a very large number of datapoints for every student, at any moment during a unit of work teachers are able to see a detailed progress record for every learner, based on data that was previously, in a practical sense, largely invisible.
- To support differentiated instruction, whereby learners can work at their own pace towards mastery goals.
- To provide extensive supporting evidence for summative assessments made by the teacher in the gradebook. This reduces the grading burden for teachers by providing comprehensive learning progress data.
Scholar: Analytics in Practice
To illustrate how Scholar: Analytics works, consider the following scenario using a sample learning module, Energy Transformations, published in Scholar. This module aligns with the Next Generation Science Standard “PS3.B: Conservation of Energy and Energy Transfer” and the Common Core State Standard for Writing “W.8.2: Write informative/explanatory texts to examine a topic and convey ideas, concepts, and information through the selection, organization, and analysis of relevant content.” In this scenario, the task expectations in the unit have been clearly laid out by the teacher:
- Respond to posts: The teacher will make 6 posts introducing the basic science concepts, including text, videos, and diagrams, each of which prompts online and in-class discussion. Every student must respond online to the comment prompt.
- Make posts of your own: Students will make 2 posts of their own, illustrating energy transformations in their everyday lives. Every student will be expected to comment on at least three other students’ posts.
- Knowledge quizzes: Students will take 2 knowledge surveys when requested; they may take them again at a later point to improve their score until they achieve a score of 80%.
- Science Report: Students choose an energy transformation topic, such as a bow and arrow, a roller coaster, a windmill or a space ship. They draft their energy transformation project.
- Peer Feedback: Students peer review three other students’ energy transformation projects against a rubric based on the NGSS and CCSS standards, and also make annotations.
- Share Report: Students revise their energy transformation projects and write a self-review. When published, they read others’ reports and discuss.
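The task expectations above can be sketched as a simple unit configuration, including the quiz-retake rule (retake until a score of at least 80% is reached). The data structures, names, and threshold handling below are illustrative assumptions, not Scholar’s actual schema or code.

```python
# Hypothetical sketch of the unit's task expectations described above.
# Task names and counts mirror the example unit; the structures are
# illustrative, not Scholar's internal representation.

QUIZ_PASS_THRESHOLD = 0.80  # quizzes may be retaken until 80% is reached

tasks = [
    {"name": "respond_to_posts",  "required": 6},  # one reply per teacher post
    {"name": "own_posts",         "required": 2},  # plus >= 3 comments on peers' posts
    {"name": "knowledge_quizzes", "required": 2},  # retakable until 80%
    {"name": "science_report",    "required": 1},
    {"name": "peer_feedback",     "required": 3},  # three rubric-based reviews
    {"name": "share_report",      "required": 1},  # revision plus self-review
]

def best_quiz_score(attempts):
    """Students keep their best score across retakes."""
    return max(attempts) if attempts else 0.0

def quiz_mastered(attempts):
    """A quiz counts as mastered once any attempt reaches the threshold."""
    return best_quiz_score(attempts) >= QUIZ_PASS_THRESHOLD

print(quiz_mastered([0.55, 0.70]))  # still below threshold -> False
print(quiz_mastered([0.55, 0.85]))  # retake succeeded -> True
```

Keeping the best score, rather than the latest, matches the unit’s stated intent that retakes can only improve a student’s standing.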
As the students work, they will each see their progress towards mastery in an aster plot (Fig. 2):
Our focal learner in this scenario, Tarryn, is able to view her learning visualization at all times during the unit of work. The central image she sees is an “aster plot,” where each petal of the metaphorical flower represents one kind of learner activity. At the start of the unit, the plot is a white circle, but as Tarryn progresses towards mastery, the colored petals in the aster plot grow using data continuously mined by the software during her learning activities. In this example, she is part way through the Energy Transformations unit of work. This is how she interprets the visualization:
- The width of the petal is the weighting given by the teacher to this aspect of the work. The length of the petal is the amount of achievement of the learner to this point. (In this indicative image, we have not keyed all the petals.) In this case, Tarryn has responded to all 6 teacher posts but still has more work to do in other areas. For instance, she could take the knowledge surveys again in order to achieve a higher score in that petal.
- θ is progress towards mastery. 100 learning credits represents mastery. (We want to use the positive concept of earning “credits,” to get away from the frequently negative and judgmental notion of a “mark” or a “score.”) All students can increase their learning credits and achieve mastery by doing more work, for instance revising their projects, adding more comments to the class discussions, or re-taking the knowledge surveys. The teacher may set a target minimum mastery level, such as 75%. Tarryn has not yet achieved the target minimum mastery level because she is only part way through this unit of work.
- Each petal in the aster plot is active, so clicking on it brings Tarryn to more detailed information about the data used to generate the conclusion about learning represented by each petal. Tarryn is able to navigate all the way down to the work itself—an incorrectly answered quiz item, or a low score against a criterion in a rubric. This guides her to places where more work is required.
- The visualization is divided into three major segments, each labelled by an imperative verb and a symbol representing that variable: φ Focus! … or Bloom’s concept of “perseverance,” measuring variables such as time on task and amount of work produced. ε Know! … which measures knowledge via data elements such as quizzes or knowledge surveys and peer review ratings against rubrics. β Help! … which measures community contributions and collaborations, such as the extent and quality of comments on others’ posts and peer reviews.
- This page also serves as a gradebook, where the teacher can enter a grade based on their overall judgment of the quality of student work, supported by Scholar: Analytics’ progress data and calculation of learning credits. In Tarryn’s case, her work is still incomplete, and the teacher has not yet entered a grade.
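The credit model described above amounts to a weighted sum: each petal contributes its teacher-assigned weight times the learner’s fractional achievement, scaled so that full achievement across all petals yields 100 credits, with petals grouped into the Focus!, Know!, and Help! segments. The sketch below illustrates that arithmetic only; all petal names, weights, and achievement values are hypothetical examples, not Scholar’s actual data or algorithm.

```python
# Illustrative sketch of the aster-plot credit model described above.
# Petal weights, segment groupings, and achievement values are
# hypothetical; Scholar's actual calculation may differ.

MASTERY_CREDITS = 100  # theta: 100 learning credits represents mastery

# Each petal: (segment, teacher-assigned weight, achievement fraction 0..1).
# Example weights sum to 1.0.
petals = {
    "respond_to_posts":  ("help",  0.15, 1.00),  # all 6 teacher posts answered
    "own_posts":         ("help",  0.15, 0.50),
    "knowledge_quizzes": ("know",  0.20, 0.70),  # may be retaken to improve
    "science_report":    ("know",  0.25, 0.40),
    "peer_feedback":     ("help",  0.15, 0.33),
    "time_on_task":      ("focus", 0.10, 0.60),
}

def learning_credits(petals):
    """Weighted sum of achievement across petals, scaled to 100 credits."""
    total_weight = sum(w for _, w, _ in petals.values())
    earned = sum(w * a for _, w, a in petals.values())
    return MASTERY_CREDITS * earned / total_weight

def segment_totals(petals):
    """Credits earned per segment: focus (phi), know (epsilon), help (beta).
    Assumes weights sum to 1, as in the example above."""
    totals = {"focus": 0.0, "know": 0.0, "help": 0.0}
    for seg, w, a in petals.values():
        totals[seg] += MASTERY_CREDITS * w * a
    return totals

credits = learning_credits(petals)
print(f"Learning credits: {credits:.1f} / {MASTERY_CREDITS}")
print("Target minimum mastery (75%) reached:", credits >= 75)
```

In this illustrative state the learner, like Tarryn, is part way through the unit, so her credits fall below a 75% target minimum; retaking a quiz or revising the report raises the corresponding petal’s achievement fraction and hence her total credits.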
Before the unit of work commences, the teacher has specified: a) the timeframe for the task; b) task expectations; c) the relative weighting of each task.
During the unit of work, the teacher is able to access each student’s progress visualization in the same view as Tarryn’s, including the capacity to dig deep into areas requiring additional attention by an individual learner. This makes visible deficiencies which might otherwise pass unnoticed by the teacher.
Making Mastery Learning Practicable
Twenty years after Bloom and colleagues’ proposal, Kulik et al. were able to conduct a meta-analysis of 108 controlled studies of mastery learning programs. Their conclusion was that “mastery learning programs have positive effects on the examination performance of students in colleges, high schools, and the upper grades of elementary schools. The effects appear to be stronger on the weaker students in the class … Mastery programs have positive effects on student attitudes towards course content but may increase student time on instructional tasks.” (Kulik, Kulik and Bangert-Drowns 1990) These results were achieved before the widespread introduction of computers into educational settings.
Notwithstanding this research evidence, there has not been widespread adoption of mastery learning approaches simply because of the logistical complexity of what Bloom termed “optimal instruction.” Simply put, uniform instruction is easier to implement.
However, in the era of networked and “cloud” computing where every student has a device, instruction oriented to mastery learning becomes as easy to implement as uniform instruction. This is our vision with Scholar: Analytics. It is our contention that the main possibility opened by computer-mediated teaching and learning is to make logistically feasible long-held educational aspirations that until now were unrealistically ambitious (Cope and Kalantzis 2017). Of course, there is a professional learning curve for teachers along which they need to change their pedagogical practices. However, with the introduction of a panoply of new tools in schools where computers are extensively used, many teachers have already travelled a long way along this learning curve.
[1] Assessing “Complex Epistemic Performance” in Online Learning Environments, National Science Foundation, Award 1629161, 2016-2018.
References
Behrens, John T. and Kristen E. DiCerbo. 2013. "Technological Implications for Assessment Ecosystems." Pp. 101-22 in The Gordon Commission on the Future of Assessment in Education: Technical Report, edited by E. W. Gordon. Princeton NJ: The Gordon Commission.
Block, James H., ed. 1971. Mastery Learning: Theory and Practice. New York: Holt Rinehart & Winston.
Bloom, Benjamin S. 1968. "Learning for Mastery." Evaluation Comment 1(2):1-2.
Cope, Bill and Mary Kalantzis. 2015. "Assessment and Pedagogy in the Era of Machine-Mediated Learning." Pp. 350-74 in Education as Social Construction: Contributions to Theory, Research, and Practice, edited by T. Dragonas, K. J. Gergen, S. McNamee and E. Tseliou. Chagrin Falls OH: Worldshare Books.
Cope, Bill and Mary Kalantzis. 2016. "Big Data Comes to School: Implications for Learning, Assessment and Research." AERA Open 2(2):1-19.
Cope, Bill and Mary Kalantzis. 2017. "Conceptualizing E-Learning." Pp. 1-45 in E-Learning Ecologies, edited by B. Cope and M. Kalantzis. New York: Routledge.
Kalantzis, Mary and Bill Cope. 2012. New Learning: Elements of a Science of Education. Cambridge UK: Cambridge University Press.
Kulik, Chen-Lin C., James A. Kulik and Robert L. Bangert-Drowns. 1990. "Effectiveness of Mastery Learning Programs: A Meta-Analysis." Review of Educational Research 60(2):265-99.
Siemens, George and Ryan S. J. d. Baker. 2013. "Learning Analytics and Educational Data Mining: Towards Communication and Collaboration." Pp. 252-54 in Second Conference on Learning Analytics and Knowledge (LAK 2012). Vancouver BC: ACM.
@Dr. Cope, this is my first course, and the learning experience is excellent. The Scholar analytics dashboard allows me to visualize my performance in each dimension, which reminds me to gear up strategically towards the course goals and my personal learning goals. Most significant of all, the reading, comment, and update exercises, together with the peer reviews and self-reflection/reflexive exercises, have provided me with very distinctive guidance to perfect my personal professional learning development. Scholar nurtured in me the habit of daily reading, thinking, and writing. Thanks much!
Interesting as a self-assessment and self-monitoring tool for our learning strategy and outcomes. I like that idea.
Wow, excellent. Taking courses from Coursera and other online course providers, I have had the opportunity to experience different types of online assessment tools. As a student it was engaging and motivating to be able to see my progress, how far away from my achievement I was, etc. Lately I finished a course where, after every quiz and/or assessment, I immediately received feedback in the form of materials/textbooks suggested for review according to my performance on each question, explaining the why, and that was fantastic. As a teacher, this learning analytics/aster plot gives me a clear and immediate visualisation of my students' needs, supporting the best and most immediate feedback to them.
Regarding peer reviews and from a student's perspective I have had my doubts about the reviews , scores, comments the peers leave but at same time I have realised how enlightening is to read the reviews made by peers as it allows to see things from a different perspective and supports critical thinking. Potentially a peer review could take the reviewed to post an open discussion about the review stating their own perspective, this would further support learning.
@Ryan Bartelmay, I take your point about scores - on the other hand, we're trying to get both forms of judgment, not just to satisfy our thirst for analytics (!), but also because it is now ubiquitous on the web to have comments and ratings together - so hopefully this is not too far outside the norm of expectations.
@Dr Cope. Thanks for the detailed theoretical underpinnings of the analytics tool. I'm encouraged that tools such as this will allow education to shift the paradigm away from uniform instructional delivery to a more learner-centered approach, which is clearly at the core of Scholar and its functionality. Bravo. I'd use this tool if available and can see its vast utility. Question: the article mentions an isolated learning experience, but can a series of learning experiences be combined to provide a longitudinal analysis of a student's progress (i.e., across a class or a series of classes like a program curriculum)? I can see real value in allowing students, say college students, to track their progress against program outcomes longitudinally across a degree major. In fact, to further your idea of credit, perhaps the credits needed for a degree could be earned through mastery rather than allocated to courses by institutions through some institutional credit-hour equivalency that's shackled to seat time and clock time. This comment is veering the conversation toward CBL (competency-based learning), but given that college tuition and the college degree are shackled to the credit hour, I can see a tool like this providing affordances that allow for pragmatic and necessary conversations about the cost and value of a college education.
On another note, @Edgar and @Dr Cope, I agree that creating a rubric is very hard, and like Edgar, I have some issues with the construction of our class rubric. That being said, I wonder what would happen if the numeric scale was removed from the rubric? Or, at least, maybe the number rating assigned by the peer is not revealed to the creator of the Work. I think the battle we are fighting against is a culturally conditioned response to scores (e.g., grades) that has been ingrained into many of us. Attaching a score to a criterion causes some to interpret the narrative feedback through the lens of a score that's been assigned, let's face it, subjectively. No matter how well the rubric delineates the calibration for each numeric score, there will still be subjective scoring by the peers, and it will be nearly impossible to achieve inter-rater reliability within a peer set and among other peer sets. I think this is ok, though. Ultimately, the peer-awarded scores are probably more valuable to the peer reviewer, since they have to reflect on the score in relationship to the tool's analytical calibration, their own work, and other work they are reviewing. Since the numeric score is being aggregated into the aster plot, perhaps its inclusion in the feedback to the creator is unnecessarily creating motivational and emotional barriers that are impeding the creator from reflecting on the narrative feedback and annotations. Anyway, just something I've been thinking about. Curious to hear your thoughts.
I've never had a grading rubric as unique as this before, and I'm not sure I can ever go back to the traditional grading rubric. The aster plot design reminds me quite a bit of standards-based grading. I'm able to see my overall grade in the class, but also the individual components that make up my grade. It allows me to see what areas I need to work on and what areas I'm doing well in.
Seeing something like this is very innovative and exciting for the education field. I hope something similar can be applied to standards-based grading to help collect research data.
I'm so happy that this element has been added to Scholar. After participating in several courses last year, I always wondered where I was at in terms of grading and progress and it was difficult to feel confident about the work that I was completing without this type of feedback. I've found this course much easier to manage now that I can easily track my progress and see the different facets that compose my overall grade. Thanks for working on this, Dr. Cope!
@Edgar Roca, thanks for these comments, and rubrics are always a struggle - and particularly the level of generality at which they are pitched.
Edgar Roca
I loved this detailed explanation of the Analytics tool in Scholar! The creation of a dynamic, always-there, visual rendition of your progress towards mastery also provides the learner with a motivational instrument that maps in a comprehensive and logical way the journey to the learning goal. I guess that the different petals can be customized to represent different dimensions of learning as these are surely not the same in every course of study, correct?
On a somewhat less important detail, I would think that the ancillary tools that accompany the aster plot need to be carefully weighted and properly integrated: for instance, the rubrics used to assess different products and practices in the course. You mention item-based assessment; rubric-based reviews (peers, self, teacher); online discussion contributions, etc. I love how this tool is used in this course, but I have some reservations about the design of the rubric used for the work reviews. I still think that the typical educational practices that support rubrics that are SMART (specific, measurable, attainable, relevant, and time-bound) could be considered in the specific use of this course. Many of us, not just me, have expressed that some of the criteria for full success are too high and lofty, almost impossible to attain within the frame of the class. I am sure that similar reservations could arise from other elements in the flower, but to me, this is the only one I have.
I think that this method of assessing performance, in spite of my personal opinion about the rubric, is nothing short of brilliant!