Analyze an assessment practice. This could be a description of a practice in which you are or have been involved, or plans you have to implement an assessment practice, or a case study of an interesting assessment practice someone else has applied and that you would find beneficial to research and analyze. Use as many of the theory concepts defined by members of the group in their published Work 1 as you can, with references and links to the published works of the other course participants.
A concept map is defined as "a graphical representation of the relationship among terms" and concepts to elevate knowledge (Vanides et al., 2005). This practice is popular in both secondary and higher education and has been tested with students as young as seventh grade (Schau et al., 2001). Hay, Tan and Whaites (2010) state that it is a successful tool in adult learner education as well, noting that concept maps "represent the transformation of abstract knowledge and understanding into concrete visual representations that are acquiescent to comparison and measurement." The practice originated with Cornell University professor Joseph Novak in 1972. Novak defines concepts as "a perceived regularity in events or objects, or records of events or objects, designated by a label. The label for most concepts is a word." When constructing a concept map, students define relationships between concepts through propositions: statements that link two concepts, objects, or events together in a meaningful way (Novak and Cañas 1).
As Dunn notes in The Power of Mapping: Using Concept & Mind Maps for Assessment, Novak's original concept map was based on the principles of David Ausubel's Assimilation Theory. According to Novak and Cañas:
“learning takes place by the assimilation of new concepts and propositions into existing concept and propositional frameworks held by the learner. This knowledge structure as held by a learner is also referred to as the individual’s cognitive structure. Out of the necessity to find a better way to represent children’s conceptual understanding emerged the idea of representing children’s knowledge in the form of a concept map. Thus was born a new tool not only for use in research, but also for many other uses.”
While concept mapping originated as a learning tool, the practice has become increasingly common as a form of assessment. Novak draws connections between concept mapping and Benjamin Bloom's taxonomy of learning, arguing that concept mapping is an ideal form of assessment because it demands "high levels of cognitive performance, namely evaluation and synthesis of knowledge," two of Bloom's higher orders of thinking. Concept mapping focuses on the evaluation and synthesis of concepts, specifically the ways they relate to each other.
Though concept maps are used across content areas, they are most common in science education, due to the complexity of the curriculum's concepts, the high quantity of concepts students must learn, and the general student perception that science is difficult (Bramwell-Lalor and Rainford, 2013). As Bramwell-Lalor and Rainford note, "[t]he difficulties students encounter learning these abstract concepts have been further exacerbated by inappropriate teaching and assessment techniques which do not facilitate the development of higher-order thinking and conceptual change." Concept mapping as a form of assessment allows for higher-level thinking about abstract concepts that other common forms of assessment, like multiple-choice tests, can rarely achieve.
The most common technique for scoring concept maps is manual scoring, using various types of rubrics to assess the concepts covered and proper proposition creation. Concept maps can be scored both quantitatively (points awarded for the number of connections made) and qualitatively (points awarded based on the accuracy of each connection) (Yin et al. 168). Vanides et al. suggest a simple color-coded rubric for a "quick impression" of student thinking. Traditionally, students construct concept maps by hand in their own words. However, a practice that has been tested in various settings over the past decade is the use of pre-built concept maps in which students fill in the blanks of a variety of open spaces on the map (Yin et al. 2005; Schau et al. 2001). This type of assessment, often called select-and-fill-in (SAFI), allows for more efficient grading (Yin et al.) and less pre-assessment preparation on the part of the instructor.
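To make the two scoring approaches concrete, here is a minimal sketch in Python. It is an illustration only, not a rubric from any of the cited studies: the proposition triples, the rubric, and the 0/1/2 rating scale are hypothetical assumptions. Quantitative scoring counts connections; qualitative scoring weights each connection by an accuracy rating.

```python
# Each proposition links two concepts with a labeled relationship,
# e.g. ("photosynthesis", "requires", "sunlight").

def quantitative_score(propositions):
    """One point per connection made, regardless of accuracy."""
    return len(propositions)

def qualitative_score(propositions, rubric):
    """Sum rubric ratings (here: 0 = wrong, 1 = partial, 2 = correct).
    Propositions missing from the rubric default to 0 (a sketch assumption)."""
    return sum(rubric.get(p, 0) for p in propositions)

# A hypothetical student map with one inaccurate link.
student_map = [
    ("photosynthesis", "requires", "sunlight"),
    ("photosynthesis", "produces", "oxygen"),
    ("oxygen", "is made of", "sunlight"),  # inaccurate proposition
]
rubric = {
    ("photosynthesis", "requires", "sunlight"): 2,
    ("photosynthesis", "produces", "oxygen"): 2,
    ("oxygen", "is made of", "sunlight"): 0,
}

print(quantitative_score(student_map))          # 3 connections made
print(qualitative_score(student_map, rubric))   # 4 accuracy points
```

The gap between the two scores (three connections but only four of six possible accuracy points) is exactly the kind of information a purely quantitative count would hide.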
Two research studies - Schau et al. (2001) Select-and-Fill-In Concept Map Scores as a Measure of Students’ Connected Understanding of Science and Yin et al. (2005) Comparison of Two Concept-Mapping Techniques: Implications for Scoring, Interpretation, and Use - focus on testing the effectiveness of select-and-fill-in (SAFI) concept maps as compared to student-constructed maps (Yin et al.) and multiple choice assessments (Schau et al.).
Schau's study consisted of two test groups: one of eighth-grade chemistry students, and another of undergraduate astronomy students. The study noted three limitations of student-constructed concept maps as efficient means of assessment:
“First, students must learn how to draw concept maps and then actually draw them, processes that are time-consuming and can be tedious and frustrating. Indeed, some students (and instructors) do not like to and so will not draw concept maps... Second, there is no universally accepted and simple scoring system for generated concept maps...Third, the quality of student-generated maps depends heavily on the individual’s communication skills.”
To address these limitations, Schau's SAFI assessment technique bypasses the need for students to draw concept maps; instead, curriculum experts create the maps as templates for the students to work within. To score the concept maps used in the study, each SAFI response is marked as correct or incorrect, creating a simple scoring system. Finally, since all response options are selected from a set of specific answers and distractors, students' communication skills do not come into play. The study found high correlations between the SAFI concept map assessment and the traditional and familiar multiple-choice assessment, indicating that both assessment types measure similar, if not the same, kinds of knowledge.
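The dichotomous scoring the study describes can be sketched in a few lines of Python. The blank identifiers and water-cycle terms below are hypothetical placeholders, not material from Schau et al.; the point is only that, as with a multiple-choice key, scoring reduces to comparing each filled blank against the keyed answer.

```python
def score_safi(responses, answer_key):
    """Score a SAFI map: 1 point per blank filled with the keyed term,
    0 otherwise. Both arguments map blank IDs to selected terms."""
    return sum(
        1 for blank, keyed_term in answer_key.items()
        if responses.get(blank) == keyed_term
    )

# Hypothetical expert key and student response for a three-blank map.
answer_key = {"blank1": "evaporation", "blank2": "condensation", "blank3": "precipitation"}
student    = {"blank1": "evaporation", "blank2": "precipitation", "blank3": "precipitation"}

print(score_safi(student, answer_key))  # 2 of 3 blanks correct
```

Note how the partially sensible choice in blank2 earns nothing: this is the all-or-nothing property that, as Yin et al. found, prevents SAFI from capturing partial knowledge.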
In Yin's study, the use of pre-built, select-and-fill-in maps was compared to student-constructed maps in an eighth-grade physics lesson. Student-constructed maps were created from scratch using post-it notes and pencil and paper for the first draft, then polished for a final draft. These maps were created in the students' own language, and before the study students were trained in how to create concept maps. The results of Yin et al.'s study show that the two assessment types are not equivalent. With constructed-map assessments, students could receive partial credit for partially correct answers, while SAFI assessments could only be scored as correct or incorrect. Furthermore, the study notes that the SAFI technique "prevented the students from fully expressing their knowledge, especially their partial knowledge," and that the constructed-map technique allowed students to create more complex structures. The SAFI technique was also thought to slow down map construction and to "constrain the students' choices and prevent [them] from setting up relationships available and interesting to them."
What stands out most in the difference between these two mapping types is how students interact with them before creating their propositions. As seen in the figure below, SAFI assessment involves an extra "checking" stage, because students are not creating the relationships between concepts themselves. This may be because the language of the provided propositions does not always match that of the student. Students also noted that they felt impeded by the limited responses available to them when filling in a SAFI assessment.
Both of these studies suggest that a more efficiently scorable concept map could be an effective assessment tool. As they exist in these studies, however, there are significant differences between these types of concept maps, specifically in what they assess (Yin et al.). Whereas a student-constructed map allows the teacher to see partial knowledge and knowledge processes, the SAFI map does not, and instead focuses on the final product. Still, with further consideration and more nuanced preparation, I think SAFI maps could provide more detailed feedback on student knowledge, specifically through distractors that target common misconceptions and partial knowledge.
While the desire for more efficiently scorable concept maps is strong, fully replacing traditional concept mapping with SAFI mapping would come at the cost of deep student learning. As Yin notes,
"Automatic scoring of such an open-ended task would require the development of a very large (and adaptive) database of possibilities, with rater intervention when new phrases emerged. The practicality of such an approach for large-scale assessments is doubtful, although not impossible."
To make grading free-form maps more efficient, a potential solution would be to use peer assessment to alleviate the pressure of grading on the instructor. Vanides suggests students work in small groups to "find similarities and differences in their maps and try to reconcile them. Group discussions provide opportunities for students to engage in the social aspect of science, where they can articulate their thoughts and learn from each other." Concept-mapping software such as Inspiration Maps and Coggle allows for online collaboration and sharing between peers, making technology-assisted small-group or partnered review a potential partial solution to the slow process of grading free-form concept maps.
Due to the product-oriented (rather than process-oriented) assessment that SAFI maps foster, they are much more appropriate as summative than as formative assessment. Yin notes that the SAFI technique is comparable to larger-scale, multiple-choice assessment: "multiple-choice tests still play an irreplaceable role in large-scale assessment, although they are criticized for missing important aspects of students' achievement." For example, one aspect of concept mapping that SAFI testing loses is the ability to assess the map's structure. According to Vanides, "Experts and highly proficient students tend to create highly interconnected maps, whereas novices tend to create simple structures that are linear, circular, a hub with spokes, or a tree with few branches." SAFI maps, where the structure is already built, lose this important perspective on student comprehension of complex relationships between ideas.
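One simple way the structural cue Vanides describes could be quantified — offered here as a hedged sketch, not a measure from any of the cited studies — is the ratio of links to concepts. A linear chain or tree of n concepts has at most n−1 links (ratio below 1), while the cross-linked maps typical of experts push the ratio above 1.

```python
def interconnectedness(propositions):
    """Return the links-per-concept ratio for a list of
    (concept_a, linking_phrase, concept_b) propositions."""
    concepts = set()
    for a, _, b in propositions:
        concepts.update([a, b])
    return len(propositions) / len(concepts) if concepts else 0.0

# Hypothetical examples: a novice-style chain vs. an expert-style cross-linked map.
linear_map = [("A", "leads to", "B"), ("B", "leads to", "C"), ("C", "leads to", "D")]
cross_linked_map = linear_map + [("A", "also affects", "C"), ("D", "feeds back to", "A")]

print(interconnectedness(linear_map))        # 0.75 — chain-like, novice pattern
print(interconnectedness(cross_linked_map))  # 1.25 — cross-linked, expert pattern
```

Because a SAFI template fixes the structure in advance, every student's map would yield the same ratio, which is precisely why this assessment perspective is lost.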
With both of these forms of concept map construction in use, instructors could feasibly use student-constructed concept mapping as formative assessment, then use this data, along with other in-class, unobtrusive, and observational assessment, to gauge student progress and identify preconceived notions and misconceptions formed in the learning process. Instructors can then tailor summative SAFI mapping assessments with appropriate standards-based vocabulary and common distractors to meet the needs of a specific set of students. While SAFI assessment is meant to be much more efficient, it is important that it not be used in isolation. Without free-form concept mapping practiced throughout the course to foster creativity and build strong mental connections between ideas, a summative SAFI mapping assessment would do little to foster higher-order learning.
Bramwell-Lalor, Sharon, and Marcia Rainford. "The Effects of Using Concept Mapping for Improving Advanced Level Biology Students' Lower- and Higher-Order Cognitive Skills." International Journal of Science Education 36.5 (2014): 839-864. Web. 10 Oct. 2014.
Dunn, Stephen. "The Power of Mapping: Using Concept & Mind Maps for Assessment." 22 Sept. 2014. Web. 28 Sept. 2014. <https://cgscholar.com/community/profiles/stephen-dunn/publications/44465>.
Hay, David B., Po Li Tan, and Eric Whaites. "Non-traditional Learners in Higher Education: Comparison of a Traditional MCQ Examination with Concept Mapping to Assess Learning in a Dental Radiological Science Course." Assessment & Evaluation in Higher Education 35.5 (2010): 577-595. Web. 10 Oct. 2014.
Novak, Joseph D., and Alberto J. Cañas. "The Theory Underlying Concept Maps and How to Construct and Use Them." Technical Report IHMC CmapTools 2006-01 Rev 01-2008. Institute for Human and Machine Cognition (2008): 1-36. Web. 28 Sept. 2014.
Schau, Candace, Nancy Mattern, Michael Zeilik, Kathleen W. Teague, and Robert J. Weber. "Select-and-Fill-in Concept Map Scores as a Measure of Students' Connected Understanding of Science." Educational and Psychological Measurement 61.1 (2001): 136-158. Web. 28 Sept. 2014.
Vanides, Jim, Yue Yin, Miki Tomita, and Maria Araceli Ruiz-Primo. "Using Concept Maps in the Science Classroom." Science Scope 28.8 (2005): 27-31. Web. 28 Sept. 2014.
Yin, Yue, Jim Vanides, Maria Araceli Ruiz-Primo, Carlos C. Ayala, and Richard J. Shavelson. "Comparison of Two Concept-Mapping Techniques: Implications for Scoring, Interpretation, and Use." Journal of Research in Science Teaching 42.2 (2005): 166-84. Web. 28 Sept. 2014.