Assessment in STEM Problem-Solving

Copyright © 2022, Common Ground Research Networks, All Rights Reserved

Abstract

Problem-solving related to science, technology, engineering, and mathematics (STEM) is a core competency in education for meeting the challenges of the twenty-first century. Success in STEM problem-solving can be measured through assessment; however, comprehensive reviews of assessment in STEM problem-solving remain scarce. This study aims to provide a systematic overview of the assessment trends and detailed information on the topics, participants, and frameworks used in STEM problem-solving. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model was used to extract articles published between 2010 and 2020 according to inclusion and exclusion criteria. The selected articles were analyzed for risk of bias and categorized by discipline and learning domain. A total of seventy-seven articles met these criteria, most of them in the science and mathematics areas and conducted with middle school students. It was found that the most common assessments in monodisciplinary and interdisciplinary areas were the essay test (n = 34) and the complex problem scenario test (n = 11), respectively. Program for International Student Assessment (PISA) content was widely used in STEM problems. Engineering design and technology served as integrators in cognitive interdisciplinary and transdisciplinary STEM problem-solving assessments. Questionnaires and in-depth assessments were also used, primarily to measure metacognitive and affective factors in monodisciplinary and interdisciplinary areas. Issues of sample size, assessment specification, frameworks, challenges, drawbacks, and advantages are discussed. This study supports future research by suggesting the most suitable assessments for STEM problem-solving.