Assessing Ourselves: Inter-rater Reliability Issues in Assessing Student Learning Outcomes

Abstract

Student learning outcomes assessment requires two levels of assessment: assessing student work and assessing how well the program assesses that work. In a professional master’s degree program, a rubric was created to assess student work for achievement of program-level educational outcomes. That rubric was used from 2018 to 2023 (six years), and over that time the inter-rater reliability scores varied widely. One issue is that student work is designed to demonstrate achievement of course learning objectives, yet the Assessment Committee must apply the rubric to assess that work for achievement of program learning outcomes, which differ from the course objectives. Another is that multiple types of reviewers have used the rubric over those six years: members of the committee who are full-time department faculty, as well as members of the department’s advisory board, some of whom are part-time department faculty and some of whom have no faculty experience. This paper reports a secondary analysis of inter-rater reliability scores for the rubric, with recommendations from the literature on how to improve reliability in assessing student learning outcomes.

Presenters

Lauren Mandel
Associate Professor, Graduate School of Library and Information Studies, University of Rhode Island, Rhode Island, United States

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

Assessment and Evaluation

Keywords

Learning Outcomes Assessment, Rubrics, Graduate Education, Higher Education, Online Learning