Understanding Test Item Quality


Copyright © 2011, Common Ground Research Networks, All Rights Reserved

Abstract

This study explores understanding of the quality of test items. The main emphasis is on the procedure for developing unbiased test items through item analysis of model papers. Data were collected from ninth-grade students in three districts of Punjab by administering an achievement test in chemistry. Analyses of item characteristics, item difficulty, item discrimination indices, distractors, and differential item functioning (DIF) were carried out using computer software such as SPSS, ITEMAN, and BILOG, together with the Mantel-Haenszel procedure. These tools were used to categorize items as rejected, poor, good, or biased. The resulting indices were compared under Classical Test Theory (CTT) and Item Response Theory (IRT). Our analyses indicate which items are weak and which need revision. The IRT-based analysis identified 14 items as moderate, 18 as difficult, and none as easy; 38 items were good discriminators. The Mantel-Haenszel procedure identified 15 items as biased across genders and 25 items as biased by location.
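To illustrate the classical indices referred to in the abstract, the sketch below computes item difficulty (p-values), an upper-lower (27%) discrimination index, and a simple Mantel-Haenszel common odds ratio for one item on dichotomous (0/1) response data. This is a minimal illustration under stated assumptions: the function names, the five-stratum score grouping, and the simulated data are hypothetical and are not the study's actual procedure, which used SPSS, ITEMAN, BILOG, and the Mantel-Haenszel method.

import numpy as np

def item_difficulty(responses):
    # CTT p-value: proportion of examinees answering each item correctly.
    # responses: 0/1 array of shape (n_examinees, n_items).
    return responses.mean(axis=0)

def item_discrimination(responses):
    # Upper-lower discrimination index D: difference in item p-values
    # between the top 27% and bottom 27% of examinees by total score.
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(0.27 * len(totals))))
    lower, upper = responses[order[:k]], responses[order[-k:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

def mantel_haenszel_odds_ratio(item, group, totals, n_strata=5):
    # Common odds ratio for one item across total-score strata; values far
    # from 1 flag potential DIF.  item: 0/1 scores; group: 0 = reference,
    # 1 = focal; totals: total test scores used for stratification.
    # The choice of five strata here is an illustrative assumption.
    edges = np.quantile(totals, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, totals, side="right") - 1,
                     0, n_strata - 1)
    num = den = 0.0
    for s in range(n_strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal incorrect
        t = a + b + c + d
        if t == 0:
            continue
        num += a * d / t
        den += b * c / t
    return num / den if den > 0 else np.nan

# Toy usage on simulated data (300 examinees, 40 items); the group
# variable stands in for gender in the DIF check.
rng = np.random.default_rng(0)
X = (rng.random((300, 40)) < rng.uniform(0.3, 0.8, 40)).astype(int)
group = rng.integers(0, 2, 300)
print(item_difficulty(X)[:5])
print(item_discrimination(X)[:5])
print(mantel_haenszel_odds_ratio(X[:, 0], group, X.sum(axis=1)))

In classical item analysis, p-values near the middle of the range and discrimination indices above roughly 0.3 are generally considered acceptable, which is the kind of screening the study uses to label items as rejected, poor, or good.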