Rubric Development and Inter-Rater Reliability Issues in Assessing Learning Outcomes

Authors

  • James A. Newell, Rowan University
  • Kevin D. Dahm, Rowan University
  • Heidi L. Newell, Rowan University

Abstract

This paper describes the development of rubrics that help evaluate student performance and relate that performance directly to the educational objectives of the program. Issues in accounting for different constituencies, selecting items for evaluation, and minimizing the time required for data analysis are discussed. Aspects of testing the rubrics for consistency between different faculty raters are presented, along with a specific example of how inconsistencies were addressed. Finally, the difference between course-level and programmatic assessment, and the applicability of rubric development to each, is considered.
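
The abstract's concern with consistency between faculty raters is, in measurement terms, a question of inter-rater reliability. As a rough illustration of how such agreement might be quantified, the Python sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, for two hypothetical faculty raters scoring the same ten pieces of student work on a four-level rubric. The rater scores and level labels are invented for illustration and are not drawn from the paper itself.

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters.

        kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
        agreement and p_e is the agreement expected by chance given
        each rater's marginal distribution of scores.
        """
        n = len(rater_a)

        # Observed agreement: fraction of items scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement: sum over categories of the product of the
        # two raters' marginal proportions for that category.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
                  for c in set(rater_a) | set(rater_b))

        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores: two faculty members rating ten student
    # reports on a four-level rubric (1 = unacceptable ... 4 = exemplary).
    faculty_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
    faculty_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

    print(f"Cohen's kappa: {cohen_kappa(faculty_1, faculty_2):.2f}")

A kappa near 1 indicates agreement well beyond chance, while values near 0 suggest the rubric descriptors are being read differently by different raters, the kind of inconsistency the paper's example addresses.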

Author Biographies

James A. Newell, Rowan University

James Newell is Associate Professor of Chemical Engineering at Rowan University. He is currently Secretary/Treasurer of the Chemical Engineering Division of ASEE. His research interests include high-performance polymers, outcomes assessment, and integrating communication skills throughout the curriculum.

Kevin D. Dahm, Rowan University

Kevin Dahm is Assistant Professor of Chemical Engineering at Rowan University. He received his PhD in 1998 from the Massachusetts Institute of Technology. Before joining the faculty of Rowan University, he served as Adjunct Professor of Chemical Engineering at North Carolina A&T State University.

Heidi L. Newell, Rowan University

Heidi Newell is the Assessment Consultant for the College of Engineering at Rowan University. She holds a PhD in Educational Leadership from the University of North Dakota, an MS in Industrial/Organizational Psychology from Clemson University, and a BA in Sociology from Bloomsburg University of Pennsylvania.

Published

2002-07-01

Section

Manuscripts