
Validity & Reliability of Sheridan's Course Evaluation Framework & Questions

The design and development of Sheridan's new course evaluation framework, initially undertaken at the University of Toronto, was guided by a commitment to an evidence-based perspective: theoretical and empirical evidence informed all decisions about the framework's overarching assessment structure and its survey development. Particular attention was therefore paid to design processes that promote key aspects of validity (e.g., content and construct validity). Most importantly, this meant establishing space at the institutional level to develop a clear, contextual understanding of what defines good teaching and, above all, what defines positive learning experiences for students at the institution. View a brief review of the elements required to help ensure validity and utility in course evaluation surveys.

At Sheridan, this was accomplished through extensive consultation with all stakeholders, including senior administration, Deans, Associate Deans, faculty, staff, and union representatives. The core items were developed through a thorough, iterative design process that combined contextually relevant input with empirical, expert-informed design, an approach shown to be among the most effective for designing tools to evaluate teaching (Spooren, Brockx, & Mortelmans, 2013).

The item bank (from which professors can select additional questions) was also designed and developed at the University of Toronto, following similar principles, with the understanding that it would supply professors with items they could add to their course evaluation forms to gather more granular, formative feedback. Item bank themes were generated from the extensive theoretical and empirical literature on course evaluations, including best-practice recommendations about content areas that students have empirically been shown to find difficult to evaluate (Gravestock & Gregor-Greenleaf, 2008). These themes were then operationalized into specific content-focused items, following survey-methodology best practices for item clarity, specificity, and tone (DeVellis, 2003; Krosnick & Presser, 2010). Throughout this process, professor feedback on item theme categories led to the inclusion of additional, more granular categories containing items that can be used to evaluate specific course-level instructional experiences, techniques, and opportunities for learning. Further additions to the item bank are planned to enhance its utility in Sheridan's context.

The following document provides additional statistical support for the validity and reliability of Sheridan's core course evaluation items. These analyses are based on samples within Sheridan's context; while they provide substantial evidence that the framework's course evaluation items relate meaningfully to one another, the data are limited to this context and will benefit from continued analyses as our institutional partners employ the framework within their own contexts. The analyses focus on testing the core items' underlying structure, their interrelatedness, and their convergent and concurrent relationships with item bank items. Download the document.
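To make the kind of reliability analysis described above concrete, the sketch below shows one common way such evidence is computed: Cronbach's alpha for internal consistency, plus an inter-item correlation matrix as evidence that items relate meaningfully to one another. This is a minimal illustration only, not Sheridan's actual analysis; the response matrix, sample size, rating scale, and item names are all hypothetical, and the real results are reported in the downloadable document.

```python
# Illustrative sketch of an internal-consistency analysis for a set of
# course evaluation items. All data and column names are hypothetical.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for k items:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: one row per student, one column per core item,
# rated on a 1-5 scale.
responses = pd.DataFrame(
    np.random.randint(1, 6, size=(200, 6)),
    columns=[f"core_item_{i}" for i in range(1, 7)],
)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# Inter-item correlation matrix: one simple check of interrelatedness.
print(responses.corr().round(2))
```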

For reference, below are Sheridan's core quantitative course evaluation items, which are used across the College for all courses.

Course Evaluation Items

Note: Content used in this document is adapted with permission from the University of Toronto's Course evaluation guidebook and resource manual (Rolheiser, Werhun, & Gravestock, 2017).


References

DeVellis, R. F. (2003). Scale development: Theory and applications. Thousand Oaks, CA: Sage Publications.

Gravestock, P., & Gregor-Greenleaf, E. (2008). Student course evaluations: Research, models and trends. Toronto: Higher Education Quality Council of Ontario.

Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of survey research (pp. 263–313). Bingley, UK: Emerald Group Publishing.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598–642.
