Schools put a lot of effort into surveys: alumni surveys, course surveys, faculty surveys.

An article in yesterday’s Chronicle summarizes work done at Cornell University to study the effectiveness of surveys of student engagement. Here’s the main take-away:

Their paper examines response rates of Cornell’s class of 2006 as the students progress through the university. In the fall of 2002, the authors say, 96 percent of first-time, full-time freshmen responded to the Cooperative Institutional Research Program Freshman Survey, a paper-and-pencil questionnaire administered by the Higher Education Research Institute at the University of California at Los Angeles.

But in similar surveys, given online in the students’ freshman, sophomore, and junior years, the response rates were 50, 41, and 30 percent, respectively. A final survey of graduating seniors collected data from 38 percent of them.

Those who completed the follow-up surveys were predominantly women, the Cornell researchers say, and they had higher grade-point averages than those who did not respond.

Surveys are easy: a relatively small number of people (often just one) can administer online surveys to thousands of students, then collect the data. Other forms of assessment are far more time-consuming and require culture change, the mustering of resources, and so on. So while the community of survey specialists worries about ‘survey fatigue,’ whether students are completing surveys after 9 p.m. (when they could be ‘partying’), and other questions familiar to most marketing executives, our institutions grow increasingly dependent on this single data source for major decision-making.

The statisticians argue that something is better than nothing and that they can control for all kinds of oddities; according to the article, a 30% response rate is acceptable. What is stunning to us about the report is that it is considered news: every institution has this kind of data and the ability to bounce student attributes against survey data (and other data). But, in general, student evaluations are read very literally (as anecdotal evidence, without context) and are used to make major decisions about tenure and curriculum redesign.
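That “bounce student attributes against survey data” step is, mechanically, just a join and a comparison. A toy sketch of the idea, assuming entirely hypothetical registrar and survey-response tables (every name and number below is invented for illustration, not drawn from Cornell’s data):

```python
from statistics import mean

# Hypothetical registrar records: student id -> (GPA, gender).
students = {
    1: (3.9, "F"), 2: (3.2, "M"), 3: (3.6, "F"),
    4: (2.8, "M"), 5: (3.4, "F"), 6: (3.0, "M"),
}
# Ids of students who answered the follow-up survey.
responders = {1, 3, 5}

# Compare responders against non-responders on the attributes we hold.
resp_gpa = mean(students[s][0] for s in responders)
nonresp_gpa = mean(g for s, (g, _) in students.items() if s not in responders)
resp_female = sum(students[s][1] == "F" for s in responders) / len(responders)

print(f"responders: mean GPA {resp_gpa:.2f}, {resp_female:.0%} female")
print(f"non-responders: mean GPA {nonresp_gpa:.2f}")
```

A gap like the one this toy data shows is exactly the response bias the Cornell researchers report: the students who answer skew female and higher-GPA, so raw survey averages describe the responders, not the class.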

Do results like Cornell’s invalidate the process? Not at all – but they should prompt changes to the survey process (that is the goal of outcomes assessment: to improve processes through rigorous analysis). Similarly, the vast amount of data in a school’s back-end database (Banner, Datatel) should be put to much greater use. How much does grade inflation vary across particular courses? Which courses and programs receive the best course evaluations? These data prompt questions that would help improve outcomes while addressing some of the incoherence students experience, and tying the various data streams together would help build a complete picture. So a teacher gets slammed on course evaluations for being too ‘hard’? Perhaps they are grading significantly harder than their colleagues (which doesn’t necessarily mean they should be the one to change!). This sort of inquiry is second nature to most academics; we just don’t apply our analytical and research skills to our most important undertaking: teaching.
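The grading question above takes only a few lines against a grade export. A minimal sketch, assuming a hypothetical flat file of (course, instructor, 4.0-scale grade) records – the course names, threshold, and schema are ours, not Banner’s or Datatel’s:

```python
from statistics import mean

# Hypothetical grade export: (course, instructor, grade on a 4.0 scale).
records = [
    ("CALC101", "Smith", 3.8), ("CALC101", "Smith", 3.6),
    ("CALC101", "Jones", 2.9), ("CALC101", "Jones", 3.1),
    ("HIST210", "Lee", 3.5), ("HIST210", "Lee", 3.4),
]

# Mean grade for each (course, instructor) section.
by_section = {}
for course, instructor, grade in records:
    by_section.setdefault((course, instructor), []).append(grade)
section_means = {k: mean(v) for k, v in by_section.items()}

# Within each course, collect the section means so we can see the spread.
by_course = {}
for (course, _), m in section_means.items():
    by_course.setdefault(course, []).append(m)

# Flag instructors whose section mean sits well below the course-wide
# mean: candidates for "grading harder than their colleagues" -- which,
# as noted above, does not mean they are the ones who should change.
for (course, instructor), m in sorted(section_means.items()):
    course_mean = mean(by_course[course])
    if m < course_mean - 0.3:  # 0.3 GPA points: an arbitrary cutoff
        print(f"{instructor} in {course}: section mean {m:.2f} "
              f"vs course mean {course_mean:.2f}")
```

Put this next to the course-evaluation scores for the same sections and the ‘too hard’ complaint acquires context instead of standing as anecdote.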