We’re just finishing up three days at NC State’s Assessment Symposium. 500 educators from around the USA have come together to talk about student learning, “closing the loop,” and accreditation.

Many of the sessions are focused not just on data-gathering, but on teaching and learning. A number of attendees have talked about the change they’ve seen even since last year: a focus on bringing assessment into the process of teaching (!). That is, avoiding the mad dash to generate data just for accreditation, which often results in two databases of student learning outcomes. One presenter said that on her campus administrators referred to the “shadow database,” which reminded me of a business owner keeping two sets of books – one for the IRS and one for the real world.

We gave a 60-minute presentation on best practices in course-embedded assessment. We must have had at least 50 people in attendance – there not so much to learn about Waypoint as to gain insight into how other schools execute.

I spoke in three general areas:

  1. Getting faculty help with the challenges of formal assessment
  2. “Closing the loop” – using data to inform changes in curricula
  3. Using a sampling approach to gather data quickly and efficiently for benchmarking purposes

Getting faculty help with the challenges of formal assessment:

We increasingly talk to senior administrators about the need to see authentic assessment and course-embedded assessment as more than a software-training challenge. This work is not about clicking the right buttons in Blackboard or Waypoint. Too often a roll-out plan is based on a two-hour “hands-on” training session in which faculty watch an outside consultant or instructional technologist walk through basic functionality. This approach only works for the people who don’t need it – the types who can kick the tires, click around, and figure software out on their own.

Constructing knowledge is much more powerful than having it pushed at you. So the two best practices I highlighted, one from a private university and the other from a state university, are essential to any real culture change.

The first approach has instructional technologists working with faculty – in faculty offices, on faculty schedules – to develop assessments (Waypoint or otherwise). The instructional technologists, who are perhaps more facile with web-based tools, can help with the work while also bringing a structured methodology to rubric and assessment design. And after a personal meeting or two, the faculty member has something they can use. After a training session, faculty usually have a headache.

The institution in question has built this collaborative work around a writing-across-the-curriculum initiative, and a technology-and-teaching initiative has caught on in all four major academic divisions at the same time – a first for the school.

The other “helping faculty” approach is an institution’s commitment to a three-day course to teach and implement rubrics. Three days. Faculty have to write a short proposal to be considered, and space is limited. The department head has to sign off on the application, which puts the process on the radar for annual reviews (a smart move). There is even a $200 stipend to be used for professional development activities.

All participants have been given Arter and McTighe’s book, Scoring Rubrics in the Classroom. The goal is to develop an understanding of rubric design and then to build detailed rubrics – first on paper, then in Waypoint. The seminar is being taught by the school’s own faculty, not outsiders.

For me, these are two concrete, attainable ways that institutions can take a longer-range approach to supporting faculty as they develop rubrics, gather outcomes data, and, most importantly, improve their own teaching.

Once I get back to the office, I’ll post on the second and third areas we discussed: “closing the loop” and gathering data quickly via sampling.