What do students get for investing four to six years in higher education? For the opportunity cost, the tuition dollars, the resulting debt?
What is the return to taxpayers footing the bill for billions in federal and state loans, grants, and outright subsidies?
How can consumers and businesses measure the quality and value of a degree?
These aren’t just billion-dollar questions; they are trillion-dollar questions once you consider the impact on GDP over the next twenty years.
Let’s review the ways various stakeholders currently attempt to measure quality and value, starting with the widely accepted proxies for quality:
Grades
Grades certainly motivate student behavior, although often it’s of the “how do I scrape by with a ‘B’” variety. We all know, however, that grades are a poor stand-in for achievement. Einstein failed his university entrance exam, and Steve Jobs had a 2.65 high school GPA. For the rest of us non-geniuses, a decent GPA is perhaps necessary, but not sufficient, to earn a decent job out of school. From a broader perspective, the very nature of grades (they validate only the ability to operate within a system) and rampant grade inflation make it difficult to accept grades as any sort of proxy for quality.
Survey Data
If we can’t depend on the actual educational process and its chief form of assessment, perhaps we can simply ask students if they’re learning anything. Whether at the course-evaluation level, where surveys often devolve into popularity contests, or at the alumni level, this data can hardly be depended on to guide decision-making. RateMyProfessors.com is more interesting, but this division of MTV/Viacom proudly considers “easiness” and “hotness” to be important characteristics in a teacher. Its list of top schools, as voted by students, which places Stanford at #1 but the University of Memphis at #2, is a welcome antidote to US News & World Report’s rankings, but it isn’t granular or rigorous enough to be of use in decision-making. Rate My Professors is already a major presence in higher education, yet its ratings haven’t affected the quality of instruction.
Rankings
See Malcolm Gladwell’s takedown of this useless process, which serves mainly elite, wealthy institutions.
Accreditation
Arguably the most formal attempt at quality control in higher education is the accreditation system, both regional and professional. It is a peer-assessment process, which is to say self-policing. Regardless of the quality or depth of that policing, just about every institution is accredited. And if everyone is accredited, if every MBA program within 20 miles of a city like Philadelphia is AACSB-accredited, then how do we differentiate? To paraphrase Dash from The Incredibles, saying everyone is special is the same as saying no one is.
Salary Data
There is some interesting work going on here, at least for those who equate quality and value with getting a decent-paying job. Funded by the Lumina Foundation and others, CollegeMeasures.org is working with some states to connect salary data to graduation records, at least for graduates of state institutions who stay in-state and file a state tax return. But there are too many asterisks next to the data for it to be of much use, and it can’t separate an individual’s drive and persistence from what a college education actually contributed.
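To make the matching problem (and its biggest asterisk) concrete, here is a minimal sketch in Python, assuming two hypothetical record sets, graduation records and state tax filings, keyed by a shared person identifier; CollegeMeasures.org’s actual methodology is surely more involved.

```python
# Hypothetical graduation records and state tax filings, keyed by a
# shared (and in reality hard-to-obtain) person identifier.
graduates = {
    "p1": {"institution": "State U.", "major": "Nursing", "year": 2010},
    "p2": {"institution": "State U.", "major": "History", "year": 2010},
    "p3": {"institution": "State U.", "major": "Nursing", "year": 2010},
}
tax_filings = {  # graduates who left the state simply never appear here
    "p1": {"wages": 52000},
    "p3": {"wages": 48000},
}

# Inner join: only graduates with an in-state return can be measured,
# which is exactly the coverage gap noted above.
matched = {pid: {**grad, **tax_filings[pid]}
           for pid, grad in graduates.items() if pid in tax_filings}
coverage = len(matched) / len(graduates)
print(matched)
print(f"coverage: {coverage:.0%}")  # 67% in this toy example
```

The point of the toy join is that every graduate who moves out of state silently drops out of the denominator, biasing any salary statistic built on top of it.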
Retention and Persistence
Many (including President Obama) are pushing a simplistic view of higher education: the more students we graduate, the better. And if we can do it more cheaply, improving on the current status quo in which only 58% of incoming freshmen graduate within six years, all the better. The Chronicle of Higher Education has a beautiful, interactive website devoted to documenting the problem. But if we simply measure throughput, while other quality measures remain weak or nonexistent, then we’ll end up sacrificing rigor and learning to hit our numbers. At a recent presentation, the president of a community college system (~100k students) defined success by the share of As and Bs awarded (88%, so only 12% received less than a B) and by raising the average number of credits completed per term to 5.5 (from 3). This is the kind of thinking that got GM into such trouble in the 70s.
There are broader, more creative ways of measuring the value of education. For instance, the mortgage crisis, state lotteries, and the level of discourse in Presidential debates could all be seen as verdicts on our education system’s ability to teach critical thinking, information literacy, and communication skills. But while cynically correct, these conclusions don’t help us choose institutions, reward successful initiatives, or understand how to improve education.
LinkedIn, arguably, has the best potential view over education and success. Leaving aside the veracity of the information (let’s wave our magic wand and assume they add a paid verification/credentialing capability), LinkedIn has enough data on the trajectories of successful people to create rich visualizations of what happens to you if you study X at institution Y. I’m not privy to their roadmap, but if I were at LinkedIn I’d be thinking very deeply about this power, and about how to validate learning on a grand scale.
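As an illustration of the kind of aggregation involved, here is a minimal sketch over a hypothetical set of profile records; it assumes nothing about LinkedIn’s real data model, API, or roadmap.

```python
from collections import defaultdict
from statistics import median

# Hypothetical profile records: (major, institution, current title, salary)
profiles = [
    ("Computer Science", "Y University", "Software Engineer", 125000),
    ("Computer Science", "Y University", "Product Manager", 140000),
    ("English", "Y University", "Editor", 62000),
    ("Computer Science", "Z College", "Software Engineer", 98000),
]

def trajectories(records):
    """Group outcomes by (major, institution) and summarize them."""
    buckets = defaultdict(list)
    for major, school, title, salary in records:
        buckets[(major, school)].append((title, salary))
    return {
        key: {
            "n": len(vals),
            "titles": sorted({t for t, _ in vals}),
            "median_salary": median(s for _, s in vals),
        }
        for key, vals in buckets.items()
    }

for key, summary in trajectories(profiles).items():
    print(key, summary)
```

Grouping by (major, institution) is the simplest possible cut; the interesting product questions are about which additional dimensions (graduation year, first job, geography) make the picture meaningful rather than misleading.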
Higher education is, with some exceptions, still a craft endeavor: individual artisans (i.e., professors) create courses and materials to share knowledge with their students. As with any hand-hewn object, quality is all over the map. Most of us are lucky enough to remember one or two gifted professors who presented material, even material we weren’t originally that interested in, with such passion and wit that we fell in love with the subject. They cared about students, were available for guidance and conversation outside of class, and actually showed up during office hours.
A craft system does not provide any mechanism for assuring quality. This isn’t a criticism; it’s just a fact. If higher education is going to answer the questions we’ve posed, and the traditional answers to “how are our students doing?” don’t have much value, where can we look for better models?
Bringing up ‘manufacturing’ in the same conversation as education is a dangerous move.
Everyone knows our education system was designed 150 years ago to create workers for the industrial revolution, for factories. Rote memorization, standardized testing: it all plays into our worst nightmares of factory education. But in manufacturing, craft gave way long ago to mass production, which in turn gave way to mass customization and lean manufacturing. The shoddy quality and worker-drone jobs depicted by Charlie Chaplin in Modern Times gave way to systems capable of producing extremely high-quality products in near-infinite variations. Manufacturing understands quality, and it understands that you can’t inspect quality into a system.
So how would a manufacturing expert look at higher education? Because we can’t control our inputs (students are not raw material) and motivation and ‘grit’ play a huge role in success, we have to measure the aspects of education that institutions can reasonably control: courses and course design, the quality and rigor of assessments, and teaching effort.
The accreditation process could address these aspects of university life, but it doesn’t. And it won’t – we cannot expect institutions to require dramatic change of themselves. The only forces powerful enough to help large institutions change are market forces.
So my proposal focuses on what, in retail, we would call “mystery shoppers.”
An organization dedicated to defining quality standards for the various aspects of university life (quality of materials, teaching, assessments/rigor, advising, financial aid processes, campus/online life) could train a small army of reviewers who would enroll in university courses to assess them in depth. A sampling approach, common in manufacturing, would define how many courses must be reviewed to rate an institution against these measures; a rough sketch of the math follows.
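To give a feel for what that sampling math might look like, here is a minimal sketch using the standard sample-size formula for estimating a proportion, plus a finite-population correction for smaller course catalogs; the margin, confidence level, and catalog size are illustrative assumptions, not a worked-out methodology.

```python
import math

def required_reviews(margin=0.05, z=1.96, p=0.5):
    """Course sections to review so the estimated share of sections
    meeting a quality bar lands within +/-margin at ~95% confidence
    (z = 1.96); p = 0.5 is the worst-case variance assumption."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def finite_population(n, catalog_size):
    """Shrink the sample when an institution offers few sections."""
    return math.ceil(n / (1 + (n - 1) / catalog_size))

n = required_reviews()                 # 385 sections at +/-5%
print(n, finite_population(n, 1200))   # 385 292: fewer for a small catalog
```

The takeaway is that a few hundred reviewed sections per institution, not thousands, could support a defensible rating, which is what makes the mystery-shopper model even conceivable at scale.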
These mystery shoppers would need to be fully matriculated students, which would bring its own challenges. Perhaps current students at institutions could be trained in a methodology and their evidence reviewed for accuracy. Highly selective universities would perhaps escape assessment, at least for a while.
The resulting dashboard of ratings, and public evidence (everything from videos showing professors skipping their office hours, to fantastic lectures, to incoherent emails and course websites), would revolutionize education.
Who will pay? Given the amount of money being thrown at education right now, this question is the least of anyone’s problems. And the energy and interest around US News & World Report’s rankings suggest there is a business model that could work. I’ll investigate the practicality of such a business model in a subsequent post.