It is now widely accepted that the job of college is to get you a job. While schools talk an employability game, they have done little to change their underlying processes and programs. This is not a criticism. The conservative nature of universities is one of their greatest assets. But what if a leading indicator of higher education’s alignment with the 21st Century workplace was hiding in plain sight and could be easily used to benchmark current state and measure progress?
Fifteen years ago, purposeful alignment between higher education and the workforce was an uncomfortable concept, at least for most faculty and administrators. Schools with unabashedly job-oriented missions, like my alma mater, Drexel University, were considered second-class citizens in some circles. Those suspicious of such capitalistic missions believed higher education should focus on learning that rose above the needs of any specific career – the great books, etc. Of course the professional schools (engineering, the hard sciences) were allowed to be more prescriptive. But the overarching mission of the university was not ‘career training.’
No longer. One succinct illustration came in 2016 from Christopher B. Howard, Robert Morris University’s incoming president, writing in The Washington Post: “It’s not exactly in our DNA in higher education to talk about return on investment, but we are going to have to start demonstrating it to parents, students and government officials from both sides of the aisle.”
What does it mean to “train” college graduates for the 21st Century workplace? The QA Commons, an initiative funded by Lumina, has an elegant and concise description of the required skills (emphasis mine):
- People skills such as collaboration, teamwork, and cultural competence
- Problem-solving abilities such as inquiry, critical thinking, and creativity
- Professional strengths such as communication, work ethic, and technological agility
The irony? A rigorous liberal arts education does a pretty good job of getting the above work done – maybe a little weak on teamwork and “technological agility,” but much closer than your average undergraduate business school management curriculum.
The new interest in career readiness isn’t coming from academic arguments that liberal education serves the needs of the 21st Century workplace, however. The change is driven by the rise in student debt, fueled by schools (like Drexel) that think nothing of a $70k/year total cost of attendance. Perhaps you get an annual $20,000 ‘scholarship,’ which parents may brag about at summer barbecues, but the total cost for a private institution, even with generous scholarships, is still $200,000 over four years. And who graduates in four years? The fear of this debt is what has forced the higher education conversation to focus on employability. The kids have to pay back those loans! And with 43% of recent college grads underemployed (defined as holding a job that doesn’t even require a college degree), and the proven damage such initial underemployment does to career and income for ten-plus years, the stakes are clearly quite high. If employers want communication skills and technological agility and problem-solving and critical-thinking abilities, why can’t an Economics major with a Biology minor get a good job? Are employers lying about these needs? Are schools not delivering on them? Is getting a job a completely irrational process based on network, luck, and deeply held biases?
Of course all three are true. But schools can only work on what they can control. And the best tool for aligning their curricula with the needs of employers already exists and can be put to use quite easily to benchmark current alignment and measure progress.
The humble course syllabus is a confluence of interests: the institution’s, in defining standards and learning outcomes to satisfy accreditors and organize its course catalog; the teacher’s, in setting out the work to be done by students along with basic course logistics and policies; and the student’s, in understanding how they will be graded.
Much of undergraduate education revolves around low-level learning, particularly in the US. When we say “assessment” we really mean “multiple-choice testing.” Which, by its very definition, means we’re largely assessing basic, forgettable facts and not the competencies that the workplace cares about: synthesizing, creating, persuading. Students tend to cram for assessments of this type, which is proven to hurt longer-term recall, especially if the information is not reviewed soon after the initial assessment and retested.
For instance, a recent small-enrollment, online US History course at a large US community college used a Pearson textbook that called out creating “an argument through the use of historical evidence” as a major learning goal. But all of the assessment in the course asked questions like:
Who invented interchangeable parts?
- Eli Whitney
- Cotton gin
- Francis Cabot Lowell
- Nativists
This question must be one of the easier ones, designed by psychometricians to warm students up and build their confidence. It doesn’t even spell “Cotton gin” as a proper noun, giving away that choice 2 isn’t actually a person’s name (if it weren’t already obvious). “Nativist” doesn’t seem like an answer to a “who” question…so we’re down to two choices.
Questions like this one, lifted verbatim from publishers’ test banks, are all over the web. So if an online student is taking a non-proctored quiz, they can quickly find the exact question asked and its answer. That’s what Google search does.
The education-technology complex’s response to this undermining of academic integrity? Build a proctoring infrastructure where humans and computers, sometimes in combination, watch students take low-level assessments of learning.
Certainly, understanding Eli Whitney’s impact on the industrial revolution is important. Questions like this would be appropriate in a quick quiz, perhaps given at the beginning of a synchronous class as Dr. Scott Warnock champions. But in this online history course, multiple-choice exams, delivered in a proctored testing center and stacked with such questions, constitute 80% of the class grade. The goal I shared above was Pearson’s goal, not the professor’s. In the course syllabus, under “Course Objective,” the professor included one line, stating that the course’s goal was to “survey” the history of the United States until just after the Civil War. Which, to be fair, is an accurate description of the course and its assessment methodology. It surveys. It does not attempt to teach students to “[create] an argument” or develop a point of view, never mind “think like an historian.” There is nothing about this fourteen-week course that touches on any of the competencies described by the QA Commons and required by the 21st Century workplace.
This US History course might be a particularly negative example of higher education’s misalignment with workplace needs – the poster child for what Sir Ken Robinson would describe as a factory system preparing students to work in factories doing repetitive, low-skill tasks. While we want to pretend all schools are equal, perhaps a US History course taught at a highly selective liberal arts university would prove that there is a considerable difference in standards, and therefore in ROI for the student.
A search for example syllabi for upper-level business courses turned up a complex document from a large public university with a good reputation. 58% of the course grade depended on reflective journals, end-of-chapter written case analyses, and a 12-18 page research paper (not including title page, references, and appendix). There was no teamwork, which is one of the harder skills for universities to teach in online classes, even though a large minority of knowledge-economy workers spend their days in video meetings, collaborating with teammates and partners they’ve never met in person and probably never will.
But 58% of the course grade based on writing and research and thinking? Now we’re talking!
A simple metric, then, at the academic program, college, or university level would be to crunch the data on course syllabi, which are readily available, and look at the percentage of assignments that require the skills all employers say they want. Presentations requiring original research. Written work that persuades and solves problems. Projects that require collaboration and ongoing meetings and divisions of labor. I would be very surprised if, over any academic major, this number rose above 20% at most larger private and public universities. If 120 credits are required to graduate then only 24 credits will have required this sort of work. Six or seven classes out of four years – which is probably generous.
And there is a catch.
Schools should be about learning, not rubber-stamping abilities that already exist. That’s where the ROI comes in – the justification for a $100,000 price tag and $60,000 of debt. These complex, authentic assignments cannot simply be handed in at the end of the term and marked with a letter grade. Students cannot receive feedback focused solely on grammar and APA formatting – the bulk of the feedback on writing provided by faculty in all disciplines. To quote Grant Wiggins at length (emphasis mine):
What do I mean by “authentic assessment”? It’s simply performances and product requirements that are faithful to real-world demands, opportunities, and constraints. The students are tested on their ability to “do” the subject in context, to transfer their learning effectively.
The best assessment is thus “educative,” not onerous. The tasks educate learners about the kinds of challenges adults actually face, and the use of feedback is built into the process. In the real world, that’s how we learn and are assessed: on our ability to learn from results.
Good feedback and opportunities to use it are extremely important in this scenario.
In their seminal report Inside the Black Box: Raising Standards Through Classroom Assessment, British researchers Paul Black and Dylan Wiliam showed that improving the quality of classroom feedback offers the greatest performance gains of any single instructional approach. “Formative assessment is an essential component,” they wrote, and its development can raise standards of achievement. “We know of no other way of raising standards for which such a strong prima facie case can be made.”
This just makes sense. The more you teach without finding out who understands the information and who doesn’t, the greater the likelihood that only already-proficient students will succeed.
Much writing, particularly in online classes, can be busywork – posting discussions, responding to fellow students, and never receiving substantive feedback on your thinking. The discussions can be superficial and repetitive, with instructors providing polite commentary. An 18-page research project can only benefit the student if there are multiple drafts, each receiving targeted feedback focused on the content of the course, critical thinking, problem solving, and original research. An 18-page book report doesn’t help prepare anyone for the 21st Century workplace, especially if it is handed in during the last week of classes (exam week!) and never returned to the student.
So the simple metric is:
- What percent of a course grade is based on authentic work? Presentations, complex written work, research?
- What percent of this work receives feedback, on workforce-focused competencies, that is put to use in the course to demonstrate learning and progress?
Example Assignments and Grade Distribution:
In the above generic course syllabus, 15% of the grade would be considered workforce aligned. The Final Draft of the Research Paper is submitted at the end of the course and students will not use feedback on it to improve their work, so it does not count towards workforce readiness. Discussion posts are graded with a simple rubric – no real feedback opportunity. Notice, too, that this course does not include any teamwork.
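The arithmetic behind a number like that 15% is easy to automate once a syllabus’s grade components are tagged. A minimal Python sketch follows; the component names and weights are invented for illustration (they are not the figures from the actual syllabus), and a component counts toward alignment only when it is both authentic work and a source of feedback the student applies later in the course.

```python
# Hypothetical grade components: (name, percent of grade, is authentic
# work, feedback is used later in the course). All values are invented.
components = [
    ("Multiple-choice exams", 40, False, False),
    ("Discussion posts",      20, True,  False),  # simple rubric, no real feedback
    ("Research paper draft",  15, True,  True),   # feedback applied in final draft
    ("Final research paper",  25, True,  False),  # returned after the course ends
]

# A component is workforce aligned only if authentic AND feedback is used.
aligned = sum(pct for _name, pct, authentic, feedback in components
              if authentic and feedback)
print(f"{aligned}% of the grade is workforce aligned")  # 15% of the grade is workforce aligned
```

The same pass over a syllabus could flag courses with zero teamwork components, since that gap shows up the same way: no component carries the tag at all.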
We might make the measurement a little deeper by detailing which of the three QA Commons categories are addressed:
In business, Key Performance Indicators – KPIs – are well established. If a Provost wanted these numbers for all of her university’s courses, she could have the report within a week, then review it quarterly by college, program, and course to measure progress. Audit a statistically valid sample of courses and assignments to verify that real feedback is happening, focused on the right skills. The current numbers would not matter – only progress against them, which is what deans and provosts should be measured on.
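The provost’s report itself is just a rollup of per-course numbers. A minimal sketch in Python, where every program name, course code, and percentage below is hypothetical:

```python
from collections import defaultdict

# Hypothetical per-course results: (program, course, workforce-aligned %).
# All data here is invented to show the rollup, not drawn from real courses.
courses = [
    ("History",  "HIST 101", 5),
    ("History",  "HIST 210", 20),
    ("Business", "MGMT 301", 58),
    ("Business", "MGMT 410", 30),
]

def rollup_by_program(rows):
    """Average the workforce-aligned percentage within each program."""
    buckets = defaultdict(list)
    for program, _course, pct in rows:
        buckets[program].append(pct)
    return {program: sum(v) / len(v) for program, v in buckets.items()}

print(rollup_by_program(courses))  # {'History': 12.5, 'Business': 44.0}
```

The same grouping could run one level up (by college) or one level down (by course section), which is exactly the quarterly review cadence described above.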
We can dig into how this work could get done – peer tutors, more professional teachers (non-tenured, full-time), higher standards for existing assignments and feedback processes. We could also dig into why institutions should measure and start to change – obviously the intrinsic joy of change or even a serious increase in student employability will not be enough to do the trick. Change like this, or some other idea to fill the same sorts of gaps, will only come through crisis. It might be one beleaguered and undercapitalized institution turning things around, to great acclaim, through attention to employers and employability. Or BMGF/CZI funding broad research. Or U.S. News & World Report updating their algorithms to capture this sort of data, forcing a re-stacking of institutional rankings. Or a dozen widely known schools going out of business. A Southern New Hampshire sort of moment, across a broad swath of US Higher Education, will take a major jolt.
Or, as we’ll see in a follow-up post, there may be some existential challenges to higher education’s viability in the US that could force change. Schools can only do so much to help students help themselves. What they can control are costs, curriculum, and great teaching – and they should be measured on their ability to deliver.