Learning outcomes are all the rage in the education industry. They've been so for some time on the accreditation and assessment side of things. But now they're everywhere, even among colleges and universities, and especially among less prestigious institutions.

I've worked on both sides of this industry, sometimes even at the same time. So what's this 'student learning outcomes' stuff all about?

Here's what I'll do. I'll start with a New York Times editorial in which Molly Worthen paints a vivid picture of what learning outcomes look like from a faculty perspective. Then I'll fill in some thoughts from the non-profit educational management and assessment side of things. That's where I'm working these days, anyway.

The Learning Outcomes Market

First, Worthen focuses on colleges and universities. In other words, the postsecondary market. That’s fine, and she can focus on whatever she wants. But this is a small slice of a larger pie. The learning outcomes market concentrates on the K-12 educational space. And even once you move outside that space, there’s a growing workforce market.

The industry often thinks about postsecondary education as the ambiguous space between the two. Narrow rules and policies govern primary and secondary education, while industry trends and managerial whims govern the workforce. And those two spaces feed off one another. Postsecondary education? That’s what’s left.

And so, the assessment industry has postsecondary products. A few, anyway. But due to these market realities, they're usually not big moneymakers. It's not where the profits flow. In fact, many companies lose money on these products, especially accreditation-related products.

EdTech

Worthen mentions the tech industry in her NYT editorial, and she focuses on software companies. But, look, it’s often the same story. Tech is supposed to be new and hip, but it’s not (yet, anyway) raking in the profits in the postsecondary market. Worthen does mention a range of recent educational market interventions.

It's possible the software companies will make piles of cash on this down the road. But the educational technology (edtech) space looks like the startup space more broadly. They raise piles and piles of cash from investors. And then they probably lose money.

I don't want to discuss in too much detail how Silicon Valley operates. That's a topic for another time. But I'll note here that edtech's more about ideological goals than profits. Investors fund companies working to deskill and replace teachers. They're content to see companies take losses on the way toward these goals.

Maybe Elizabeth Warren has a plan for that?

All kidding aside, the big cash flows to startups undermining stable, middle-income jobs. That's how the tech sector works, and it's what Worthen's ultimately getting at. And edtech's targeting K-12 teachers first and foremost, especially since they've got unions. Do colleges and universities feel some of the same pressures? Sure, but they're not the primary target.

Accreditation

Second, Worthen lays out a couple of causes of the student learning outcomes trend. Why did educational institutions start pushing this stuff? She talks about accreditation pressure and political lobbying. I think she’s mostly correct about the former and mostly incorrect about the latter.

Accreditation's a big deal. And so is presidential power. They're connected. George W. Bush and Barack Obama pushed hard for broad accountability measures, including accreditation. Especially Obama, through Arne Duncan, his first education secretary. Bush and Obama moved the US toward a national education policy by measuring everything that can be measured, as well as a few things that can't.

As a result, colleges now use instruments like the Collegiate Learning Assessment (CLA). But not without objections. The literature gives the CLA a bit of a beating, most prominently in a book called Academically Adrift, which argues that colleges aren't achieving the learning outcomes they set out to achieve. A philosopher named Kevin Possin added a hard-hitting critique in the journal Informal Logic. The common theme of these criticisms is the CLA's construct validity: whether an instrument measures what its makers say it does.

Lobbying

So, it’s less clear the industry achieves much via lobbying. Most major assessment companies are non-profits. While they’re pretty big, they’re not big enough to have the kind of lobbying clout found in the for-profit sectors.

A few for-profit educational companies do have that kind of clout. But when it comes to the student learning outcomes space, they’ve got bigger fish to fry. They’re looking at the K-12 textbook industry, college admissions, testing in Grades 3-8, test prep products, and so on. They’re busy with other things, and, as I said earlier, those things are bigger than the postsecondary market.

When you see lobbying in the postsecondary market, it's often the colleges and universities themselves doing the lobbying. Many of them want to measure learning outcomes. Or, I should say, their administrations want to measure them.

The Future of Learning Outcomes Assessment

Third, Worthen hits the use of 'critical thinking' as a buzzword pretty hard. The CLA leans on 'critical thinking' too, since that's what it claims to measure. There's an active, if not thriving, critical thinking test market out there; consider the California Critical Thinking Skills Test. But, in the end, this feels a bit outré. Edtech's got other top priorities, even if college administrators and faculty care deeply about critical thinking or other constructs concerning deeper or reflective thought.

So what is it the tech and assessment industries want to talk about? They're interested in things like 'learning progressions', 'formative assessment', and 'tech-enhanced testing', among other hot topics. And, more broadly, they're responding to deeper market forces here. They're working their research and products into classrooms in more diverse ways that require less time commitment from educators and students.

Stealth Assessment

The assessment and edtech industries have priorities of their own. As far as I can tell, they're looking hardest at various forms of 'stealth assessment'. I mean 'stealth' here in a particularly robust sense: it can be built into college courses without requiring faculty to change anything they're doing and without requiring any additional time commitment from students.

The extent to which this is a good thing or a bad thing depends very heavily on how colleges and universities use these ideas and products. To much of the industry, 'stealth' is just code for 'using video games in some way or another'. I find this rather uninteresting. The more interesting versions, for both good and bad reasons, are the ones that assess students implicitly through classroom activities they're already doing.
