Academically Adrift Redux: Countering the Memes #3 -- Focusing on Student Learning

The best way to counter the notion that students don’t learn much in college is to get better at figuring out what they do learn.

Of the three destructive memes the Academically Adrift study has propagated -- students spend much less time studying in college than they used to, college has become less demanding, and students don’t learn much while in college -- this last one is the most promising and useful. Don’t get me wrong -- it’s just as specious as the others -- but it gives us the most to work with.

On one level, the notion that students don’t learn much while in college is the easiest one to knock down, which is what makes blanket shorthand statements in AA such as "while students do not learn much on average" (p.114) so annoying and inaccurate. Even though AA tacitly admits that Collegiate Learning Assessment (CLA) scores don’t measure the totality of learning by noting the differences in results by field of study (p.104-08), the study insists that the CLA accurately measures critical thinking, complex reasoning, and writing skills (pp.108-09). Figure 4.9 on p.118 graphically captures the essence of AA: all arrows point to CLA scores.

By trying to fit all things learning into its CLA performance box, everything that Academically Adrift touches turns to dust, whether it’s high expectations (pp.93-96), analysis of time spent learning (pp.97-99), or analysis of demographic achievement gaps (e.g., pp.111-114). AA even has a good list of learning improvement strategies (pp.129-134), but won’t endorse them unless they fit neatly in the CLA box. This, along with AA’s dogged insistence on the ‘20/40 rule’ (writing a 20-page paper in a course and doing 40 pages of reading per week), seals its fate as substantively irrelevant for the most part. (BTW, is that 40 book pages or web pages? Based on what book size? Font size? Content type? 40 pages of Clancy or Nietzsche?)

Perceptual irrelevance is an entirely different matter, however. We’ve seen how Academically Adrift’s memes continue to influence opinion leaders and the public, in part because its principal mechanism -- reducing learning to a small set of test score numbers -- retains a widespread, almost irresistible appeal and is fiercely defended by many. So, countering the AA memes means, to a large degree, confronting the powerful mythology of standardized tests.

Beyond that battle, however, there is a deeper concern. NYT columnist David Brooks’s recent article may have taken a tortuous road to get there, but it ended up with some reasonable questions: ‘How much do students learn at a particular college? How do we know?’ I agree with many observers that higher education institutions are not very good at answering these questions and need to get better at it. It’s especially important to learn to answer them productively, because there are already plenty of destructive ideas floating about on how to do so.

For example: AA noted that there is not enough "scientific knowledge on learning outcomes to justify high-stakes coercive accountability" (p.141), and Brooks says that it’s probably not appropriate to base federal funding on learning outcomes at this point. You see where this is heading, I hope: if there were enough “scientific knowledge” on learning outcomes, then it would be OK to impose high-stakes coercive accountability and base federal funding on learning outcomes? The word “ominous” doesn’t begin to describe these proposed directions.

So it’s important to get better at figuring out what students learn. For example, while AA opposes "coercive accountability" schemes (for the moment, at least), it supports "evidence-based," quasi-experimental research (pp.136-41). We need to do much better than that. More specifically, we need to be much broader than that: expand the range of acceptable evidence (can you say “action research”?), expand the range of acceptable outcomes (can you say “Bill Gates” or “Steve Jobs”?), and focus on learning instead of legacy system artifacts such as reflexive sorting (e.g., getting away from automatically assuming that higher grades must mean lower standards rather than more effective teaching).

From its subtitle (“limited learning on college campuses”) to its CLA-centered chart, Academically Adrift obscures these issues rather than illuminating them. Whenever the issues of student learning or college rigor arise, plan on spending extra time and effort countering the memes. Well, at least AA gives us a wall to push off of and get a running start. But we can bypass a lot of this by simply going straight to the heart of the issue: focus on getting better at figuring out how and what students do learn.