The media and evaluation reporting – clueless or unscrupulous?

Patricia Rogers, in her recent post entitled Does the recent evaluation show that Head Start doesn’t work? asks the question:

Why do these reports summarize the findings inaccurately? Is it deliberate misrepresentation or error? Is it just too hard to include variations in the results in a brief summary, or are the reporters not sufficiently skilled? Or do the reporters judge that the results are not broad enough across the domains, or not large enough?

I’d like to add to this question/discussion a few observations about the media and its reporting of not just evaluation findings, but also simple evaluative concepts.

Here in New Zealand, the government has moved to introduce National Standards for literacy and numeracy in primary schools. These are basically descriptions of what children should be able to do/read/write/understand in order to adequately access the rest of the curriculum at that level. Teachers are to use a range of appropriate assessment tools (as they presumably do already) to gauge where each child is at, and – this is the new piece – are required to clearly report to parents/families whether their child is performing at, above, below, or well below the National Standards in literacy and numeracy.

Just last week (at the time this post was originally written, in Feb 2010), New Zealand’s most watched news program (TVOne 6pm news) ran an item that began by telling us how terribly confused parents were about the new National Standards – implying, of course, that they were going to enlighten everyone and help them understand the basics. Right… take a look at this 3-minute news item: Controversial new national standards introduced.

The National Standards are a criterion-referenced approach, i.e., performance (in literacy and numeracy) is evaluated against – as the title implies – “standards” (i.e., where the child NEEDS to be).

How does TVOne news explain them? As a norm-referenced approach. The reporter states that the Standards will tell parents whether their child is at, above, or below the “national average” (i.e., where the child is relative to other peers). WRONG!!

The same news program has repeatedly said that children are going to be “ranked” – illustrating exactly the same total lack of understanding of a very, very fundamental evaluation concept.
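To see the distinction concretely, here’s a toy sketch in Python – the scores and cut-points are invented purely for illustration, and nothing here comes from the actual Standards documents – contrasting grading against a standard (criterion-referenced, what the National Standards actually do), comparing against a national average (norm-referenced, what TVOne described), and ranking (what they also claimed):

```python
# Toy illustration (invented scores and cut-points) of three ideas the
# news item conflated. Nothing here is drawn from the actual National
# Standards documents.

scores = {"Ana": 78, "Ben": 55, "Caro": 91, "Dev": 42}

# 1. Criterion-referenced (what the National Standards are): each child is
#    judged against a fixed description of where they NEED to be. In
#    principle, every child could be "at" (or "above") the standard.
def grade_against_standard(score, at=60, well_below=40):
    if score >= at + 15:
        return "above"
    if score >= at:
        return "at"
    if score >= well_below:
        return "below"
    return "well below"

# 2. Norm-referenced (what TVOne described): each child is compared with the
#    average of their peers, so roughly half the children will always be
#    "below average", however well the whole group reads.
mean = sum(scores.values()) / len(scores)
norm_referenced = {name: ("above average" if s > mean else "below average")
                   for name, s in scores.items()}

# 3. Ranking (what the program also claimed): children are ordered against
#    one another, so someone must always come last.
ranking = sorted(scores, key=scores.get, reverse=True)

for name, s in scores.items():
    print(f"{name}: {grade_against_standard(s)} the standard, "
          f"{norm_referenced[name]}")
print("Ranking:", ranking)
```

The point of the sketch: under the criterion-referenced approach every child could, in principle, be at or above the standard, whereas “below average” and “last in the ranking” are positions somebody must always occupy, no matter how well everyone performs.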

Most lay people can grasp the difference between grading/rating and ranking, so what’s wrong with the media?

My own observations lead me to believe that reporters and those controlling the quality of mainstream news media reports are, for the most part, woefully lacking in critical thinking ability, have a completely inadequate grasp of even the most basic evaluation concepts, and seem to have no sense of responsibility about getting a sound understanding before going to air and ‘educating’ the public.

The reporting of actual evaluation findings is a bit more complicated, as Patricia explains in her post. My hunch is that much of the blatantly inaccurate reporting we see is a combination of:

  • ignorance of key evaluative concepts,
  • a lack of critical thinking ability,
  • laziness (e.g., just reporting what others have said instead of actually reading the original sources),
  • cherry-picking the newsworthy (read: ratings-boosting) snippets at the expense of a more complete representation, and
  • pressures to dumb down the news into tabloid-sized snippets that the public don’t have to work too hard to take in.

Of course, I’m mostly referring to ‘mainstream’ TV here; the print media and some radio stations often do a much better job (and thank goodness someone does), but IMHO the bulk of it still falls well short of genuinely good reporting. And, of course, I’m not watching media from all over the world (mostly New Zealand, but we get some Australian, U.S. and English channels here) – how do your local media outlets compare?

I suppose the big question for me is, what can we DO about this? Here are a few ideas that have crossed my mind, but I’d very much like to hear others’ views:

  • Teach critical thinking in schools and in higher education.
  • Teach some basic evaluative thinking too – it’s a life skill; it can save you a fortune as you go through life making decisions about what to purchase, invest in, etc.
  • Help evaluators turn the nub of their findings into easy-to-grasp sound bites that the media could pick up and use as they are – maybe all evaluation reports should come with a set of press releases, some very short, some a little longer?
  • As citizens and consumers of the news, keep criticizing the dumbed down news that is fed to the masses.
  • Perhaps professional evaluation associations should consider commenting publicly on the misrepresentation of evaluative concepts and evaluation findings.

2 comments

  • Sue Street

    I’m not so sure that, once the numbers appear, they won’t be used in the ways that are being reported. That’s how the media understand the results, and I’d be pretty amazed if it’s not exactly how the results end up being reported. This already happens with NCEA results.

    For me, the bigger issues with National Standards are the rushed implementation – what does a rushed development of measures do for the quality of the assessment? – and, of course, the biggie: so what? Given that kids are currently assessed with no apparent intervention resulting, I see no reason to think that changing the assessment measure is going to change that.

  • Sue, thanks for your post.

    My own take, too, is that the media will most certainly pick up the publicly reported results, report them with their usual complete lack of understanding of evaluation and assessment, draw unjustified inferences about schools, etc. – UNLESS someone in the meantime comes up with a smarter way of reporting that will actually give some more useful insights. I see from the recent article in the NZ Herald that John Hattie is working on a system to allow comparisons of same-decile schools (those in communities of roughly the same socioeconomic level) – as is done in Australia’s MySchool system.
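    Just to make the general idea concrete – and this is purely a toy sketch with invented school names, deciles and percentages, not Hattie’s actual system – same-decile comparison looks something like this:

```python
# Toy sketch of "same-decile" comparison: a school is compared only with
# schools serving communities of similar socioeconomic level. All names,
# deciles, and results below are invented for illustration.

schools = [
    {"name": "Kauri Primary",  "decile": 3, "pct_at_or_above": 58},
    {"name": "Rata School",    "decile": 3, "pct_at_or_above": 71},
    {"name": "Totara Primary", "decile": 9, "pct_at_or_above": 88},
    {"name": "Nikau School",   "decile": 9, "pct_at_or_above": 79},
]

def same_decile_gap(school, all_schools):
    """Difference between a school's result and the mean of its same-decile peers."""
    peers = [s["pct_at_or_above"] for s in all_schools
             if s["decile"] == school["decile"] and s is not school]
    return school["pct_at_or_above"] - sum(peers) / len(peers)

for s in schools:
    print(s["name"], round(same_decile_gap(s, schools), 1))
```

    Worth noting: this kind of comparison is itself norm-referenced – just restricted to schools serving similar communities – so it would need to be reported with the same care.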

    As for rushed development of assessment measures, I hope someone will correct me if I’m wrong, but the Standards don’t (as far as I can see – I have a copy of the reading and writing standards on my desk) include any specific measures or assessment tools per se, just broad-brush qualitative descriptions of what a student at a particular year level should be able to do/read/write/understand.

    My understanding is that it was a pretty full-on process to develop these descriptions in the timeframe; whether one uses the label “rushed” or not seems to depend on one’s opinion of the content.

    I’m no assessment expert, so I can’t comment on how sound the content of the Standards is, but as a lay person and parent of a 5-year-old who just started school, my initial reaction was “What? You mean this didn’t exist already?” There are some other guides around, such as the Literacy Learning Progressions (LLP). When I compare these with the Standards, they both give a roughly similar checklist of what children should be able to do at each level; the Standards are (to me) more explicit about how my child should be reading and writing, i.e. it’s clearer to me what I should be looking for as we read together.

    I completely agree with you that there’s a need to evaluate the effects of the Standards – not just the document, but the required reporting to parents and other audiences – on teaching, learning, achievement, and other outcomes. There’s an implicit theory of change that the government/the Ministry of Education has; the teacher unions disagree and have an alternative theory of how things are likely to pan out; other key commentators have other predictions. Who’s right, and on which bits? How will it pan out in practice? Who will it help, and how? Who will it not help, and why? Who will it harm, and how/why? Will it be worth the investment of taxpayer dollars and teacher time (and the opportunity costs that go along with that)? How does it compare with the alternatives, including the status quo in 2009? All very interesting questions, well worth answering in direct, actionable ways.