Is a review just a quick and dirty (or clean) evaluation?

Same thing with a different label?

Pic from SilentMode (Drew Maughan) via Flickr, Creative Commons

Evaluation is plagued by inconsistent terminology. Different words are often used for the same thing, and the same words are used to mean different things.

(Periodically, discussions erupt on EVALTALK about the difference between research and evaluation. Increasingly, posters are referred to the archives – a treasure trove of discussions and examples.)

And so it is with review.  For some people, the term can be used interchangeably with evaluation. Or assessment.  For others, there are important differences.  But there are different differences. And what does this mean for genuine evaluation?

Review as a briefer investigation

The Treasury Board of Canada Secretariat, in its previous guidebook, defined review in terms of its scope and timeframe:

2.6.3 Reviews are often conducted in response to a pressing or immediate need of management. As such, the emphasis is usually on quick generation of sufficient information to inform decision making or reassure senior management of the dimensions of a problem or situation. The methodology used to gather information is usually secondary to developing an adequate answer in a timely fashion (i.e., evaluation or audit protocols and approaches are not adhered to). Although they are useful to address targeted issues, reviews or special studies do not conform to external reporting requirements, project control processes, or standards which delineate a discipline such as audit or evaluation.

Now, while this definition makes sense, the term is not consistently used in this way.

Review for current programs

The European Commission has used review for existing projects, reserving the term evaluation for proposals:

Evaluation work requires the experts to examine (i.e. peer review) proposals for funding against published criteria and provide comments and recommendations to the Commission.

Reviewing work involves assisting the Commission’s project officers by supervising the progress of ongoing RTD projects already funded by the European Commission.

A rose by a less scary name

Amin Malik suggested that the choice of word was made to avoid negative connotations of a particular term:

Calling evaluative exercises reviews is just a way to avoid calling them evaluations. There is usually a tendency to avoid giving evaluations their true name, for political or resource-related reasons or just for fear – evaluation anxiety. What is included in the Canadian quote would form a significant part of an evaluation, and as long as the exercise involves a decision on the merits of the program – especially if this is compared to similar and different contexts – and an evaluation of its processes, outcomes and resource allocation and utilization, it can be called an evaluation. I have seen governments, UN agencies, NGOs and not-for-profit agencies avoiding using the name evaluation in situations that perfectly fit the name, and instead calling them assessments or reviews. When you look at what is being done, they should have been called evaluations.

Words matter.  The Society for Organizational Learning chose to use the term “assessment of organizational learning” because “evaluation” was seen to have a negative connotation of being, well, often negative and focused on the performance of the individual.  Interestingly, for me it was the other way around – assessment was used for individual performance, and evaluation for products, policies, projects and programs. 

(And as for audit!  Early in my university career, someone came up to me at the start of a new course and told me they were planning to audit the classes.  I understood this meant I was about to be subject to a surprise and apparently arbitrary evaluation of my lecturing performance.  It was many weeks later that I realized they were using the term to refer to attending classes without enrolling for academic credit.)

Quick and clean?

If ‘review’ is widely, but not universally, understood to mean a constrained evaluation, done at a particular point in time, for a particular purpose, drawing primarily or exclusively on existing data, how many of the principles for good evaluation still apply?

Is it possible to move beyond a “quick and dirty” overview to perhaps a “quick and clean” one?

Can the quality of review be improved by providing frameworks and templates – or does it come down to having an experienced evaluator with content knowledge who can quickly sift and synthesize existing data and quickly fill critical gaps with expert observation and targeted interviews?

2 comments to Is a review just a quick and dirty (or clean) evaluation?

  • I’ve never heard of a review, Jane, but the associated tasks are very familiar! Thanks

  • Jane Davidson

    Patricia posted this one, Karen :)

    It’s really interesting what meanings people infer from the term “review” depending on context.

    I think Amin is right that sometimes it’s used to mean evaluation when people don’t want to use the “e” word. We see the term assessment used in much the same way.

    I’ve mentioned this terminology question to colleagues here who work in government, and they think “review” is actually more ominous than evaluation.

    When something disastrous or very bad happens (deaths, major security breach, someone completely unsuitable is hired into an important position and their past is exposed, or something else negative and pressworthy), there is an instant call for an “urgent review of the procedures or safety regulations or competence” of some organization.

    So, here, “review” is often taken to mean a deep and incisive investigation into why something went horribly wrong.

    As a result, people are much more afraid of a review than an evaluation.

    On the other hand, we do also see the word review used to refer to what Patricia describes.

    When an organization commissions a “review” rather than an “evaluation” they are generally looking for:

    1. a relatively quick analysis rather than a long-term impact study
    2. more of a connoisseurial (expert judgement) approach than actually getting into documenting outcomes and backing it with extensive evidence
    3. highly experienced consultants whose credibility and expertise will not be questioned, BUT whose expertise is in what “effective management” or “good practice” looks like rather than in evaluation (or review) methodology.

    In general, if the RFP calls for a “review” I don’t bid on it. The use of this terminology tells me that the client is not aware there is such a thing as evaluation expertise, and would therefore not value it when reviewing proposals.

    It’s the old fallacy that all you need in order to be able to evaluate X is the experience of having done X yourself.

    In other words, only teachers can evaluate the quality of teaching or educational initiatives (and they don’t need any serious additional expertise beyond their teaching experience and expertise in student assessment).