Some thoughts from Day 1 of the AEA conference

A couple of highlights from the opening sessions of the American Evaluation Association’s annual conference (being held this year in sunny San Antonio, Texas):

Opening plenary

Three views on evaluation quality, the theme of this year’s conference, from Eleanor Chelimsky, Laura Leviton and Michael Patton.

Eleanor argued for appreciation of three different types of evaluation, each with its own built-in notions of quality: evaluation for accountability, for knowledge building, and for management for development. For example, evaluator independence, usually seen as essential to quality in evaluation for accountability, would not be sought in evaluation for management for development. Immediate use would be seen as essential for management for development, but not for knowledge building.

Laura discussed different types of generalizable learning that evaluations can support, including what she referred to as ‘small theories of improvement’, and distinguished between adaptations and subsequent further amendments.

Michael discussed the importance of the quality of the analytic process in achieving rigor, referring particularly to the rigor attribute model, which is used to assess quality in intelligence information analysis. The model attends to a number of strategies, including considering multiple hypotheses, doing sensitivity analysis, and synthesizing data rather than simply reporting it. Sounds like the sort of common sense that has been missing from rubrics for assessing quality that look only, and formulaically, at the type of research design.

Posters and presentations

Continuing the theme, among many interesting posters, Aaron Pannone and Walter Heinecke, from the University of Virginia, presented ‘Metaphors We Evaluate By: Randomized Controlled Trials and the Definition of Evaluands’, investigating how poorly the so-called ‘medical model of evaluation’ (usually taken to mean a primary or exclusive focus on RCTs) actually represents the suite of designs used at different stages, a framing that, of course, also overlooks the problems of misrepresentation and data corruption that have been uncovered in many clinical trials.

Andrea Johnston presented a skill-building workshop on the Waawiyeyaa (Circular) Evaluation Tool (previously featured on the AEA 365 blog), which uses storytelling, domains of change, and a circular process of rebirth and transformation to develop personal stories of change.
