Posted by: Jane Davidson
Looking for an easy-to-grasp and much more compact alternative to rubrics? Try a minirubric!
A minirubric is a cross between a rating scale and a short rubric.
Hot tip #1: These aren’t an alternative to careful evaluative reasoning informed by the right mix of evidence, but (like full-size rubrics) they can be very useful for getting you there, by focusing the discussion within the evaluation team and/or with stakeholders. Plus, they can make reports far easier to read and understand.
I developed a bunch of minirubrics recently for a participatory evaluation I was facilitating. The participating stakeholders were all interested in discussing the evidence and talking about what was good and what wasn’t so good. What I needed was to push them one more step so we could get to the evaluative interpretation of evidence. Not just talking about the strengths and weaknesses of things, but actually saying how good/bad/strong/weak the results were on balance, so that we could discuss that.
Hot tip #2: Create different minirubrics for different kinds of evaluation questions. For example, I developed a different minirubric for each of three high-level evaluation questions – one for evaluating the design and implementation of different program components; one for evaluating how good each of the outcomes was; and one for drawing conclusions about overall value.
Evaluating design and implementation:
Which parts of the [program] were the most informative, engaging and impactful*?
* “informative” = provided teens with useful insights they didn’t already have
* “engaging” = presented in a way that got and held teens’ attention
* “impactful” = positively influenced thinking, beliefs and/or intention to make safer choices
Hot tip #3: Make sure stakeholders are clear this isn’t just an ‘opinionfest’, and don’t fall into the trap of simply averaging the responses. The reason for asking them to make a rating is so that we can discuss the basis on which they came to that conclusion, including evidence and reasoning. Only after intensive evaluative deliberation together – guided by an evaluation specialist asking the tough questions and making sure the reasoning is sound – is an overall conclusion drawn.
Evaluating outcomes:
How well did the [program] provide teens with the knowledge and skills needed to make safer choices, and influence their attitudes, beliefs and intentions about safe and legal travel in cars?
Hot tip #4: Don’t forget, in order to call anything an ‘outcome’ you must show at least some evidence of a causal link. Need a cheap and simple way of doing that? Try building causation right into your survey or interview items – for example, ask how much difference the program made to a particular attitude or behavior, rather than just measuring that attitude before and after.
Hot tip #5: People often think Michael Scriven’s definition of evaluation as “the determination of merit, worth, or significance” applies only to the overall program, policy, project, etc. Not true; you should be saying how good each of your key outcomes is (as well as each of your program components, above). That’s what you need in order to step back and say how worthwhile the whole program (etc.) was.
Hot tip #6: Use an even skinnier version of the minirubric to summarize your results across multiple findings in a readable way. For example:
[Naturally, each of the ratings is backed by evidence later on in the report; this is just a short-hand way of summarizing some of the findings.]
Evaluating overall value:
How worthwhile was the [program] as an investment of time, effort and money to influence teens to make safer choices?
Hot tip #7: Most stakeholders only put effort into certain parts of the program, so the best question to ask them is usually not about the overall value of the whole program. Instead, ask whether the results they’ve seen were worth the effort they put in, compared with whatever else they could have spent that time on. Use these stakeholder-generated ‘worth’ ratings alongside the rest of your evidence (e.g. Value for Investment analysis) to draw an overall conclusion about the value of the program. In your synthesis, give greater weight to whichever source of evidence matters more in the grand scheme of things.
Want to learn more about evaluative rubrics?
Check out these resources:
- Evaluation Rubrics podcast: Jane Davidson interviewed on Adventures in Evaluation!
- Cool ideas from personnel evaluation: Evaluative rubrics
- Evaluation-Specific Methodology: The methodologies that are distinctive to evaluation
- Rubrics on the BetterEvaluation site
- Jane’s AEA eStudy webinar, 1st week of Dec 2014 (eStudy 051)
- Evaluation coaching to help you develop and use rubrics