‘Minirubrics’ – 7 hot tips for using this cool tool to focus evaluative conversations

Posted by: Jane Davidson


Looking for an easy-to-grasp and much more compact alternative to rubrics? Try a minirubric!

A minirubric is a cross between a rating scale and a short rubric.
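If it helps to picture the format, here’s a tiny sketch in Python of what a minirubric boils down to: a short, ordered set of rating labels, each paired with a one-line descriptor. The labels and wording below are invented for illustration – they are not the ones used in the minirubrics pictured later in this post.

    # Purely illustrative minirubric: a few ordered rating labels,
    # each with a one-line descriptor (labels and wording are hypothetical).
    minirubric = [
        ("Excellent", "clear strengths across the board; only minor room for improvement"),
        ("Good",      "solid overall, with a few notable weaknesses"),
        ("Adequate",  "acceptable on balance; strengths and weaknesses roughly even"),
        ("Poor",      "weaknesses clearly outweigh the strengths"),
    ]

    for label, descriptor in minirubric:
        print(f"{label:<10} {descriptor}")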

Hot tip #1: These aren’t an alternative to careful evaluative reasoning informed by the right mix of evidence, but (like full-size rubrics) they can be very useful to help get you there by focusing the discussion within the evaluation team and/or with stakeholders. Plus, they can make reports far easier to read and understand.

I developed a bunch of minirubrics recently for a participatory evaluation I was facilitating. The participating stakeholders were all interested in discussing the evidence and talking about what was good and what wasn’t so good. What I needed was to push them one more step, to the evaluative interpretation of the evidence: not just talking about the strengths and weaknesses of things, but actually saying how good/bad/strong/weak the results were on balance, so that we could discuss that.

Hot tip #2: Create different minirubrics for different kinds of evaluation questions. For example, I developed a different minirubric for each of three high-level evaluation questions – one for evaluating the design and implementation of different program components; one for evaluating how good each of the outcomes was; and one for drawing conclusions about overall value.

 

Evaluating design and implementation:

Which parts of the [program] were the most informative, engaging and impactful*?

[Image: component minirubric]

* “informative” = provided teens with useful insights they didn’t already have
“engaging” = presented in a way that got and held teens’ attention
“impactful” = positively influenced thinking, beliefs and/or intention to make safer choices

Hot tip #3: Make sure stakeholders are clear this isn’t just an ‘opinionfest’, and don’t fall into the trap of simply averaging the responses. The reason for asking them to make a rating is so that we can discuss the basis on which they came to that conclusion, including evidence and reasoning. Only after intensive evaluative deliberation together – guided by an evaluation specialist asking the tough questions and making sure the reasoning is sound – is an overall conclusion drawn.
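To make the ‘no averaging’ point concrete, here’s a hypothetical sketch in Python (raters, ratings and reasons are all invented) of the kind of record each rating should come with. The point is that the ratings feed a facilitated deliberation over evidence and reasoning, not a calculation.

    # Hypothetical: capture each rating together with the evidence and reasoning
    # behind it, rather than reducing the set of ratings to a number.
    responses = [
        {"rater": "Stakeholder A", "rating": "Good",
         "reasons": "points to evidence of strong engagement but modest knowledge gains"},
        {"rater": "Stakeholder B", "rating": "Excellent",
         "reasons": "points to evidence of a clear shift in reported intentions"},
    ]

    # Deliberately no mean or median here: the overall conclusion is drawn only
    # after the group has deliberated over each rating's evidence and reasoning.
    for r in responses:
        print(f"{r['rater']}: {r['rating']} (because: {r['reasons']})")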

 

Evaluating outcomes:

How well did the [program] provide teens with the knowledge and skills needed to make safer choices, and influence their attitudes, beliefs and intentions about safe and legal travel in cars?

[Image: outcome minirubric]

Hot tip #4: Don’t forget, in order to call anything an ‘outcome’ you must show at least some evidence of a causal link. Need a cheap and simple way of doing that? Try building causation right into your survey or interview items – for example, by asking respondents how much of the change (if any) they attribute to the program rather than to something else.

Hot tip #5: People often think Michael Scriven’s definition of evaluation as “the determination of merit, worth, or significance” applies only to the overall program, policy, project, etc. Not true; you should be saying how good each one of your key outcomes is (as well as your program components, above). That’s what you need in order to step back and say how worthwhile the whole program (etc.) was.

Hot tip #6: Use an even skinnier version of the minirubric to summarize your results across multiple findings in a readable way, e.g.

[Image: rating snapshot]

[Naturally, each of the ratings is backed by evidence later on in the report; this is just a short-hand way of summarizing some of the findings.]
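For a rough idea of what a rating snapshot like the one pictured above might boil down to, here’s a hypothetical sketch in Python: one compact line per finding, each with its overall rating. The finding names paraphrase the evaluation questions in this post; the ratings shown are invented.

    # Hypothetical rating snapshot: one compact line per finding.
    # Finding names are paraphrased from this post; ratings are invented.
    snapshot = [
        ("Design & implementation of program components", "Good"),
        ("Knowledge and skills to make safer choices", "Very good"),
        ("Attitudes, beliefs and intentions", "Good"),
    ]

    for finding, rating in snapshot:
        print(f"{finding:<48} {rating}")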

 

Evaluating overall value:

How worthwhile was the [program] as an investment of time, effort and money to influence teens to make safer choices?

[Image: worth minirubric]

Hot tip #7: Your stakeholders generally put effort into only certain parts of the program, so the best thing to ask most of them is not about the overall value of the whole program, but whether the results they’ve seen made it worth the effort they put in, compared with whatever else they could have spent that time on. Use these stakeholder-generated ‘worth’ ratings alongside the rest of your evidence (e.g. a Value for Investment analysis) to draw an overall conclusion about the value of the program. In your synthesis, give greater weight to whichever source of evidence matters more in the grand scheme of things.

 

Want to learn more about evaluative rubrics?

Check out these resources:

[Images: Actionable Evaluation book covers – English, French and Spanish (Kindle) editions]

 

2 comments

  • Jane, I love this! Perfect timing as I’m right in the middle of a “how good is good” conversation with my staff. We’ll use a minirubric tomorrow to start the development of full rubrics: one for deciding what counts as valid data, and another for what counts as an opportunity worthy of action. I heart rubrics. The key to mixed methods evaluations. Love love love. Thank you!

  • Nice job! I find these mini-rubrics a beautiful example of concise and focused evaluation thinking. In empowerment evaluation, we ask folks to rate on a 1 (low) to 10 (high) scale how well a facet of the program or community initiative is doing. However, just as in the examples above, we do not let it spiral into an opinion fest. We ask for evidence for the ratings. This contributes to the construction of a community of learners built on a culture of evidence. I think these mini-rubrics do the same thing. They are fantastic – clear and easy to use. They are also, dare I say, good examples of actionable evaluation. Thanks for sharing.