What’s fundamentally missing from most evaluation work, but absolutely makes or breaks the quality and value of what we do?
Find out from this fun and informative podcast. Michael Scriven and Jane Davidson talk about evaluation-specific methodology and why it’s so critically important. A sneak preview of their workshops in Dublin (at EES) and …
Read the whole post –> Podcast! Michael Scriven and Jane Davidson on Evaluation-Specific Methodology
“Guys, you all got on a train and you didn’t check the destination. You just checked the question of what it was first going to get to, which was program eval, and then there were little hamlets along the way, like personnel eval and so on. But actually, this train is going to follow what the definition of evaluation is in the dictionary because that’s what it chose to call itself. And I have news for you. It’s got some pretty remarkable places where it’s going to stop and you are going to have to show your pass. …”
Read the whole post –> Michael Scriven on the Evaluation Train you boarded
Getting the definition of evaluation right is not simply a matter of having a popularity vote about it.
The fact that so many don’t see a clear difference between evaluation and other pursuits (such as research, monitoring, audit, organization development, management consulting) doesn’t mean that there isn’t one.
I just couldn’t resist commenting on …
Read the whole post –> What is evaluation? Getting clarity about who we are as a profession, and a discipline
Time after time in online discussion groups I see questions like this one:
“What are the best tools to measure the effectiveness of [insert any program, policy, or initiative]?”
It’s a classic case of assuming that evaluation is merely measurement, and that measurement gives you the answers.
Many managers and non-evaluators think like this – that …
Read the whole post –> Why “What’s the best tool to measure the effectiveness of X?” is totally the wrong question