Earlier this week I posted about the uninterpretability of the standard Likert scale that asks people to agree or disagree with a statement. I suggested an alternative, more evaluative scale that is easier to interpret, particularly for survey items aimed at process evaluation.
Now, let’s look at survey items for outcome evaluation and a few …
Read the whole post –> Building causation into survey items about outcomes
A recent conversation with a colleague has reminded me of how traditional social science training has managed to hardwire our brains into some default thinking that needs to be questioned.
Obviously, there are a lot of places one could go with this as an opening statement, but for now, let’s look at the design …
Read the whole post –> Breaking out of the Likert scale trap
Earlier in the week, I passed on a quote from a review of Ziliak and McCloskey’s (2008) book The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives asserting that:
… many researchers are so obsessed with statistical significance that they neglect to ask themselves whether the detected discrepancies …
Read the whole post –> How good is a “good” outcome?
With apologies to all for our little bit of downtime over the weekend while we changed servers …
Here’s an interesting snippet that came through on a listserv recently from industrial/organizational psychologist Paul Barrett, who spotted a recent review from Olle Häggström of Ziliak and McCloskey’s (2008) book The Cult of Statistical Significance: How …
Read the whole post –> Sizeless Science?
The comments shared in response to the earlier post, Culturally Competent Needs Assessment By An “Outsider”, raise issues that are critical to the discipline of evaluation. Two things come to mind: a) reflections on how we define evaluation theory and practice within the context of culture; b) the role of values and valuing in …
Read the whole post –> The importance of values for substantiating evaluative conclusions