Don’t expect quantitative evidence to answer a qualitative evaluation question

A while back I authored a post called Breaking out of the Likert scale trap in which I suggested that, for evaluation work, we might consider transforming more descriptive survey items like this one …

To what extent do you agree or disagree with the following:

    The course was well organized
    1 = strongly disagree   2 = disagree   3 = neutral   4 = agree   5 = strongly agree

… into more explicitly evaluative items, like this:

How would you rate the following:

    How well the course was organized
    1 = poor / inadequate   2 = barely adequate   3 = good   4 = very good   5 = excellent

Now, I’m not suggesting I’ve found the perfect survey item here; just that I find the latter format lends itself more easily to evaluative interpretation. [For more detail on how and why, plus some important caveats, please see the Breaking out of the Likert scale trap post.]

A common criticism of quantitative survey items

A later discussion on this topic in one of the LinkedIn groups drew the following comment (which struck me as a fairly typical criticism of quantitative methods in general):

“I still would not be able to interpret the responses because I do not know what is meant by “organized.” Did respondents mean that it was poorly organized because they did not get information about the training sufficiently ahead? Did they say poorly organized because there was too much information for each section of some parts and not enough for others given the time schedule? Did they say poorly organized because there were problems with the on-site registration? Did they say poorly organized because the visuals and the verbal did not sync? Did they say poorly organized because the presenters did not know how to use the technology? Did they say poorly organized because …?”

Now, correct me if I’ve missed something, but to me that’s a question of whether the evaluation questions you have lend themselves to a quantitative survey item in the first place.

If that’s the level of detail and feedback you need on one relatively narrow aspect (how well organized something is), then why not just ask an open-ended question? And is a survey the right approach anyway? If in-depth probing is important, might a focus group or interview be a better choice?

A quantitative scale like this is designed to give a quick snapshot summary of one aspect of performance, not an in-depth insight into it.
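
To make that “snapshot” concrete, here’s a minimal sketch in Python (with made-up ratings; the data and item wording are purely illustrative, not from any real evaluation) of the kind of summary such an item yields: a distribution and an average, and nothing more.

```python
from collections import Counter

# Hypothetical responses to "How well the course was organized",
# rated on the 1-5 evaluative scale (1 = poor/inadequate ... 5 = excellent)
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

counts = Counter(ratings)
mean = sum(ratings) / len(ratings)

# The whole "snapshot": how responses spread across the scale, plus one summary figure
print("Distribution:", {point: counts.get(point, 0) for point in range(1, 6)})
print(f"Mean rating: {mean:.1f} out of 5")
# Note what's absent: nothing here can tell you WHY it scored this way.
```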


How important is it to dig into the ‘why’?

The comment is quite correct that the item doesn’t in itself provide the “why”. I usually ask why in the next question after the scale item – but only if that level of detail is going to be useful and warranted in the context of the intended uses of the evaluation.

As in all data collection, we need to decide what’s most important to collect from respondents, because we can only ask them so much before they either throw the feedback form out the window or stop answering seriously.

So, if I went into great detail on the “why” behind how well organized something was, I would be giving up the chance to get actionable answers about something else – like whether the content was relevant, interesting, and at the right level for them; whether they found something they would now go away and apply; what didn’t make sense at all; what else they’d like to see included; what would have made it all more worthwhile; and so on.

The big question for me is: what matters most? What warrants going deep to understand it? We can’t go deep on everything.

It’s also about giving respondents a chance to comment in more depth on the things that were important TO THEM.


Striking a balance between a ‘quick look’ and some depth

Obviously, everyone’s work is different, but when evaluating workshops (and many programs) I usually find the best balance is, roughly, the following (sketched more concretely just after the list):

  • some quick quantitative items to give a snapshot overview of how it was for them (and to remind them of the main criteria they should be considering when evaluating something like a workshop)
  • optional comment boxes after groups of questions (but not every single question), where they can expand if they wish
  • an overall “was it worth it” evaluative rating question
  • one or two open-ended questions that ask what it was that had the greatest impact on the value they got out of it (positive or negative). If it was the disorganization, they will say so here.
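
For readers who like to see structure laid out explicitly, here is one way that balance might look as a simple data structure (a hypothetical workshop feedback form; the question wording and section names are my own illustration, not a template from the original post):

```python
# A hypothetical workshop feedback form reflecting the balance above.
# Question wording and section names are illustrative assumptions.
feedback_form = [
    {"type": "rating_block",   # quick quantitative snapshot items
     "items": ["How well the course was organized",
               "How relevant the content was to your work",
               "How well the level matched your needs"],
     "scale": "1 = poor/inadequate ... 5 = excellent"},
    {"type": "comment_box",    # optional, after a group of items
     "prompt": "Any comments on the above? (optional)"},
    {"type": "rating_item",    # overall evaluative question
     "prompt": "Overall, was the workshop worth your time?"},
    {"type": "open_ended",     # what mattered most to THEM
     "prompt": "What had the greatest impact (positive or negative) "
               "on the value you got out of this workshop?"},
]

# A disorganized workshop would surface in the final open-ended item
# even without a detailed battery of "why" questions about organization.
```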


How do YOU decide what aspects of performance to dive deep on and which elements warrant only the high-level snapshot?


Bonus question: Why didn’t I include an “N/A or Don’t Know” option on this item?

Hint: Boxers or briefs? Why having a favorite response scale makes no sense

1 comment to Don’t expect quantitative evidence to answer a qualitative evaluation question

  • Jane,

Sometimes “areas of improvement”, new initiatives, or unfamiliar territory get explored in more depth with qualitative methods (focus groups, etc.) in my practice. To cover some bases (not sure of the correct term) I’ve started following up with surveys after qualitative data collection to help with triangulation.

I’m wondering what the value is in gathering any data via open-ended questions on surveys in cases where no analysis is done on that qualitative data.

If the open-ended responses are shared with someone who can make changes or learn from the data in some way, great; but at times I question including open-ended questions on tools like the one above if there is no time or effort put into some form of analysis of that data as well.

Bonus: Not a scholar… but ideally the people taking the survey should be attendees of the course, so the survey should speak to their participation and their insight into how the course was organized. Therefore N/A would be N/A for them! (Do I get the prize?!)

    Sidebar: Are the “get GENUINE with me” pens for sale? :)