Boxers or briefs? Why having a favorite response scale makes no sense

To what extent do you agree or disagree with the following?

“Boxers are more comfortable than briefs”

Strongly Agree [insert favorite response scale length/format] Strongly Disagree



Time after time I see debates about whether response scales should have an even or odd number of response options (that is, whether there should be a midpoint). And alongside this comes the parallel debate about what that midpoint should be: “neutral”, “N/A”, or “don’t know”.

N/A and Don’t Know are not the same as Neutral

Now, I would sincerely hope the survey designers would give me, as a woman, an N/A option because, heck, how would I know? And even for men, well, some of my kilt-wearing male ancestors from Scotland could have been in the breezy “N/A” category too.

My wanting an N/A response option is completely different from, say, a man choosing the neutral point on the scale (meaning, boxers and briefs are about as comfortable as each other). The difference is, a man who has tried both boxers and briefs is making an informed judgement based on experience, whereas even a neutral point is completely meaningless for me. I just don’t know!

Not applicable or don’t know is NOT the same as a neutral position and should not go in the middle of the scale!
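
To see what goes wrong at the analysis stage, here is a minimal sketch in Python (the responses and the 1–5 coding are made up for illustration) of the difference between coding “N/A” as the midpoint and excluding it as missing:

    # Hypothetical answers to "Boxers are more comfortable than briefs"
    # on a 1-5 agreement scale (1 = Strongly Disagree, 5 = Strongly Agree).
    # "NA" marks respondents for whom the question simply does not apply.
    responses = [5, 4, "NA", 2, 5, "NA", 4]

    # Wrong: treating N/A as the neutral midpoint (3) drags the average toward neutral
    as_midpoint = [3 if r == "NA" else r for r in responses]
    print(sum(as_midpoint) / len(as_midpoint))  # 3.714... (pulled toward neutral)

    # Better: exclude N/A answers from the calculation entirely
    informed = [r for r in responses if r != "NA"]
    print(sum(informed) / len(informed))  # 4.0 (reflects informed opinions only)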

And, in this case (for boxers vs. briefs), a neutral opinion is reasonable, so it should be an option.

Is there ever a case on an agree/disagree scale where a neutral position would make no sense and should not be included? Some argue that this is the case, so please chime in on the post page under Comments and tell us when you think this applies.

Not all scales are based on two opposing endpoints

The other point worth noting is that not every survey item is based on a positive/negative, opposing views, or two-extremes concept.

Some scales used in evaluation reflect a more developmental frame (e.g., “Where would you rate your skill level on X?”) where a neutral point would make no sense.

Are X-point scales “simply the best”?

I am always amazed at how some people seem to have settled on 4, 5, 6, 7, or however many points as superior for all purposes, taking that as some ‘religious’ position. Sure, there’s been some empirical research done on these questions, and that’s worth paying attention to, but it’s in the context of particular types of scales. And not much of it has been done specifically in the context of evaluation work.

There are many different forms of survey items for which different response formats make sense: longer scales, shorter scales, midpoint, no midpoint, N/A option vs. none, and so on.

Scale design is about conscious decisions with the respondent’s perspective in mind

For every quantitative survey item, we need to consciously consider (a) whether there is any such thing as a neutral position AND (b) whether it’s possible that the entire question is just not applicable to some respondents (or that they just wouldn’t know enough to be able to rate it – like asking my octogenarian parents which is better, an Android smartphone or an iPhone).
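
As one way to make those two decisions explicit rather than habitual, here is a hypothetical sketch in Python (the field names are my own, not from the post) of an item definition that forces the designer to answer both questions for every scale:

    from dataclasses import dataclass

    @dataclass
    class ScaleItem:
        text: str
        points: int         # however many points suit THIS item
        has_midpoint: bool  # (a) does a neutral position even exist?
        offer_na: bool      # (b) could the item be N/A (or unknown) for some respondents?

    # Boxers vs. briefs: a neutral opinion is reasonable, and the
    # question is genuinely not applicable to many respondents.
    comfort = ScaleItem(
        text="Boxers are more comfortable than briefs",
        points=5,
        has_midpoint=True,
        offer_na=True,
    )

    # A developmental skill rating: no opposing endpoints, so no midpoint.
    skill = ScaleItem(
        text="Where would you rate your skill level on X?",
        points=4,
        has_midpoint=False,
        offer_na=False,
    )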

Let’s not drive respondents nuts by forcing them to answer in ways that fundamentally make no sense for what they have to say.

Or force them to make something up when they actually have nothing to say on the topic.

9 comments to Boxers or briefs? Why having a favorite response scale makes no sense

  • Molly Engle

    The time I think a neutral point is not needed, and makes no sense, is when the researcher/evaluator wants to know the person’s opinion; if they have no opinion, then the appropriate response is N/A, or, in the case of boxers/briefs from a woman’s perspective, a D/K. Providing those options is important.

    A visual analog scale is a scale where an individual rates herself/himself at a specific point, usually ranging from knowing a lot about something to knowing little/nothing about it. The nice thing about visual analog scales is that they can actually yield interval data, not ordinal…:)

  • Thanks for the great article, Jane; it was concise, helpful, informative and interesting.

    Another scale that drives me crazy is the “met expectations” question; i.e., did this meet, not meet, or exceed your expectations? If I have prior experience with something or high expectations, and I put “met expectations”, that has the potential to reflect poorly on something good (because it met my high expectations) or, alternatively, to reflect positively on something poor (because it wasn’t as bad as I expected, but it still wasn’t good). I always hope for a comments box where I can qualify my answer.

    Dee Ann

  • Steve Lange

    My personal opinion on “neutral” options is that if you are truly trying to obtain actionable data, then neutral does not help you. Now, if you don’t care that your respondents don’t care, then I guess neutral is OK – but then why bother asking people’s opinions if you are going to give them the option to opt out and record a neutral position? For example, if you are trying to decide what action to take on some training or service you are providing and over half your audience is “neutral,” then what do you do with that information?

  • Rire Scotney

    Hello Jane. Thank you for this. This is very useful for those of us who are not evaluators but who need to apply evaluation techniques, including doing mini evaluations, from time to time. I recently had to put together a feedback questionnaire for a series of development workshops. I used a four-point rating scale that I thought would more or less hold for all questions: To what extent [insert the question... e.g., to what extent did receiving the workbook before the workshop contribute to your motivation to attend?] The scale we used was: 1. NOT AT ALL, 2. TO SOME EXTENT, 3. TO A GREAT EXTENT, 4. COMPLETELY.

    So you can see it’s not going to work sensibly from a linguistic or evaluative angle for all questions. Can somebody really be “Completely” motivated for anything, and how can you genuinely attribute this to receiving a workbook?

    We allowed space for comment after each question.

    Many things happened.

    1. We got a great deal of generally useful information and were able to distil broad trends and themes.
    2. Many participants altered the rating scale to add in-betweens. These were hard to include in the final tally.
    3. We could see that participants were interpreting the questions differently.
    4. The comments were vital in helping us to understand the ratings. However, in the final tally there was minimal opportunity to maintain the association between the comment and the rating, so the meanings of the ratings were diluted. (I went through the comments to 14 questions on about 80 forms, sorted them into themes, and was able to maintain some association with the participants’ intent that way.)

    The biggest learning for me was to be very clear why we were asking the question: what were we going to do with the information? I think we asked a few questions that, on reflection, we probably didn’t need to.

    The second biggest learning was the amount of work involved in framing questions with minimal ambiguity so that we could form reliable conclusions from the info. Related to this is the need to somehow maintain the relationship between the rating scale and any comments that are sought.

    Luckily, themes were all we really needed, but I can see how these exercises can very easily become a waste of time, and the cost of that waste is so much more than the actual time.

  • David Onder

    In response to Dee Ann Benard, I usually ask an additional question to qualify the importance of a particular item (or in your case, the initial level of expectation). That helps me to better judge the response to the outcome. David

  • Scott Bayley

    The proper format and functioning of a response scale is ultimately a matter that can be (and should be) empirically verified. For an illustration of how this can be done using Rasch measurement theory see: Bayley, S. 2001, ‘Measuring Customer Satisfaction’, Evaluation Journal of Australasia, vol. 1, no. 1, pp. 8-17.

  • Patricia Rogers

    Unfortunately the excellent archive of the Evaluation Journal of Australasia at http://www.aes.asn.au/publications/ does not include Vol 1 No. 1 of the journal. Scott, do you have an e-copy you can point people to?

  • Scott Bayley

    If anyone would like a pdf copy of my 2001 paper (referred to above) they can email me at: Scottbayley56@yahoo.com.au

    regards
    -Scott Bayley

  • To David Onder – Thanks for that; few people seem to do that (ask an additional question to qualify the importance of a particular item).
