Heads-uppiness: an important aspirational role for evaluation?

I’ve been thinking about the various ways in which evaluation can, does, and should have influence, in preparation for the forthcoming Australasian Evaluation Society conference on this topic.

That useful, modern resource, The Urban Dictionary, which has already prompted some of our thinking about genuine evaluation and inspired our GE badge and jingle, has alerted me to a potentially useful concept for evaluation, coined by ChristinaM33 on Sep 22, 2008:

heads-uppiness
1. the quality of communication in which one gives a warning of impending action without necessarily giving all the details.
“I just wanted to voice my opinion, and support for better communication/inclusion/heads-uppiness.”

Does this have relevance for evaluation, which tends to look backwards, or is it more for other types of activity that focus on looking forward? Is it relevant only for the “Now What” element of evaluation?

1 comment

  • Kelci Price

    This puts me in mind of a dialogue I’ve been having with myself over the last few days about my frustration at having an evaluation career that often seems to consist of delivering exactly the same message to various program owners:

    a) the program was not well conceptualized from the beginning, so staff had difficulty enacting it coherently, and the various activities didn’t really support one another;

    b) implementation was not monitored or regularly discussed, so there were major problems and implementation was not aligned with expectations;

    c) the program was not well integrated into the setting or aligned with organizational strategies, so it seemed like a stand-alone initiative divorced from everything else the participants (and the organization) were involved in;

    d) the program did not accomplish its goals (but given the goals it set and the activities it proposed to carry out, this was unlikely to ever happen anyway).

    It seems like an underutilized component of the evaluation process happens at the organizational level: discussions about what a coherent strategy to address a problem would look like, what programs would fit into that strategy, and assumption testing of the proposed programs to see whether their objectives and activities hold up to scrutiny. Maybe if evaluators were able to work at the macro level in addition to the program level, we could offer organizations some “heads-uppiness” about common pitfalls in enacting programs, and help them critically assess, monitor, and evaluate their strategies with regard to their whole portfolio of offerings (and the larger context in which they are working). This came up last year at AEA in terms of some foundations moving in this direction, and I think evaluators have the skills and mindset to be great partners for organizations in this work.