How NOT to evaluate proposals

Jane Davidson here. [You can tell on the GE site which of us is posting, but unfortunately not on the email feed. Please hit reply if you know how to do this on Feedburner!]

Have you ever been struck by the irony of clearly invalid methods used to evaluate proposals for evaluation work?

I’m shaking my head at one such RfP as we speak. I won’t mention the commissioning organization, but I find it just staggering. And this is not at all uncommon.

Here are the criteria, which perhaps seem sensible enough to the evaluatively untrained eye …

  • Description of methods (10%)
  • Performance measures and targets (5%)
  • Professional expertise (10%)
  • Previous experience in evaluation (10%)
  • Quality assurance (5%)
  • Service priorities (can complete within timeframe) (10%)
  • Price (50%)

As is all too typical, there is no budget range given, so it’s anyone’s guess whether the organization is after a Rolls Royce/Cadillac or a bicycle/moped-scale evaluation. It’s not clear how price will be evaluated, but one has to assume that “cheaper is better”.

Perhaps the part that frightens me the most (on behalf of the taxpayer who will be funding this and the recipients whose lives depend on the quality of the program) is the low weights given to things that should (a) be weighted more heavily because they are surely the main point of the work and (b) have ‘bars’ (minima) in addition to weights. A ‘bar’ is a minimum level you must pass in order to be considered at all.

When only numerical weights are used, this means that very poor performance on one or more low-weighted criteria could, in theory, be compensated for by good performance on a highly weighted one.

Let me rephrase the above criteria as plain language questions to help illustrate the problem here:

  • How sound are your ideas for doing this entire piece of work? (10%)
  • How good are the measures you plan to use? (5%)
  • Do you have the expertise to do a decent job? (10%)
  • Have you ever actually done an evaluation before? (10%)
  • How do you make sure you are doing a good job? (5%)
  • Have you actually got time to do this work? (10%)
  • Are you cheap? (50%)

In theory, a very cheap proposal from a bidder with no actual evaluation experience and a flimsy evaluation plan will score way ahead of an experienced evaluation team with a sound plan that, carefully costed, will require a higher budget.
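
To make the arithmetic concrete, here is a minimal sketch of how a plain compensatory weighted sum plays out. Only the weights come from the RfP criteria above; the two bids and their 0–10 criterion ratings are invented purely for illustration.

```python
# Illustrative only: the weights are from the RfP above; the 0-10 ratings for the
# two hypothetical bids are invented to make the arithmetic visible.
WEIGHTS = {
    "methods": 0.10,
    "measures": 0.05,
    "expertise": 0.10,
    "experience": 0.10,
    "quality_assurance": 0.05,
    "timeframe": 0.10,
    "price": 0.50,
}

def weighted_score(ratings):
    """Plain compensatory weighted sum: strength on one criterion offsets weakness on another."""
    return sum(WEIGHTS[criterion] * value for criterion, value in ratings.items())

cheap_and_flimsy = {
    "methods": 3, "measures": 2, "expertise": 2, "experience": 1,
    "quality_assurance": 2, "timeframe": 8, "price": 10,  # rock-bottom price rates best on price
}
sound_but_costlier = {
    "methods": 9, "measures": 8, "expertise": 9, "experience": 9,
    "quality_assurance": 8, "timeframe": 8, "price": 4,   # careful costing rates poorly on price
}

print(round(weighted_score(cheap_and_flimsy), 2))    # 6.6 -- the flimsy bid comes out on top
print(round(weighted_score(sound_but_costlier), 2))  # 6.3
```

With price carrying 50% of the total, the rock-bottom bid wins despite rating poorly on every criterion that actually matters.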

Note that adjusting the weighting will only solve part of the problem; ‘bars’ (minima) are critical here. If the proposed approach is clearly inadequate, if the team simply doesn’t have the expertise to do the job, or if they have never actually done a single evaluation before, the proposal should be excluded outright.
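
Continuing the same invented figures, here is a hedged sketch of what ‘bars’ change: each must-pass criterion gets a minimum rating (the particular bar levels below are made up for the example), and any proposal falling below a bar is excluded before the weighted scores are even compared.

```python
# Illustrative only, continuing the sketch above: invented 'bars' (minimum
# acceptable ratings) on the must-pass criteria.
BARS = {"methods": 5, "expertise": 5, "experience": 4}

def passes_bars(ratings):
    """A proposal must clear every bar to be eligible at all; weights play no part here."""
    return all(ratings[criterion] >= minimum for criterion, minimum in BARS.items())

def evaluate(ratings):
    if not passes_bars(ratings):
        return None  # excluded outright, no matter how cheap
    return round(weighted_score(ratings), 2)

print(evaluate(cheap_and_flimsy))    # None -- fails the methods, expertise and experience bars
print(evaluate(sound_but_costlier))  # 6.3  -- ranked among the eligible proposals
```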

So, how do you synthesize performances on multiple criteria?

This is a fairly substantial topic, but here are the hottest leads:

For more ideas on commissioning genuine evaluation:

3 comments to How NOT to evaluate proposals

  • Pete McMillen

    Hi Jane,

    I am thrilled to intuit that your mana keeps growing exponentially. No, really! Many in the business of evaluation would risk their livelihood sharing provocative, but spot on, sentiments like this in public. I’m reminded of last year’s AES keynote by Trish Greenhalgh, whose reputation (and income) was, I guess, unsullied by right royally provoking no less than Her Majesty’s National Health Service. Link to prez: http://www.aes.asn.au/images/stories/files/conferences/2012/presentations/Wednesday/WedHallC930Greenhalgh.pdf

    Alas, mere mortals like many of us are less likely to stay in business by criticising evaluation commissioners, irrespective of how daft their selection processes and criteria.

    So on a more serious note, it seems quite overdue that evaluators collectively and formally confront this nonsense, rather than individually and independently. Especially government commissioning, which is the main culprit. Given that fiscal restraint is the order of the day – at least until the mad scramble to squander taxpayers’ hard-earned dollars at the end of the financial year – I believe it timely to respectfully offer the hands that feed us (government and, as necessary, other commissioners) some guidance in efficient and effective evaluation contracting. Surely taxpayers and evaluators alike have little to lose?

    Disclaimer: Pete is actually employed by central government, and until recently was involved in a fair volume of research and evaluation procurement – inefficiency and opacity aside. He has since moved away from evaluation and on to more mundane data-heavy science and innovation information, monitoring and reporting systems.

  • You are making a critical point that for some reason is hard to get across. Hiring an organization that gave you a cheap price but has low qualifications or an unsatisfactory past performance can be worse than not hiring anyone at all. So many times, cheap proposals become expensive implementations. Or unfinished failures. Or useless evaluation reports on the shelf that everyone tries to forget.

    In health quality improvement, people worry that it will cost too much to worry about quality and that it will be time-consuming, so better just to push the services out fast to “reach people.” Of course, “reaching people” with bad health services might mean you kill more of them than you would if you worried about quality, or that you will be ineffective and waste the resources you have to deliver these services.

    So, to those who say “how can we afford to worry about quality” (whether it is on quality of a proposal or a service), I join others who respond: “How can we afford not to?”

  • Dean Adam

    Hi Peter, I’m in central government also. I appreciate some of the frustration evaluators, researchers and others in contracted advisory roles often face working with bureaucrats and bureaucracies. However, I’d encourage an approach of joint education and working for change.

    I don’t think it’s unreasonable to assume that there are as many (or at least some!) agents for change within an organisation as there are without.

    I believe that there is a reasonable appreciation in government that contracted experts are brought in for their expertise, and that it’s a waste of time and money not to listen to them. I also agree with a lot of Jane’s thoughts around poor processes for getting good or relevant expertise.

    On the flip side, I’ve seen lots of colleagues frustrated by being seen as an afterthought within their own process. Evaluators who don’t work with funders as partners, who do not always even see the funders and project managers as relevant stakeholders in the project, and who do not understand the constraints or environment that the project, evaluation or funders are working within. Evaluators who (cough) don’t really understand evaluation, or who consider their role to be advocates for a program or issue.

    There are heaps of things funders can do to improve this relationship and help evaluators make more useful contributions. Beginning the relationship earlier in the intervention design and implementation process, for starters.

    None of this is happening in a vacuum, nor are these particularly silent issues.

    Sir Peter Gluckman has written a couple of times on Government’s use of evidence in policy development. Scott and Evans from ANZSOG have recently presented a phase one report on Australian and NZ capability to implement evidence-based policy. I think these papers jointly recognise both an ideal end point and some of the barriers and challenges that researchers, evaluators, and policy agencies need to be considering as we work towards that ideal.

    The Families Commission Amendment Bill is, amongst other things, proposing a lead role for the Commission in the contracting of research and evaluation for Government.

    I think it’s an interesting, albeit busy and sometimes congested, space for evaluation and policy at the moment.

    I was rereading the WHO document ‘Changing Mindsets’ over the weekend. There was a section in there that struck me as the same thing we’re talking about, and that has been discussed for… sometimes it feels like forever (although I’m not that old!).

    “In the most effective decision making environments, all relevant parties – researchers, decision-makers and other stakeholders, including civil society actors – work together as interdependent allies in an environment of mutual trust and respect. This enables major decisions to be based on a solid foundation of evidence and benefit from a broad range of inputs.

    More commonly, however, researchers and decision-makers are divided by a gulf of misunderstanding, unaware of the added value they could bring by working together in a collective unit.”

    To my mind that overstates the problem. I don’t know many researchers, evaluators or bureaucrats who don’t agree on the benefit and value of collaboration or the importance of evidence in informing policy. I do think we often underestimate the challenges in collaborating or the degree of our differences (whether it’s our ideologies, drivers and motives, knowledge, or organisational cultures).

    I like to think that we’re starting to chip away at some of these issues, and I agree that it’s important to have people like Jane who are both vocal in highlighting the problems and active in proposing and creating solutions that we can try out.

    keep the faith! ;)

    ka kite ano e hoa.
