Free webinar on measurement, risk and uncertainty

One of the important features of genuine evaluation is appropriate measurement, including dealing with uncertainty, as I was reminded by Chris Coryn of the Evaluation Center at Western Michigan University, in our discussions at the International Summer School on Public Policy Evaluation Research last week.

A free webinar on 16 September, 10.30am – 11.30am CDT, by Doug Hubbard, “Measuring Risk – what doesn’t work and what does”, promises to address issues of appropriate measurement when dealing with intangibles and uncertainty.

Doug Hubbard, author of “How to Measure Anything: Finding the Value of Intangibles in Business”, has developed an approach called “Applied Information Economics” (AIE), which promises to thoughtfully address uncertainty in developing estimates.

According to its Wikipedia entry:

AIE differs in several ways from other popular methods of decision analysis:

Unlike the accounting-style business case or cost benefit analysis, it does not rely entirely on point estimates of uncertain values. Since it uses the Monte Carlo method, uncertainty can be modeled explicitly.
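The contrast between a point-estimate business case and a Monte Carlo model can be sketched as follows. This is a minimal illustration, not AIE itself: the benefit and cost ranges are made-up figures, and mapping a 90% confidence interval to a normal distribution is one common simplification.

```python
import random

random.seed(42)

# Hypothetical project: annual benefit and cost are uncertain.
# A point-estimate business case computes a single number:
point_benefit = 120_000   # "best guess" benefit (illustrative)
point_cost = 100_000      # "best guess" cost (illustrative)
point_net = point_benefit - point_cost  # looks safely positive

# A Monte Carlo model instead draws both values from ranges that
# express our uncertainty (90% confidence intervals mapped to
# normal distributions, a common simplification).
def normal_from_90ci(low, high):
    mean = (low + high) / 2
    sd = (high - low) / 3.29  # a 90% CI spans ~3.29 standard deviations
    return random.gauss(mean, sd)

trials = 100_000
losses = 0
for _ in range(trials):
    benefit = normal_from_90ci(60_000, 180_000)
    cost = normal_from_90ci(80_000, 120_000)
    if benefit - cost < 0:
        losses += 1

print(f"Point-estimate net value: {point_net}")
print(f"Probability of a net loss: {losses / trials:.0%}")
```

The point estimate alone suggests a comfortably positive project, while the simulation exposes a substantial probability of a net loss, which is the kind of risk information a point estimate hides.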

Has anyone read the book? Or will attend the webinar? (As it is scheduled for 1.30am Friday Melbourne time, I don’t think I will be there).

Is it a useful new approach to measuring the hard-to-measure or an oversell on quantification? Is it only applicable to business or does it have applications in the government and not-for-profit sectors? Are the criticisms discussed in some reviews valid? Are there other texts on this issue that are better?

The webinar will focus on:

· The Problem – Why your method may be a ‘management placebo’ and why that is the biggest risk you have

· Problems that many methods ignore – and problems some methods introduce

· What Does Work – Studies reveal that some methods show consistent, measurable improvements in forecasts

· Examples of Real Improvements

· Overview of Applied Information Economics (AIE) Process

· Common Objections to quantitative methods and the misconceptions behind them

Webinar registration is at

4 comments to Free webinar on measurement, risk and uncertainty

  • Please read a good question and answer about measurement problems from a helpdesk:


    Critique of Governance Assessment Applications: Identify the key literature that critiques the use and application of governance assessments.

    Helpdesk response

    Key findings: Governance assessments are based on subjective indicators (or measures), objective indicators, or a combination of the two, known as composite indicators. Composite indicators are the most popular and are used by international organisations, donors, investors and the media (Arndt, 2008). Of these, the most popular seems to be the World Bank’s Worldwide Governance Indicators (WGIs).

    Transparency International’s Corruption Perceptions Index (CPI) and the World Bank/International Finance Corporation’s Doing Business Indicators are also in common use. The main use of governance indicators by international organisations and donors is to incentivise developing nations to improve their governance and to improve the allocation of aid.
    There are numerous criticisms of the commonly employed measurements, both specific to types of assessments and of assessments in general. There is debate over the sources used – whether sources are reliable, and how many and which sources provide the best measurements. A change in the mixture of sources used impacts conceptual and statistical precision.

    Another common criticism relates to the margin of error. This is routinely ignored by those who use these measurements, such that cited differences between countries and between different times are, in fact, not statistically significant. Other criticisms are that measurements lack transparency, suffer from selection bias and do not help developing countries identify how to improve the quality of governance.

    At the same time, there is growing resistance by developing countries to indicators that are developed and used by ‘outsiders’. New forms of assessment are increasingly country-led, and in some cases continent-led, such as the African Peer Review Mechanism (APRM).
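The margin-of-error criticism above can be made concrete with a small sketch. The scores and standard errors below are illustrative assumptions on a WGI-style scale, not real country data; the point is only that two scores that look different may not be statistically distinguishable once their standard errors are combined.

```python
import math

# Hypothetical governance scores (point estimate, standard error);
# illustrative numbers, not real WGI data.
country_a = (-0.20, 0.15)
country_b = (-0.45, 0.18)

def significantly_different(x, y, z=1.645):  # z for a 90% confidence level
    (score_x, se_x), (score_y, se_y) = x, y
    diff = score_x - score_y
    se_diff = math.sqrt(se_x**2 + se_y**2)  # standard errors combine in quadrature
    return abs(diff) > z * se_diff

# The 0.25-point gap between A and B looks meaningful in a league
# table, yet it falls inside the combined margin of error.
print(significantly_different(country_a, country_b))
```

Comparisons like this are the reason the WGI authors themselves publish standard errors alongside the point estimates, even though users of the rankings often ignore them.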

    Full response:

    Date query received by the Helpdesk: 16 July 2010

    Enquirer’s contact details:

    DFID Politics and State Team

    This helpdesk response comes from: Governance and Social Development Resource Centre.

    The Governance and Social Development Resource Centre (GSDRC) provides cutting-edge knowledge services on demand and online. It aims to help reduce poverty by informing policy and practice in relation to governance, conflict and social development. The GSDRC is funded by the UK Department for International Development (DFID) and the Australian Agency for International Development (AusAID).

    More information at:



  • If you are in Australia and don’t want to stay up so late, I will send you a recording. Just register and fail to show up.

    If you would like to read a case study of AIE: Applied Information Economics Methodology for an Infrastructure IT Investment

    If you would like to have the slides ahead of time, to better prepare any questions you have for Doug Hubbard: Measuring Risk – What Works and What Doesn’t

    I hope to see you on the 16th.



  • Sue Street

    Outside a very few disciplines, measurement technology isn’t well understood. Notably, it’s not taught in some of the disciplines that are dominant in performance management and reporting – accounting and economics don’t teach measurement.
    The result is an inclination to expect measurements to be perfectly accurate, akin to financial accounts, and to reject information that can’t be measured perfectly accurately.
    This book is a very practical counter to this. It gives some simple but powerful principles, and builds on these with some useful rules of thumb. I’ve found the simplicity and clarity are great for passing on to people who don’t have a background in measurement.
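One of the simple rules of thumb Hubbard presents in the book is the “Rule of Five”: with just five random samples from any population, there is a 93.75% chance that the population median lies between the smallest and largest of the five. A quick sketch checks the arithmetic against a simulated skewed population (the lognormal population here is my own illustrative choice):

```python
import random

random.seed(0)

# Rule of Five: P(median outside sample range) = P(all 5 above) + P(all 5 below)
# = 2 * (1/2)**5, so the median falls inside the range 93.75% of the time,
# regardless of the population's distribution.
analytic = 1 - 2 * (0.5 ** 5)

# Simulation check against a skewed (lognormal) population.
population = [random.lognormvariate(0, 1) for _ in range(100_001)]
median = sorted(population)[len(population) // 2]

trials = 20_000
hits = sum(
    1
    for _ in range(trials)
    if min(s := random.sample(population, 5)) <= median <= max(s)
)

print(f"Analytic: {analytic:.4f}, simulated: {hits / trials:.4f}")
```

The appeal for non-specialists is exactly what the comment describes: a defensible uncertainty reduction from five observations, with no distributional assumptions to justify.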

  • Patricia Rogers

    Thanks for these links and suggestions – will follow up with interest.