It’s that time of the year when many of our colleagues in academia are snowed under with papers to grade. Not fun!
To help inject a little humor into this task, how about playing a little Paper Grading Bingo with colleagues? [This is also a great one for TAs (teaching assistants) who have grading …]
Read the whole post –> The Friday Funny: Paper Grading Bingo
There’s a unique and extremely challenging barrier to singing the ‘no value-free’ parts of the genuine evaluation song in a higher education (a.k.a. tertiary education) setting.
And that’s what Michael Scriven calls the value-free doctrine.
Last week I delivered the opening keynote at the Self Assessment for Quality conference for tertiary (i.e., higher) education organizations.
Read the whole post –> Pushing sand uphill with a pointy stick? ‘No value-free’ in higher ed evaluation
Developing good performance indicators is not easy. The history of their use is littered with examples of how they can paint a distorted picture of performance and create dysfunctional incentives. Burt Perrin’s report to the OECD (Organization for Economic Co-operation and Development), Implementing the vision – addressing challenges to results-focused management, …
Read the whole post –> Punished for productivity – poor use of an average in performance evaluation
Many thanks to Michael Quinn Patton for sending through this gem (from the New York Times) about a rather interesting essay exam used to select graduate students for All Souls College in Oxford, England.
Read the whole post –> Oxford admissions essay: “simple, yet devilish” … An evaluation aptitude test?
In the medical profession in particular, there are some very rigid beliefs about what constitutes good enough “evidence of effectiveness” to justify offering, recommending, allowing patients to try, or even just not vehemently opposing a particular type of treatment. There are some glimmers of hope in other sectors (e.g., the Best Evidence Synthesis work here in New Zealand). But serious challenges remain in building a credible evidence base in three areas, given the kinds of constraints and realities surrounding them: (1) cutting-edge treatments; (2) treatments that are by their very nature tailored/individualized rather than standardized across patients or populations; and (3) learning what works for small sub-populations.
Read the whole post –> What constitutes “evidence”? Implications for cutting-edge, tailored treatments, and small sub-populations