Well, this week is Evaluation Week at SIOP (the Society for Industrial & Organizational Psychology) and SIOP week on the AEA365 blog. So, it’s a good time to consider some of the synergies across the two disciplines.
My doctoral training was in organizational psychology (with some industrial psych at Master’s level), which is the career I was pursuing when I stumbled across evaluation. I was stunned by the synergies between the two disciplines. And the disconnects!
Personnel evaluation includes performance appraisal, personnel selection (for hiring or promotion), assessment of potential (e.g. for succession planning or management development programs), and assessment of training and development needs (for current and/or future work).
Some of my early work in I/O psychology (and, though I didn’t realise it at the time, in evaluation as well) was in “reviewing” a management-by-objectives (MBO) performance appraisal system, and later doing some work to try to improve it.
MBO is basically goal-based evaluation applied to performance appraisal, and it comes with many of the same problems. The incentive is for people to set objectives that sound terribly impressive but are actually easy to meet or exceed (sound familiar?), based on performance indicators that are specific, measurable, achievable, realistic, and timed (SMART).
Like many performance indicators we see in program, policy, and project evaluation, SMART indicators are generally quantitative, narrow, and sample just tiny elements of the wider performance domain. As a consequence, they are generally easy to manipulate.
The organization could see this problem and asked me to work with the right people to create something that really captured what was important about performance. We started off using a tool from I/O psych: behaviorally anchored rating scales (BARS).
Although this was an improvement, it didn’t fit the requirements well enough, because it still used relatively narrow examples of behavior (‘critical incidents’) as exemplars of a particular level of effective or ineffective performance. And people found it confusing to use, because the incidents aren’t positioned exactly at the rating points (see an example, from an article by Pounder).
What was needed was something that captured, in a broad-brush way, the essence of what excellent performance really was, and did so in a way that performance couldn’t be made to look good unless it actually was good. So, this was my first experience of developing evaluative rubrics.
Here’s a sample, developed for senior managers …
Senior Management Objective: Contribution to the Management Team
Constructive contribution to the Senior Management team to ensure enhancement of team operation and the achievement of the company’s business goals and strategic plan.
- Support of other team members, including being open and keeping team members informed.
- Praise and acknowledgement of team members, including giving and actively seeking constructive feedback.
- Active support of company direction and policy.
- Thorough preparation for Senior Management meetings.
- Proactively seeking to enhance team cohesion and morale.
| Rating | Description of Performance |
| --- | --- |
| excellent performance | All of the elements listed under “very good” and, in addition, took an activist role in promoting the interdependence of the members of the Senior Management Team. Engaged the other members of the team, encouraging them to challenge each other, see each other’s points of view, and take a cooperative approach to decision making. Willingly took responsibility for balancing trade-offs, making tough choices, addressing conflicts between short-term and long-term goals, and thinking ahead continuously to identify competing imperatives. |
| very good performance | All of the elements listed under “good” and, in addition, actively supported and encouraged other team members to challenge ideas, consider their implications in the medium and long term, and create innovative ideas and solutions to maximise the company’s success in achieving its goals. Maintained and enhanced effective working relationships despite conflicting roles and differing viewpoints. Demonstrated an understanding of the business that transcended a Business Unit perspective, including a willingness to share the political costs of changes. Showed evidence of taking a leadership role in innovation, thinking through and addressing the implications of change, and actively seeking information rather than simply waiting for the formal presentation of ideas. |
| good (expected level of) performance | Prepared thoroughly for meetings and showed a clear understanding of issues and background material. Shared information honestly and openly, and raised issues/concerns in a timely manner; ensured no problems remained unresolved at the end of meetings. Constructively challenged others’ ideas, always providing workable alternatives; did not take offence when challenged. Actively supported company direction and policy; gave full and active support to and ownership of decisions once made. Represented and supported the agreed senior management team philosophy. |
| substandard performance | Made minimal contribution overall; often neglected to constructively challenge the ideas of others, or reacted defensively when challenged. Focussed predominantly on own Business Unit; approach largely centred around protecting own area. Displayed minimal commitment to organisation changes; reluctant to assume true ownership of policy, projects and tasks. |
| extremely poor performance | Frequently negative about others’ ideas; gave little or no constructive feedback for improvement to team members; ill-prepared for meetings; lack of contribution negatively impacted on team cohesion and morale; often defensive when challenged. Focussed exclusively on own Business Unit; gave little consideration to the direction and policy of the company as a whole. Resistant to change; showed little commitment after decisions were made. |
I later developed a range of these for several different organizations, covering all sorts of positions, including sales, internal HR consultants, personal assistants, clerical staff, accounting staff, and more.
What about some examples from policy or program evaluation?
For some examples of rubrics being used for policy evaluation, see the NZ Ministry of Education’s Measurable Gains Framework, designed to track performance and strategic outcomes for Ka Hikitia, the Maori (indigenous) education strategy. [Scroll to the bottom of that page to access the downloadable rubrics.]
These are also starting to be used in schools to facilitate evaluative conversations about what effectiveness for Maori learners looks like in practice.
And a spot of humor …
Some of you may get a giggle out of this hilarious application of behaviorally anchored rating scales (BARS), from Jeff McHenry in SIOP’s TIP publication: