B.Sc., M.Sc. Organizational Psychology

Wednesday, July 20, 2011

What are the challenges of conceptualizing and measuring job performance? Rater Bias revealed.

There are various ways to approach the question: What are the challenges of conceptualizing and measuring job performance?

Gatewood & Field (1998) distinguish between four sources of information which can be used to measure job performance:
  • Judgemental Data
  • Production Data
  • H/R Data
  • Training Proficiency

Judgemental data constitute subjective evaluations of performance by others – often a person’s immediate boss. By underlining the word “subjective” I identify a very challenging aspect of conceptualizing and measuring job performance. Judgemental data are often derived from the performance appraisal interview, a face-to-face discussion between the employee and his/her supervisor in which the results of the job performance evaluation are discussed and evaluated.

Unfortunately, one of the most crucial problems/challenges in performance measurement and evaluation lies with the people performing the measurements, from now on called the “raters.” L.N. Jewell and Marc Siegall, in their book Contemporary Industrial/Organizational Psychology, identify the following problems:

We will start by identifying two categories of rater bias:
1) Task-based rater bias: oversimplification of the appraisal/measurement task
a) Rater’s strictness: the tendency to confine appraisals to the lower end of the evaluation scale
b) Rater’s leniency: the tendency to confine appraisals to the upper end of the evaluation scale
c) Rater’s central tendency: the tendency to confine appraisals to the center of the scale (p. 216)
Central tendency is considered the most common kind of bias: people tend to avoid giving evaluations that are either particularly good or particularly bad, and this constitutes the central tendency effect.
2) Ratee-based rater bias: not a response to the appraisal/measurement task but a response to the specific individuals being evaluated
Before proceeding with naming the ratee-based rater biases, I would like to make a brief reference to Viswesvaran et al. (1996), who examined the academic literature on job performance and compiled a comprehensive list of the various measures of performance. Further analysis showed that job performance, when rated across the nine dimensions, is dominated by a single factor rather than made up of various components. This general factor accounts for 60% of the variance in job performance ratings (Viswesvaran et al., 2005), and it indicates that people tend to be good or poor at their job overall. This finding brings us to the ratee-based rater biases:
a) Halo error: the tendency to evaluate all of the behaviors or traits of one ratee/employee in a manner that is consistent with a global impression or evaluation of that person (Jewell & Siegall, p. 217). Halo error may introduce either a positive or a negative bias into performance appraisal (p. 217). Halo error is interesting in that the tendency to make this error is predictable, but the direction it takes is not! (p. 217)
Some authors have argued that the general factor we referred to previously is due largely or entirely to halo error (Holzbach, 1978; Landy, Vance & Barnes-Farrell, 1982, as cited in Viswesvaran, Schmidt, & Ones, 2005). Is this really the case?
3) Other Sources of Rater Error
Feelings may also get involved in the job evaluation procedure. As Jewell & Siegall report, one study found that “raters who liked their ratees were found to be more lenient and to exhibit more halo error and less range restriction than raters with neutral feelings about the ratees” (Tsui & Barry, 1986). We cannot be sure, though, and I believe that further research is needed on this aspect.
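The task-based biases above are easy to picture as simple distortions of a rating distribution. The following Python sketch is a toy simulation of my own (not from Jewell & Siegall); the 1–7 scale, sample size, and bias parameters are illustrative assumptions. It shows how central tendency compresses the spread of ratings toward the scale midpoint, while leniency shifts them toward the top of the scale:

```python
import random
import statistics

random.seed(0)

# Hypothetical "true" performance scores on a 1-7 rating scale.
true_scores = [random.uniform(1, 7) for _ in range(200)]

def central_tendency(score, pull=0.6):
    # Central-tendency bias: ratings are pulled toward the scale midpoint (4).
    return score + pull * (4 - score)

def leniency(score, shift=1.5):
    # Leniency bias: ratings are shifted upward, capped at the scale maximum.
    return min(7.0, score + shift)

unbiased = true_scores
compressed = [central_tendency(s) for s in true_scores]
lenient = [leniency(s) for s in true_scores]

# Central tendency shrinks the spread of the ratings...
print("spread (unbiased):  ", round(statistics.pstdev(unbiased), 2))
print("spread (compressed):", round(statistics.pstdev(compressed), 2))
# ...while leniency raises their mean without widening the scale.
print("mean (unbiased):", round(statistics.mean(unbiased), 2))
print("mean (lenient): ", round(statistics.mean(lenient), 2))
```

A strictness bias would be the mirror image of the leniency function, shifting scores toward the bottom of the scale instead.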

“Food” for thought!
Obviously the easiest answer to all the previously mentioned errors would be: let’s use multiple raters in order to reduce subjectivity and all the errors connected to it. But would this be the answer, or would it cause more conflict than expected? Could training the raters in an attempt to enhance objectivity constitute a possible answer? Would using judgemental data always in combination with another kind of data reinforce objectivity? Is there a general factor in ratings of job performance after all? Lots remain for us to untangle…
Elena Maniatopoulou
References
Jewell, L. N., & Siegall, M. (1990). Contemporary Industrial/Organizational Psychology (2nd ed.). West Publishing Co.
