Is This Discipline a Science? Assessment Centres

This is a question I have asked myself a lot over the years. Science has clearly been used as the foundation to give the discipline an air of respectability. Science is the validity evidence that everyone requests. Science is the checkpoint everyone uses to determine whether a measure has been constructed correctly. Science is the ongoing basis we use to determine the continued relevance of a proposed psychological tool or solution.

But how much of this discipline is really underpinned by science? In other blog posts, I have discussed the issue of ipsative testing, in part to show that it is often marketing rather than science that drives the discipline. Such 'gimmicks' are used to position and sell products regardless of the evidence. Moreover, practitioners often go for the most palatable option regardless of what is actually scientific.

An example is the multi-method measurement of competencies in assessment centres. The idea that ratings gathered across different exercises will cluster neatly into the intended competency dimensions has long been disputed. I quote my colleague Duncan Jackson (from another forum) on this matter:

“I agree that ACs can provide useful information, however, I think that the value of this information becomes masked when we start trying to use competencies (aka dimensions) scored *across* exercises. It has long been suggested that scoring dimensions in this manner is not the best way to use AC data (since Sackett & Dreher, 1982), yet, in practice, it continues to be the norm…there have been a number of studies that have applied mixed dimension AND exercise factors (also referred to as ‘Frankencentres’). The problem here is a) whether the dimension scores, in and of themselves, are internally consistent, b) whether the mixed model is parsimonious, and c) whether dimensions add explanatory variance over-and-above exercises”.
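
Duncan's points a) and c) are easy to make concrete. The sketch below is a hypothetical simulation, not data from any real assessment centre: it generates post-exercise dimension ratings in which exercise variance swamps dimension variance, then runs three checks: the cross-exercise correlation pattern, the internal consistency of a dimension scored across exercises, and the incremental variance that dimension scores add over exercise scores. All sample sizes, weights, and the criterion are invented for illustration.

```python
# A minimal simulation (Python + NumPy), not real AC data: ratings are generated
# so that exercise (method) variance dominates dimension (trait) variance, the
# pattern the AC literature reports. All weights and sample sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
n, n_dims, n_exs = 500, 3, 4                  # candidates, dimensions, exercises

dim_true = rng.normal(size=(n, n_dims))       # latent dimension standing
ex_true = rng.normal(size=(n, n_exs))         # latent exercise performance
ratings = (0.3 * dim_true[:, :, None]         # weak dimension signal
           + 0.8 * ex_true[:, None, :]        # strong exercise signal
           + 0.6 * rng.normal(size=(n, n_dims, n_exs)))  # rater error
# ratings[i, d, e] = candidate i, dimension d, rated in exercise e

# Construct-validity check: the same dimension across exercises should correlate
# more highly than different dimensions within one exercise; here it does not.
same_dim = np.mean([np.corrcoef(ratings[:, d, e1], ratings[:, d, e2])[0, 1]
                    for d in range(n_dims)
                    for e1 in range(n_exs) for e2 in range(e1 + 1, n_exs)])
same_ex = np.mean([np.corrcoef(ratings[:, d1, e], ratings[:, d2, e])[0, 1]
                   for e in range(n_exs)
                   for d1 in range(n_dims) for d2 in range(d1 + 1, n_dims)])
print(f"mean r, same dimension / different exercises: {same_dim:.2f}")
print(f"mean r, different dimensions / same exercise: {same_ex:.2f}")

# a) Internal consistency of one dimension scored across exercises.
def cronbach_alpha(items):
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

print(f"alpha, dimension 0 across exercises: {cronbach_alpha(ratings[:, 0, :]):.2f}")

# c) Do dimension scores add explanatory variance over exercise scores?
# The criterion here is driven by exercise performance, which is an assumption.
criterion = ex_true @ rng.uniform(0.5, 1.0, n_exs) + rng.normal(size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

exercise_scores = ratings.mean(axis=1)    # average over dimensions, per exercise
dimension_scores = ratings.mean(axis=2)   # average over exercises, per dimension
r2_ex = r_squared(exercise_scores, criterion)
r2_both = r_squared(np.hstack([exercise_scores, dimension_scores]), criterion)
print(f"R^2, exercises only: {r2_ex:.3f}")
print(f"incremental R^2 from dimensions: {r2_both - r2_ex:.3f}")
```

In this simulated setting the familiar result falls out: ratings correlate far more strongly within an exercise than within a dimension, alpha for a cross-exercise dimension score is low, and dimension scores add essentially nothing beyond exercise scores.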

The literature is clear that most assessment centres do not measure what they purport to. This does not mean that it cannot be done: I have seen data from a leadership consultancy here in the UK indicating that it is possible when a centre is well designed. The fact remains that, for most practitioners, mis-measurement in assessment centres is the norm. Science, it appears, remains the poor cousin to commercial drivers.

Baron, H. & Janman, K. (1996). Fairness in the assessment centre. In C. L. Cooper & I. T. Robertson (Eds.), International Review of Industrial and Organizational Psychology, Vol. 11, (pp. 61 – 113). Chichester: John Wiley and Sons.

Lievens, F. & Klimoski, R. J. (2001). Understanding the assessment center process: Where are we now? In C. L. Cooper & I. T. Robertson (Eds.), International Review of Industrial and Organizational Psychology, Vol. 16, (pp. 245-286). Chichester: John Wiley and Sons.

Lievens' research tends to focus on traditional ACs. Alternatives have been suggested because of these construct validity problems. For a critique and empirical study of this heavily debated issue, see:

Lance, C. E., Lambert, T. A., Gewin, A. G., Lievens, F., & Conway, J. M. (2004). Revised estimates of dimension and exercise variance components in assessment center post exercise dimension ratings. Journal of Applied Psychology, 89, 377–385.
