Category Archives: Personality

The Adaptive Skills and Behaviours Required to Succeed in Future Work Environments

There is a lot being said about the future of work, and what this means for the type of skills, attitudes, and behaviours we will require to succeed. With this future already upon us, it is important that we pick up our pace of change, and look to build capability that helps us to adapt, thrive and succeed within an ever-changing world. Best-selling author Jacob Morgan describes in his latest book ‘The Future of Work’ five trends shaping the future of work:

  1. New behaviours
  2. Technology
  3. Millennials
  4. Mobility
  5. Globalisation

These trends are bringing a dramatic shift in attitudes and ways of working: new behaviours, approaches, and workplace expectations. Whilst many of us are sensing these rapid changes, we aren’t necessarily sure why these changes are happening, what they mean, or how they will impact us.

As Jacob Morgan says:

“The disruption of every industry is also causing a bit of unrest as people struggle to define where they fit or if they will be obsolete.  It’s forcing us to adapt and change to stay relevant while giving rise to new business models, new products, new companies, new behaviours, and new ways of simply existing in today’s world”.

So, the burning questions are: what exactly do these changes look like for employees, managers, and organisations? And what skills, attitudes, and behaviours do we require to succeed?

What we do know is that modern employees are more self-directed, collaborative in their approach, and want to shape and define their own career paths instead of having them predefined for them.  They are continually seeking out learning opportunities that fit with their personal purpose and professional aspirations, and are looking for development opportunities that benefit them holistically as a ‘whole person’.  They seek the skills, confidence and healthy mind-set to challenge the status quo, to think on their feet, and to continually adapt within highly fluid and ever changing organisational environments.  They are looking to learn and develop emotional and social intelligence;  to work within increasingly networked communities;  to lead, collaborate, innovate and share.

Consistent with the above are five crucial behaviours, identified by Morgan as being required by employees in the modern workplace:

  1. Self-Direction and Autonomy – to continually learn, and stay on top of important tasks within manager-less organisations
  2. Filter and Focus – to be able to manage the cognitive load associated with increasing amounts of pervasive information
  3. Embracing Change – to continually adapt to new working practices whilst demonstrating resilience and healthy mind-sets
  4. Comprehensive Communication Skills – to support collaborative work practices, and to communicate ideas and provide feedback succinctly
  5. Learning to Learn – to be willing to adopt a pro-learning mind-set; to step outside comfort zones, reflect, and make meaning of experiences.

Organisations also need to adapt to the future of work to support these trends and demands, and ensure they are attracting, developing, and retaining top talent. A good place to start is by fostering and embracing the principles of organisational learning. Peter Senge suggested in his book ‘The Fifth Discipline: The Art and Practice of the Learning Organization’ that in order for an organisation to remain competitive within the complex and volatile business environments in which we find ourselves operating, it must build its capacity for continual transformation. This involves developing cultures that:

  • Encourage and support employees in their pursuit of personal mastery (the discipline of continually clarifying and deepening our personal vision, and seeing reality objectively)
  • Encourage employees to challenge ingrained assumptions and mental models
  • Foster genuine commitment and enrolment through shared visions.

Here at OPRA we are developing a carefully selected set of best-of-breed soft-skill learning and development programmes to help individuals and organisations embrace these current and future trends. Our programmes are designed to equip professionals with the emotional intelligence, healthy thinking, learning agility, collaborative team behaviours, and motivation required to demonstrate exceptional performance within the modern workplace. We have grounded our programmes in the principles of positive psychology, and in an understanding that REAL learning and engagement only occur when self-awareness, participation, and a tangible sense of progress are present. In light of this, all our programmes are designed to:

  • Develop self-insight and raise awareness of individual and collective strengths
  • Utilise proven, research-based content, delivered by expert and accredited practitioners
  • Provide access to on-going professional coaching opportunities to further deepen learning
  • Incorporate social learning methodologies to encourage and enable collaboration and sharing
  • Provide applied on-the-job challenges and reflection to embed and sustain behavioural changes.

Watch this space for further announcements about OPRA Develop over the coming months. In the meantime, if you would like to discuss how OPRA can support your learning and development with proven, research-based soft-skill development programmes, then please contact your local OPRA office:

Wellington: 04 499 2884 or Wellington@opragroup.com

Auckland: 09 358 3233 or Auckland@opragroup.com

Christchurch: 03 379 7377 or Christchurch@opragroup.com

Australia: +61 2 4044 0450 or support@beilbyopragroup.co.au

Singapore: +65 3152 5720 or Singapore@opragroup.com

Being Present During Feedback

During my career in the HR and L&D space, I could not put a figure on the number of times I have given 360 degree feedback to managers, often as part of a leadership development programme. The 360 degree process is certainly not a new one. We know the demand for 360 degree surveys is growing, and that this process is increasingly a part of our roles as HR and L&D professionals and leaders in business.

Despite my experience to date, though, I am always amazed at how much I continually learn about myself in these feedback situations. I know I am an extravert and I operate very intuitively. Whilst these are my strengths in giving feedback, they can also be my downfall. I always need to be mindful of how I come across in a 360 degree feedback scenario, and adjust my style for the person receiving the feedback.

The top ten golden rules of feedback are always in our minds: use specific examples, do not judge, choose the environment, etc. However, it is useful to remind ourselves of those other rules of behaviour that are obvious, yet sometimes easy to overlook. We must not forget that how we say things and how we phrase the message carry more weight than we think. You cannot escape your personality, but you can temper it.

I recently read some ideas around how to ‘BE’ in a debrief:

  • Be present and maintain self-awareness
  • Avoid value laden language, tone, body language and facial expressions
  • Avoid making interpretations
  • Avoid the use of closed and leading questions
  • Use open and probing questions to facilitate discussion of the results
  • Draw on the individual’s context
  • Offer suggestions for improvement when invited to do so and you have them to offer
  • Call out any mistakes you make and apologise

(Ref: Dr Ben Palmer, Director, Genos International)

These behaviours sound straightforward, right? Yet it is not easy to always be this way when immersed in a feedback relationship with an individual. Our role, the purpose of the 360 degree feedback, and the emotional response from the individual can all pull us in directions we should not really go. The first point is the key: be present. If we stick to that, then the rest should fall into place.

Faking It

As I/O psychologists, we are extremely reliant on the accuracy of the data that is presented to us. Decisions are made on the basis that what is presented is indeed factual and accurate. But how much data is ever cross-examined? How much ‘faith’ can we put in research that is conducted by test producers? What independent bodies ever scrutinise the data we are presented with?

These questions are surprisingly rarely asked, and data is perhaps too often taken as ‘true’ without any deeper enquiry. A recent study by Daniele Fanelli (reviewed in The Economist, June 6, 2009) brings into question the fidelity of scientific data, noting that the enhancement of data and findings is far more common than people might think.

Fanelli conducted a meta-analysis of surveys investigating scientific honesty, analysing 18 studies on the topic. His findings indicate that while admission of outright fraud was low (2%), about 10% confessed to questionable practices such as ‘dropping data points’ or ‘failing to present data that contradicts one’s previous research’. Moreover, 14% had seen colleagues falsify data, and a whopping 46% noted that they knew of colleagues involved in questionable methodologies.

What I found interesting about this study is that it relies on self-report, which suggests these figures represent merely the tip of the iceberg of questionable science. With respect to test publishers, where there is a vested interest in finding supporting results, it is anyone’s guess how much of what we see represents real effects. I think the message is that all science should be examined with a critical mind, and perverse incentives (whether for personal or commercial gain) should always be considered before placing too much weight on the results.

The Social Implications If Traits Exist

In a previous post I presented arguments for whether traits exist at all. The leading proponent of this view is Bob Hogan, with the idea that personality tests assess attributes one ascribes to oneself. I discussed the issue with an academic from New Zealand, who framed it in terms of cause and effect:

“Neuroticism needs to be x and y, because people tend to behave in a certain way across time and situations. What we need is not alchemy but Mendeleev’s periodic table of elements. We need to know what is causing the various consistencies that we see. Psychometrics is modelling answers to statements and there seems to be some validity in this. However, we should not assume that the underlying construct (true score) has any meaning. The only way forward is genetic and neuroscience studies”.

I agree entirely with both the core question implied (cause) and the limitation of psychometrics in establishing cause. This is a large part of the basis for arguing for a systems-based approach to any question we face as I/O psychologists.

The larger societal question that I see is what happens once we establish cause. Society, and psychology, struggles enough with any degree of determinism in cognitive ability. If personality were to be openly discussed in the same way, the ramifications would be huge, not only for I/O but especially for criminal psychology. In this regard, whether we establish cause or not becomes overridden by what people want, or rather are prepared, to believe. This brings me back to the question of the future of the science of personality. If the future is genetics and neuroscience, as I agree it must be, then that is going to have to be coupled with one heck of an education campaign to get society ready for what the results may be.

The idea of the true self (which Hogan is bringing to the fore in the debate on personality tools) seems fundamental to our area and can perhaps best be described by paraphrasing that movie line from A Few Good Men: “The self. You want the self? You can’t handle the self!”

What Is Stopping The Changes Coming About: The Trouble With I/O

In response to my previous posts, people have asked what I see as the issues currently being faced by the I/O psychology discipline. I would say there are three interconnected issues that affect our discipline. The first two are internal and the third is external. My belief is that until we get our heads fully around these issues and the impact they are having, the discipline of I/O psychology will continue to fall short of its potential.

Issue 1: The difference between mass psychology and experienced/up-to-date practitioners

I/O practice is now a discipline for the masses. It has moved from being the domain of a select few to one of the fastest growing disciplines in the world. With this has come the proliferation of tools and theory (e.g. leadership) so that everyone can have an aid to making decisions on human behaviour. Practitioners of I/O psych (whether they be psychologists or HR professionals) simply do not have the time or skills to uncover unconscious drives, personal constructs, situational taxonomies, or the like. Nor do many have the inclination to remain well read in a discipline that is evolving quickly on the fringes. In terms of theory, getting into the unconscious and deterministic aspects of behaviour is something that people are just not ready for!

Issue 2: The academic system is not serving us well

I have commented on this many times before, but never quite so bluntly. The reality is that the system itself has inherent failings. Firstly, what academics are reinforced for is often the antithesis of quality science. This is captured in the sycophantic, agreeable, and generally passive nature of most academics. Their research, in the main, is hardly ever ground-breaking but follows a set of agreed rules as to what constitutes ‘science’, and so the game continues.

Secondly, from a teaching perspective, there is a pervasive incentive to award degrees to everyone. This is reflected in what is taught, how it is taught, and to whom.

Issue 3: The speed at which decisions must be made

Practitioners need to make decisions quickly in the current selection environment. Recruitment is a tough job: making $150,000 decisions ($75,000 × 2, counting salary alone and not even getting into ROI) with limited information is fundamentally hard. Cognitive tools, identification of negative behavioural tendencies, and the like provide an aid to this process. The key role that personality plays for practitioners is that it provides a semantic code by which we can make decisions under difficult circumstances. Whether it is called personality or a behavioural cluster, the reality is that practitioners need tools to aid decisions, and these tools need to work within the timeframe and paradigm in which practitioners work.

Response Style Indicators And The Concept Of Integrity

Two recent papers have questioned the assumption that validity scales in personality testing, such as social desirability scales, address the inherent problems of self-report data. The assumption is that the inclusion of a response bias indicator somehow provides a litmus test of the validity of a personality report. But like many assumptions, when put to the test this one appears less robust.

McGrath, R., Mitchell, M., Kim, B.H., & Hough, L. (2010). Evidence for response bias as a source of error variance in applied assessment. Psychological Bulletin, 136(3), 450-470.

Abstract

“After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of detecting the presence of response bias. To test these assumptions, we reviewed literature comprising investigations in which bias indicators were evaluated as suppressors or moderators of the validity of other indicators. This review yielded only 41 studies across the contexts of personality assessment, workplace variables, emotional disorders, eligibility for disability, and forensic populations. In the first two contexts, there were enough studies to conclude that support for the use of bias indicators was weak. Evidence suggesting that random or careless responding may represent a biasing influence was noted, but this conclusion was based on a small set of studies. Several possible causes for failure to support the overall hypothesis were suggested, including poor validity of bias indicators, the extreme base rate of bias, and the adequacy of the criteria. In the other settings, the yield was too small to afford viable conclusions. Although the absence of a consensus could be used to justify continued use of bias indicators in such settings, false positives have their costs, including wasted effort and adverse impact. Despite many years of research, a sufficient justification for the use of bias indicators in applied settings remains elusive”.

This is not merely a theoretical question but one that has real practical implications, as captured in the conclusion of the paper:

“What is troubling about the failure to find consistent support for bias indicators is the extent to which they are regularly used in high-stakes circumstances, such as employee selection or hearings to evaluate competence to stand trial and sanity. If the identification of bias is considered essential, perhaps the best strategy would be to require convergence across multiple methods of assessment before it is appropriate to conclude that faking is occurring (Bender & Rogers, 2004; Franklin, Repasky, Thompson, Shelton, & Uddo, 2002).”
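To make the suppressor/moderator test concrete, here is a minimal sketch in Python of the kind of analysis the reviewed studies run (the variable names and simulated numbers are mine, purely illustrative, and not taken from McGrath et al.):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Simulated data: a substantive personality score, a social-desirability
    # "bias" indicator, and a job-performance criterion.
    personality = rng.normal(size=n)
    bias = rng.normal(size=n)
    performance = 0.3 * personality + rng.normal(size=n)  # true validity ~ .30

    # Moderated regression: does criterion-related validity vary with bias?
    # Design matrix: intercept, personality, bias, personality x bias.
    X = np.column_stack([np.ones(n), personality, bias, personality * bias])
    beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
    print(f"interaction weight (moderation effect): {beta[3]:.3f}")

If the bias-indicator logic held, the interaction weight would be reliably non-zero in real datasets; McGrath et al.’s review suggests it rarely is.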

The failure of response style indicators has led researchers such as Uziel (2010) to argue that their interpretation should be redefined. The argument is that response style indicators are not in themselves a measure of the validity of the assessment, but have more to do with a person’s impression management and interpersonally oriented self-control.

Uziel, L. (2010). Rethinking social desirability scales: From impression management to interpersonally oriented self-control. Perspectives on Psychological Science, 5(3), 243-262.

Abstract

“Social desirability (specifically, impression management) scales are widely used by researchers and practitioners to screen individuals who bias self-reports in a self-favoring manner. These scales also serve to identify individuals at risk for psychological and health problems. The present review explores the evidence with regard to the ability of these scales to achieve these objectives. In the first part of the review, I present six criteria to evaluate impression management scales and conclude that they are unsatisfactory as measures of response style. Next, I explore what individual differences in impression management scores actually do measure. I compare two approaches: a defensiveness approach, which argues that these scales measure defensiveness that stems from vulnerable self-esteem; and an adjustment approach, which suggests that impression management is associated with personal well-being and interpersonal adjustment. Data from a wide variety of fields including social behavior, affect and wellbeing, health, and job performance tend to favor the adjustment approach. Finally, I argue that scales measuring impression management should be redefined as measures of interpersonally oriented self-control that identify individuals who demonstrate high levels of self-control, especially in social contexts”.

It is my belief that the solution to this problem for I/O psychologists lies in the application of the measures. Impression management scales tend to be used only in selection settings, and as such, it is rare that personality reports become a ‘decision-maker’. They are merely part of a body of evidence to describe an individual’s suitability for a role. Validity scales should simply provide a measure of how much weight one can put on the personality measure. Any interpretation over and above this is overstepping the mark. In the absence of a ‘valid’ personality report, one must rely more heavily on other sources of data, like interviews and CVs. It is not that people might intentionally misrepresent themselves; rather, we cannot be confident that the personality report is an accurate portrayal of character.

By the same logic, covert measures of integrity (which at best are measures of conscientiousness) should not be used to screen individuals out of the selection process. With a covert test there are assumptions made as to what is being measured. The point-to-point correspondence (George and Smith) is one step removed. One must ‘infer’ that integrity is measured, that the measure is work related, and then define the construct. Hogan’s theoretical work on whether constructs even exist is very applicable here, and the onus of proof is much more evident with a covert measure. I believe this is a problem given the high-stakes nature of integrity testing.

Overt measures, such as the Stanton Survey of Integrity (SSI), don’t make such claims. The questions are overt and the construct is defined exactly as measured, such as rule-breaking. The report even highlights the answers that may be of concern, i.e. ‘this person has admitted to …’. It is a far smaller leap of logic to assume that those who admit to more of these behaviours are higher risk than those who do not. This is supported by the differing SSI score distributions for law-abiding citizens and for those who have broken the law.

Before looking at the SSI data, both in New Zealand and internationally, I was incredulous. However, the reality is that there is a good spread across the one scale measured, and people do admit to a range of behaviours. Once that behaviour has been admitted, it is up to the organisation to decide what to do. There is little inference that needs to be made, as the respondent has provided the information directly as to what they would or wouldn’t do, and what behaviour they see as acceptable. This latter point is key to how overt measures, such as the SSI, work: they examine what behaviours a person has normalised.

In conclusion, integrity measures and response style indicators share a common logic. They are both aimed at eliminating true negatives (not at identifying true positives, as many researchers have assumed). What can be drawn from either measure is related to the overtness of the questions. The more overt the questions, the more justifiable the assumption that the respondent is reporting undesirable behaviours. The more covert the questions, the more the scale should be used only to gauge the confidence one can place in the evidence; any further extrapolation from that point is unwarranted.

The Biology of Traits

I have posted before about the value, or potential lack thereof, of personality testing. A more fundamental question that has been raised is the basis for traits in the first place.

As a foundation, we must look to neuroscience and genetics. This is not my area, but I have provided for readers of this blog a synopsis posted on another forum by an academic from Victoria University, New Zealand (Dr. Ron Fisher):

“We have very good evidence now from twin studies that there is a large genetic component to personality scores (with estimates varying between 30 and 70% of the variance being due to genetic differences). The search for the genetic encoding of these differences has started. I do not think we will find simple mappings that follow Mendel’s laws but rather complex interactions between different alleles positioned on different chromosomes. However, these studies will eventually give us some clues about the complex interaction of genes and how they then lead to personality expressions.

The expanding neuroscience mapping also opens up interesting opportunities. We will never be able to read a person’s mind based on the activation of cells in the brain (since this violates Heisenberg’s principle of uncertainty – a colleague in physics pointed this out once). However, I predict we will be able to map inter-individual differences in activation in specific parts of the brain that are akin to traits”.
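For readers wondering where such 30-70% figures come from, a minimal sketch of Falconer’s classic twin-study approximation may help (the correlations below are invented for illustration, not taken from any study Ron cites):

    # Falconer's approximation: heritability is roughly twice the difference
    # between identical (MZ) and fraternal (DZ) twin correlations on a trait.
    r_mz = 0.50  # illustrative trait correlation for identical twins
    r_dz = 0.25  # illustrative trait correlation for fraternal twins

    h2 = 2 * (r_mz - r_dz)  # additive genetic variance: 0.50
    c2 = r_mz - h2          # shared environment: 0.00
    e2 = 1 - r_mz           # non-shared environment plus error: 0.50

    print(f"heritability estimate: {h2:.2f}")

Different traits and samples yield different twin correlations, which is why published heritability estimates for personality span such a wide range.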

So where does this leave personality testing? I, like Ron, believe that psychometrics is a step towards understanding human behaviour (specifically, the consistent part of human behaviour). I believe that consistency will be described semantically by traits, and that understanding more fully the behaviours that make up those traits is the real question for trait-based personality theorists. To achieve this we need a far more collaborative approach as scientists and practitioners. We must not be swayed by the excessive commercialisation that is the bugbear of this industry.

To conclude, I will leave the last word to Dr. Fisher who had this to say in a post on another forum:

“In the near future, genetic mapping and findings from neuroscience will complement psychometric findings in our understanding of why certain people behave in certain ways. Whether these techniques will ever find their application in work settings, I am not sure. But it would give us a better answer to what lies underneath the currently observed clusters of items as found in factor analytical studies”.

The Big Business of Psychology

Many people are unaware of the big business that is now organisational psychology. I have long argued, when reporting the myths of psychology, that one of the great myths is that it is a discipline driven by science. The reality, however, is that it is a discipline often underpinned by commercial interests.

I have known of many legal battles between psychometric testing companies as they try to monopolise market share. I would imagine that for most companies the marketing spend dwarfs research and development. Very few, if any, modify their models on scientific grounds; they modify their business on commercial ones.

As the recession closes in (or begins to weaken, depending on which country you are in), it is important to remember the recent history of the business of psychology, especially over the last few years. The business has been built on mergers, private equity, share offers, acquisitions, and takeovers, many of which turn out to be quite messy. As an example I draw attention to one of the more famous management break-ups, that of SHL. This involved both parties jostling for position and resulted in commercial battles that continued for years afterwards.

This is not a critique of anyone involved; it is the nature of the game. However, to quote Francis Bacon, ‘Nature, to be commanded, must be obeyed’, and for this reason all practitioners and scientists in the area should be mindful of the often covert drivers of the industry:

From the Telegraph Business News (2002):

‘Further two directors go as SHL equilibrium tested’ By Alistair Osborne, Associate City Editor, 24 December 2002

‘SHL, the psychometric testing group, yesterday ousted two rebel directors from the board but at the price of discovering that more than 40% of investors did not support chairman Neville Bain or chief executive John Bateson.

At a shareholders meeting to settle the internecine warfare on the board, company founders Roger Holdsworth and Peter Saville were forced out of the company, respectively by 55.2% and 54.8% of votes cast.

They followed another non-executive director, David Arkless, the Manpower representative ousted last month, but who voted his company’s 7% stake with the rebels.

However, the rebels’ counter-motions to oust Mr Bain, the former Post Office boss, and Mr Bateson, gained a respective 40.9% and 41.6% of the votes. About 90% of shareholders voted. The meeting, from which Mr Bain banned the press, was at the City offices of SHL lawyer Barlow, Lyde & Gilbert.

Afterwards, Mr Holdsworth, 67, said that, despite the verdict: “We’re feeling relieved and a teeny bit inebriated – there’s been a lot of tension.”

He said that the vote was “not a ringing endorsement of the board”, adding that he and Mr Saville, who owns 11% of the shares, had been “overwhelmed by the support from virtually all SHL employee and ex-employee shareholders”. “We have at least demonstrated the fragility of the situation and that they have a serious communication problem with their staff.”

The four-times married Mr Holdsworth said he and Mr Saville, 57, would now “keep all our options open” over further shareholder action, while hinting they could start a rival business.

Mr Bain said he was “delighted” with the outcome. He denied that having over 40% of votes cast against him and Mr Bateson was a resigning issue. “Absolutely not,” he said. “What we have got is the full support of the institutions.” One institution, 3i, voted with the rebels.

Mr Bain said “I deeply regret they called this EGM, but we’ve had the vote and now we must get on with running the business.” He banned the press because “it was a private meeting”.

Mr Holdsworth denied the bust-up proved psychometric testing did not work, advocating “a more thorough test of all non-executive directors”.

And from The Guardian Business News, by Simon Bowers, Tuesday, December 24, 2002:

‘A shareholder row at SHL, the loss-making psychometric tests company, ended yesterday in a vote to expel the company’s two founders from the board. Peter Saville, Roger Holdsworth and a third non-executive director, David Arkless, were forced out after 55% of shareholders gave their backing to the current management.

Mr Saville and Mr Holdsworth had led a campaign to overthrow the management, blaming it for £7m of write-downs in just over a year.

The rebels called yesterday’s shareholder meeting, at London law firm Barlow Lyde & Gilbert, appealing to investors to remove Chairman Neville Bain and chief executive John Bateson.

“Of those who voted, 41% voted to remove Mr Bain and Mr Bateson,” they said after the meeting. “We have been overwhelmed by the support that we received from virtually all SHL employee and ex-employee shareholders.”

SHL’s management, under former Post Office chairman Mr Bain, claims the company’s recent write-downs were the result of poor management under Mr Holdsworth and Mr Saville, combined with a sharp deterioration in the recruitment sector.

They claim that rationalising internet operations and sacking a number of psychologists at overseas divisions was essential to a consolidation programme.

Rebel shareholders, who were backed by recruitment firm Manpower, which holds a 7% stake, insisted Mr Bateson had brought about a “flight of intellectual capital” and had focused the business on an unworkable internet model which was too expensive.

Yesterday’s vote, which took place behind closed doors, came at the end of a week of meetings between institutions and rival shareholder camps. In the end, the majority of institutional investors, including Hermes and Fidelity, gave their backing to present management’.


The business of psychology is now very big business. This, I think, should always be remembered by practitioners when evaluating providers in the industry and the claims made for their solutions. I do wonder what Cattell would make of it all!

Stress Testing

In another recent edition of ‘The Economist’, a study on stress was reviewed. Anthony Porcelli and Mauricio Delgado at Rutgers looked at the financial risks people were likely to take when calm or stressed. Their findings explain much of the current economic crisis. The experiment involved students playing a gambling game. To induce stress, for part of the game half of the students had their dominant hand in very cold water. The results were described in terms of the difference between the analytical brain and the intuitive brain. The analytical brain is easily disrupted by outside stimuli, such as stress. The psychologists found that exposure to stress led participants to make riskier choices when trying to decide between taking a major or minor loss. The reverse was true with gains. Traders were in very unfamiliar territory during the financial crisis and were therefore more likely to take riskier decisions to avoid loss. Thus, much of the explanation for the financial crisis may lie not in economic policy but in the psychology of the money traders in times of the unknown.

This, I believe, is what makes I/O psychology such a fascinating discipline: it is so multi-faceted. The problems that we solve as I/O psychologists have real-world application and invariably involve a combined approach across disciplines to solve some of the great business and economic problems of our time.

Ipsative Tests: Psychometric Properties

In this final blog I want to look at the psychometric properties of ipsative measures, and at the evidence offered in support of ipsative tests.

Psychometric properties
As most of our readers are HR practitioners rather than statisticians, I will try to keep the psychometric critique relatively brief. However, the psychometric weaknesses of ipsative testing are well reviewed, and for those interested I strongly suggest a thorough read of Meade (2004). In essence, the critiques concern the factor structure of ipsative data and, as a corollary, the reliability of measurement.

Factor analysis of ipsative data is more complex. The way it was done in Saville and Willson’s article (1991) was, in my opinion, artificial, and to quote Barrett: “This (their) finding completely invalidates Saville and Willson’s (1991) and, by extension, Cronbach’s contention that a factor analysis can be reasonably implemented on ipsative data by simply dropping one score. The interpretation of factor analysis depends entirely on the weights of the variables after regression onto a number of underlying traits. Thus, unless the focus of a factor analysis was simply to determine the amount of variance accounted for by each factor, this procedure is quite insupportable. The choice of which scale to drop will dramatically affect the interpretation of the factor solution”.

In short, ipsative data does not lend itself well to factor analysis. Factor analysis, in turn, is the basis on which we determine construct validity (i.e. the basis for understanding the psychological phenomena we are hoping to measure). As a result, it is not surprising that the reliability of ipsative scales has consistently been shown to be lower than that of normative scales.
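To see why, here is a minimal sketch (simulated data, my own illustration, not from any of the cited papers) of what ipsatization does to the covariance matrix that a factor analysis must work from:

    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(size=(500, 6))  # 500 people, 6 normative scales

    # Ipsatize: remove each person's own mean across the six scales.
    ipsative = scores - scores.mean(axis=1, keepdims=True)

    cov = np.cov(ipsative, rowvar=False)
    print(f"scales: 6, covariance matrix rank: {np.linalg.matrix_rank(cov)}")  # 5

Every person’s ipsative scores sum to zero, so any one scale is a perfect linear function of the others and the covariance matrix is singular. A factor analysis can only proceed by dropping a scale, and, as Barrett notes above, which scale you drop changes the factor solution.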

In a famous article entitled ‘Spuriouser and Spuriouser: The Use of Ipsative Personality Tests’, Johnson, Wood, and Blinkhorn (1988) restated the arguments, on psychometric grounds, for the abandonment of ipsative questionnaire testing, and provided some empirical examples of the error-prone consequences of its use. This article was, perhaps, the strongest indictment of ipsative measurement until the more recent paper by Meade (2004).

Moreover, Hough and Ones (2001) make the issues very clear. The key issue is not reliability and factor analysis, or even what an ipsative test correlates with. You may be able to reliably produce results from ipsative questionnaires, but they are WITHIN-PERSON RANKS; thus, as soon as you compare two people’s results you are treading on dangerous ground. Between-person comparisons are necessary for selection whenever you have more than one candidate.
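A toy example (numbers invented by me, not from Hough and Ones) shows the danger:

    # Hypothetical normative standings of two candidates on four scales.
    a = [9, 8, 6, 4]  # candidate A: strong on every scale
    b = [4, 3, 2, 1]  # candidate B: weak on every scale

    def within_person_ranks(xs):
        # Rank each scale within the person (1 = that person's weakest scale).
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        ranks = [0] * len(xs)
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        return ranks

    print(within_person_ranks(a))  # [4, 3, 2, 1]
    print(within_person_ranks(b))  # [4, 3, 2, 1]

The two ipsative profiles are identical, yet A outscores B normatively on every scale: comparing the rank profiles tells a selector nothing about who is stronger.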

The Rebuttal
All of the rebuttals (to my knowledge) in defence of ipsative testing for use in selection come from one company, SHL. This is not surprising, given that SHL has developed ipsative tests which it hopes to sell for selection. Their line of reasoning, as is often the case, is based on a good story: that the tests are equally valid and difficult to fake.

Despite a lack of independent support, direct criticism, and a recent top-class paper using SHL data (Meade, 2004), it would be remiss not to tackle directly the points raised by Dave Bartram (SHL Director of Research). In essence, they rest on the main premise that the key difference is the number of scales. This has been critiqued thoroughly by Paul Barrett, and much of what is cited below comes from direct postings and conversations between Paul and me. The first key defence of ipsative testing was published by Dave Bartram in 1996, in his pre-SHL role as Professor at the University of Hull (unfortunately after Sean Hammond’s and my conference paper was given in January 1996). The paper reference and abstract follow: Bartram, D. (1996). The relationship between ipsatized and normative measures of personality. Journal of Occupational and Organizational Psychology, 69(1), 25-39.

Abstract: Presents a general expression for computing the relationships between normative scales and ipsative ones derived from them, based on the number of scales and the intercorrelations between the normative scales. The results obtained from various empirical and computer generated data sets were compared with those expected on the basis of the equations and a close correspondence was found. Expressions for computing the reliability of ipsatized scales and the reliability of ipsatized scale differences were also produced and the implications of these for profile analysis are discussed. It is noted that ipsatized measures are unreliable when the number of scales is less than about 10 or when the correlations between normative scales are greater than .30. This unreliability is increased by full ipsatization and by inequality of the variances of the normative scales from which the ipsatized scales are derived.

Now, this was a very well thought-out study, using computer-generated data (N=2000) which allowed normative data to be reconstructed as ipsative, thus permitting a direct ‘head-to-head’ comparison without worrying about confounding by social desirability. This paper really did put to rest the psychometric part of the debate on ipsative vs. normative measures. The reason every SHL employee does not have this paper indelibly stamped in their minds is that it contains several cautionary passages which do not mesh well with the sales message, one of which I quote below:

“These results show that ipsative and normative scales have a high degree of equivalence only when the normative scales are independent of one another [0.0 correlation between scales]. When there are correlations between the normative scales, the correlations between them and ipsative scales rapidly decrease. When the number of scales is large, reasonable levels of equivalence are only maintained for low levels of normative scale intercorrelation” (p. 30, Bartram, 1996).
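Bartram’s point can be reproduced numerically. The following is my own minimal reconstruction in the spirit of his computer-generated comparison (not his code or exact parameters): generate normative scales with a chosen inter-scale correlation, ipsatize them, and correlate each scale with its ipsatized counterpart.

    import numpy as np

    rng = np.random.default_rng(2)
    n_people, n_scales = 2000, 10

    for rho in (0.0, 0.3):
        # Normative scales sharing a common factor so that inter-scale r = rho.
        common = rng.normal(size=(n_people, 1))
        unique = rng.normal(size=(n_people, n_scales))
        normative = np.sqrt(rho) * common + np.sqrt(1 - rho) * unique

        # Ipsatize: subtract each person's mean across the scales.
        ipsative = normative - normative.mean(axis=1, keepdims=True)

        rs = [np.corrcoef(normative[:, j], ipsative[:, j])[0, 1]
              for j in range(n_scales)]
        print(f"inter-scale r = {rho:.1f}: normative-ipsative r ~ {np.mean(rs):.2f}")

With independent scales the normative and ipsative versions correlate at around .95; push the inter-scale correlation to .30 and the equivalence drops to roughly .79, exactly the pattern Bartram describes.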

Quite by chance (or maybe not!), the Barrett et al. (1996) paper looking at the OPQ Normative Concept Model analysis was published containing, on page 15, a histogram of the inter-scale correlations of the OPQ within a dataset of 2301 applicants. Of these 465 inter-scale correlations, 64 were greater than r=0.3 and 149 were greater than r=0.2. Obviously, the level of correlation between scales is low, but not 0.0.

The interesting feature of Bartram’s paper is that he shows that you can compute comparative ipsative scale reliabilities (albeit from a derived formula that uses the normative values to estimate ipsative values). It was left to Helen Baron (1996), formerly of SHL, to conclude: “However, for larger sets of scales (N~30) with low average intercorrelations, ipsative data seems to provide robust statistical results in reliability analysis, but not under factor analysis”. Thus, by her own admission, the factor structure of ipsative data is poor. This leaves the practitioner with little knowledge of what construct was indeed measured. This is compounded, of course, by the fact that the items responded to are different in every case!

Saville and Willson (1991) responded to criticisms by attempting to demonstrate that ipsative tests manifest equal, if not superior, validity to normative tests. Using a novel, if somewhat ill-specified, computer-generated dataset, they showed that under certain conditions ipsative and normative tests will yield equivalent psychometric parameters. In addition, they went on to show that, with certain real datasets, the statistical results expected from Johnson et al. (1988) were not observed. However, these conclusions were challenged by Cornwell and Dunlap (1994), who carried out a re-analysis of the Saville and Willson data and found little support for their claims. The reality is that gains in validity have not been shown; indeed, the scores on ipsative and normative measures are often cited as merely comparable (Bartram, 2006). So not only does the practitioner end up with a faulty measure, they do so for no comparative gain! Practical and robust are not mutually exclusive. It is a classic red herring to imply that those who take measurement seriously are just pie in the sky. The complete opposite is true: those interested in psychometrics are the people who want to see things done right so the discipline moves forward.

The Issues in Summary
The key issue is that you cannot practise unless you understand what you are using. To again quote Paul Barrett: “Yes, it is important to have a good bedside manner but this is secondary to knowing what medication to prescribe.”

Ipsative tools:
1. Are a within-person measure, to be used for individual counselling, not for comparisons across people
2. Have questionable psychometric properties
3. Are not resistant to faking
4. Have no demonstrable validity gains
5. Are in the main supported by only one company, which has a vested interest in determining their usefulness; their application is therefore more market driven than science driven.

We have a lot of psychological interventions prescribed by people who know little about what they are prescribing. At least with ipsative testing we know what the medication can be prescribed for. The application of ipsative testing, a within-person measure, to selection is ill-advised, and it is time this practice was eradicated once and for all, on the grounds that I/O psychology is truly a discipline guided by science and not by marketing whims.

And now, so that it is not merely ‘MY’ view, here are the references for those who want them:

Baron, H. (1996). Strengths and limitations of ipsative measurement. Journal of Occupational and Organizational Psychology, 69(1), 49-56.

Cattell, R.B. (1944). Psychological measurement: ipsative, normative, and interactive. Psychological Review, 51, 292-303.

Clemans, W.V. (1966). An analytic and empirical investigation of some properties of ipsative measures. Psychometric Monographs, No. 14.

Closs, S.J. (1976). Ipsative vs normative interpretation of test scores or ‘What do you mean by like?’. Bulletin of the British Psychological Society, 29, 228-299.

Cornwell, J.M. and Dunlap, W.P. (1994). On the questionable soundness of factoring ipsative data: a response to Saville and Willson. Journal of Occupational and Organizational Psychology, 67, 89-100.

Hicks, L.E. (1970). Some properties of ipsative, normative, and forced-choice normative measures. Psychological Bulletin, 74, 167-184.

Hough, L. and Furnham, A. (2003). Use of personality variables in work settings. In W. Borman, D.R. Ilgen, and R.J. Klimoski (eds), Handbook of Psychology, Volume 12: Industrial and Organizational Psychology (Chapter 5, pp. 77-106). New York: Wiley.

Hough, L. and Ones, D. (2001). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In N. Anderson, D. Ones, H. Sinangil, and C. Viswesvaran (eds), Handbook of Industrial, Work, and Organizational Psychology, Volume 1: Personnel Psychology (Chapter 12, pp. 233-267). New York: Wiley.

Johnson, C.E., Wood, R., and Blinkhorn, S.F. (1988). Spuriouser and spuriouser: the use of ipsative personality tests. Journal of Occupational Psychology, 61, 153-162.

Martin, B.A., Bowen, C., and Hunt, S. (2002). How effective are people at faking on personality questionnaires? Personality and Individual Differences, 32(2), 247-256.

Saville, P. and Willson, E. (1991). The reliability and validity of normative and ipsative approaches in the measurement of personality. Journal of Occupational Psychology, 64, 219-238.

Schmit, M.J. and Ryan, A.M. (1993). The big five in personnel selection: factor structure in applicant and non-applicant populations. Journal of Applied Psychology, 78(6), 966-974.