The Adaptive Skills and Behaviours Required to Succeed in Future Work Environments

There is a lot being said about the future of work, and what this means for the type of skills, attitudes, and behaviours we will require to succeed. With this future already upon us, it is important that we pick up our pace of change and look to build capability that helps us to adapt, thrive, and succeed within an ever-changing world. Best-selling author Jacob Morgan describes in his latest book ‘The Future of Work’ five trends shaping the future of work:

  1. New behaviours
  2. Technology
  3. Millennials
  4. Mobility
  5. Globalisation

These trends are bringing a dramatic shift in attitudes and ways of working; new behaviours, approaches, and workplace expectations.  Whilst many of us are sensing these rapid changes, we aren’t necessarily sure why these changes are happening, what they mean, or how they will impact us.

As Jacob Morgan says:

“The disruption of every industry is also causing a bit of unrest as people struggle to define where they fit or if they will be obsolete.  It’s forcing us to adapt and change to stay relevant while giving rise to new business models, new products, new companies, new behaviours, and new ways of simply existing in today’s world”.

So, the burning questions are:  what exactly do these changes look like for employees, managers, and organisations?  And, what skills, attitudes, and behaviours do we require to succeed?

What we do know is that modern employees are more self-directed, collaborative in their approach, and want to shape and define their own career paths instead of having them predefined for them.  They are continually seeking out learning opportunities that fit with their personal purpose and professional aspirations, and are looking for development opportunities that benefit them holistically as a ‘whole person’.  They seek the skills, confidence and healthy mind-set to challenge the status quo, to think on their feet, and to continually adapt within highly fluid and ever changing organisational environments.  They are looking to learn and develop emotional and social intelligence;  to work within increasingly networked communities;  to lead, collaborate, innovate and share.

Consistent with the above are five crucial behaviours, identified by Morgan, as being required by employees in the modern workplace:

  1. Self-Direction and Autonomy – to continually learn, and stay on top of important tasks within manager-less organisations
  2. Filter and Focus – to be able to manage the cognitive load associated with increasing amounts of pervasive information
  3. Embracing Change – to continually adapt to new working practices whilst demonstrating resilience and healthy mind-sets
  4. Comprehensive Communication Skills – to support collaborative work practices, and to communicate ideas and provide feedback succinctly
  5. Learning to Learn – to be willing to adopt a pro-learning mind-set; to step outside comfort zones, reflect, and make meaning of experiences.

Organisations also need to adapt to the future of work to support these trends and demands, and ensure they are attracting, developing, and retaining top talent. A good place to start is by fostering and embracing the principles of organisational learning. Peter Senge suggested in his book ‘The Fifth Discipline: The Art and Practice of the Learning Organization’ that for an organisation to remain competitive within the complex and volatile business environments in which we find ourselves operating, it must build its capacity for continual transformation. This involves developing cultures that:

  • Encourage and support employees in their pursuit of personal mastery (the discipline of continually clarifying and deepening our personal vision, and seeing reality objectively)
  • Encourage employees to challenge ingrained assumptions and mental models
  • Foster genuine commitment and enrolment through shared visions.

Here at OPRA we are developing a carefully selected set of best-of-breed, soft-skill learning and development programmes to help individuals and organisations embrace these current and future trends. Our programmes are designed to equip professionals with the emotional intelligence, healthy thinking, learning agility, collaborative team behaviours, and motivation required to demonstrate exceptional performance within the modern workplace. We have grounded our programmes in the principles of positive psychology, and in an understanding that REAL learning and engagement only occur when self-awareness, participation, and a tangible sense of progress are present. All our programmes are therefore designed to:

  • Develop self-insight and raise awareness of individual and collective strengths
  • Utilise proven research based content, delivered by expert and accredited practitioners
  • Provide access to on-going professional coaching opportunities to further deepen learning
  • Incorporate social learning methodologies to encourage and enable collaboration and sharing
  • Provide applied on-the-job challenges and reflection to embed and sustain behavioural changes.

Watch this space for further announcements about OPRA Develop over the coming months. In the meantime, if you would like to discuss how OPRA can support your learning and development with proven, research-based soft-skill development programmes, then please contact your local OPRA office:

Wellington: 04 499 2884 or Wellington@opragroup.com

Auckland: 09 358 3233 or Auckland@opragroup.com

Christchurch: 03 379 7377 or Christchurch@opragroup.com

Australia: +61 2 4044 0450 or support@beilbyopragroup.co.au

Singapore: +65 3152 5720 or Singapore@opragroup.com

Tips to spot a myth

Well there it is: another year down and another year to look forward to. This brings to an end this series on some of the myths of our industry and I wanted to finish by summarising some guidelines on how to become more critical about i/o research and the conclusions drawn from our discipline.

Our discipline is not all mythology, as shown in some of my recent posts, such as those on the effectiveness of training and the value of personality testing. On the contrary, there is a growing body of findings that show what works, what doesn’t, and why. However, claims move from fact to fiction when commercialisation and academic reputation take over.

With this in mind, those attempting to apply research need a simple way to test the soundness of what they are reading. Here are my top 7 tips for spotting myths:

  1. Who has done the research? There are many vested interests in psychology. These range from commercial firms touting the next big thing through to academics defending a position they have built for themselves. When you understand a person’s starting position, you will read what they write with open eyes. When evaluating any claim, ask yourself: ‘What is their angle, and do they have anything to gain from such a claim? Are they presenting a balanced argument, or reporting commercial findings in a fair manner?’
  2. Are the claims too good to be true? Dealing with human behaviour is a messy business. Single variables, on a good day with the wind blowing in the right direction, account for roughly 10% of the variability (e.g. correlations of r = 0.3) in a given outcome (e.g. a personality trait predicting job performance). Unfortunately, the public are unaware of this and have expectations around prediction that are simply unrealistic. These expectations are then played on by marketing companies that make claims such as ‘90% accuracy’. Such claims are outrageous, and a sure sign that you are once again in the clutches of a myth.
  3. When looking at applied studies, does the research design account for moderator variables? Psychological research often limits its usefulness by failing to account for moderator variables. Too often we get simple correlations between variables without recognising that the entire finding is eroded unless certain conditions are met, or if another variable enters the scene.
  4. Is the research discussed as part of a system? Building on from the previous point, research that does not discuss its findings as part of a wider eco-system is invariably limited. As scientist-practitioners, our work does not exist in a vacuum. It is part of a complex set of ever-changing, intertwining variables that together produce an outcome. Selection leads to on-boarding, leads to training, leads to performance management, and so on. Research needs to identify this system and report findings accordingly.
  5. Are the results supported by logic as well as numbers? Nothing can blind the reader of i/o science like numbers. As the sophistication of mathematical justification in our discipline has grown, the usefulness of many of the studies has dropped. Psychology is as much a philosophy as a science, and logic is as important as numbers in demonstrating an evidence base. Look for studies that follow the laws of logic, where hypotheses are not only supported but alternative theories dismissed. Look for studies that are parsimonious in their explanation, but not so simplistic that they fail to account for the underlying complexity of human behaviour.
  6. Are the results practically meaningful? Don’t be confused by statistical significance. This simply means we have a certain confidence that a finding was not due to chance, and that if the study is repeated we are likely to get a similar result. It tells us nothing of the practical significance of the finding (i.e. How useful is this finding? How do I use it?). Too often I see tiny but statistically significant findings touted as a ‘breakthrough’. The reality is that the finding is so small as to be meaningless unless applied to huge samples.
  7. Be critical first, acquiesce second! If I have one piece of advice, it is to be critical first and accept nothing until convinced. Don’t accept anything because of the speaker, the company, or the numbers. Instead, make anyone and everyone convince you. How is this done? Ask why. Ask what. Ask how. If you do nothing besides taking this stance as part of a critical review, it will help to make you a far more effective user of research, and a far better i/o psychologist or HR professional.
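The arithmetic behind tips 2 and 6 can be sketched in a few lines. This is a minimal illustration: the variance-explained rule (r squared) and the standard t-approximation for testing a correlation are textbook statistics, while the sample sizes and the tiny correlation of 0.05 are invented for demonstration:

```python
import math

# Tip 2: a correlation's predictive power is its square.
r = 0.3
variance_explained = r ** 2  # 0.09, i.e. roughly 10% of the variability
print(f"r = {r} explains {variance_explained:.0%} of the variance")

# Tip 6: with a big enough sample, even a trivial correlation becomes
# "statistically significant". Standard approximation for testing r:
# t = r * sqrt((n - 2) / (1 - r^2)).
def t_statistic(r, n):
    return r * math.sqrt((n - 2) / (1 - r ** 2))

tiny_r = 0.05  # explains only 0.25% of the variance
for n in (100, 10_000):
    t = t_statistic(tiny_r, n)
    significant = abs(t) > 1.96  # roughly p < .05, two-tailed
    print(f"n={n}: t={t:.2f}, significant={significant}")
```

With 100 people the tiny correlation is nowhere near significance; with 10,000 it sails past the threshold while remaining practically meaningless, which is exactly the distinction tip 6 draws.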

To all those who have read and enjoyed this blog over the year, we at OPRA thank you. As a company we are passionate about i/o, warts and all, and it is a great privilege to contribute to the dialogue that challenges our discipline to be all that it can be. Have a great 2015 and we look forward to catching up with you offline and online over the year.

The Myth that Training is an Art not a Science

For many, training is seen as an art, and a black art at that, rather than a science. The idea that there is actually a science to training, and a methodology to be followed to ensure its effectiveness, is anathema to those who view their own training as some special gift that they alone possess. Much like the claim in the psychometric industry that a single test is the holy grail of testing, these outrageous training claims are myths that simply distract from the truth. On the contrary, training is an area that is now well researched, and there is indeed a science to making training work.

Building on from their seminal work on training for team effectiveness, Salas and his team have produced an excellent paper outlining the science of training (Salas, E., Tannenbaum, S.I., Kraiger, K., & Smith-Jentsch, K.A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74-101).

The paper is a free download and is one of those must-haves for all practitioners. Firstly, the paper covers the various meta-analyses that have been conducted on training, and notes that training has been found to be effective for everything from managerial training and managerial leadership development through to behavioural modelling training.

Moreover, the paper provides clear guidelines as to how to enhance training effectiveness. Building on the research, the guidelines for practitioners include:

  1. Pre-training: training needs analysis
    1. Analysis of the job
    2. Analysis of the organisation
    3. Analysis of the person
  2. Pre-training: communication strategy
    1. Notify attendees
    2. Notify supervisors
  3. During-training interventions
    1. Creating the learner mind-set
    2. Following appropriate instructional principles
    3. Using technology wisely
  4. Post-training
    1. Ensure training transfer
    2. Evaluation methodology

The paper in many ways is what our discipline is all about: a strong research base, drawing together research from multiple sources, with useful guidance provided for the practitioner. This is applied psychology, and this is the scientist-practitioner model in practice.

As noted by Paul Thayer in his editorial to the paper:

“… There is a system and a science to guide organizations of all types in developing and/or adopting training to help achieve organizational goals. Salas et al. do an excellent job of summarizing what is known and providing concrete steps to ensure that valuable dollars will be spent on training that will improve performance and aid in the achievement of those goals. In addition, they provide a rich bibliography that will assist anyone needing more information as to how to implement any or all the steps to provide effective training. Further, they raise important questions that organizational leaders and policymakers should ask before investing in any training program or technology”.

There are many myths that pervade business psychology. Unfortunately these often result in the baby being thrown out with the bath water, and in people dismissing the discipline as a whole. The key for any discerning HR professional or i/o psychologist is to be able to tell myth from reality, and to have a simple framework, or set of checkpoints, for being a discerning reader of research. More on this tomorrow in the last blog for the year.

The myth that training to improve team functioning doesn’t work

Yesterday we noted that there was little support for the Belbin team model. The idea that there is a prescribed model for a team is simply not supported and the Belbin model does not improve organisational effectiveness. Taking this into consideration, does training to improve team functionality actually make a difference?

I’m pleased to note that training to improve team performance is an area that is well researched, and the research is generally positive. Not only do interventions appear to improve team effectiveness, we also have an idea, through research, of what moderates the success of team interventions.

In terms of the research around team training, the seminal work in the area was a meta-analysis conducted in 2008. For those not from a research background, a meta-analysis can be thought of as an analysis of analyses. The researchers bring together various studies and re-analyse the data, gaining greater confidence in the results through the larger combined sample size. While the technique has its critics and may lead to statistical overestimates, it is one of the better methods we have for establishing an evidence base for generalisable trends in applied research.
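The core mechanics of a meta-analysis can be sketched very simply: effect sizes from several studies are pooled, weighting each by its sample size, so that larger studies count for more. Real meta-analyses also correct for artefacts such as sampling error and measurement unreliability; the study values below are invented purely for illustration:

```python
# Each entry is one (invented) study: an effect size d and its sample size n.
studies = [
    {"d": 0.62, "n": 40},    # small study, large effect
    {"d": 0.35, "n": 220},   # larger study, moderate effect
    {"d": 0.48, "n": 95},
]

# Pool the effects, weighting each study by its sample size.
total_n = sum(s["n"] for s in studies)
pooled_d = sum(s["d"] * s["n"] for s in studies) / total_n
print(f"sample-weighted mean effect size: d = {pooled_d:.2f}")
```

Note how the small study's large effect is pulled back towards the bigger studies' estimates: that damping of unrepresentative results is exactly why pooling gives us more confidence than any single study.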

The team training effectiveness meta-analysis was extremely thorough in examining both outcomes and moderators. A range of outcomes were assessed, including:

  1. Cognitive outcomes predominantly consisted of declarative knowledge gains.
  2. Team member affective outcomes included socialisation, trust and confidence in team members’ ability and attitudes concerning the perceived effectiveness of team communication and coordination processes.
  3. Team processes included behavioural measures of communication, coordination, strategy development, self-correction, assertiveness, decision making and situation assessment.
  4. Team performance integrated quantity, quality, accuracy, efficiency and effectiveness outcomes.

Moderator variables included:

  1. Training content (taskwork, teamwork, mixed)
  2. Team stability (intact, ad hoc)
  3. Team size (large, medium, small)

While a blog post is not sufficient to explore the research in depth, suffice it to say that moderate to strong positive outcomes were found for all four outcome types. Team process appears to be the most malleable. Training teams to communicate better, avoid groupthink, make effective decisions, and think strategically is likely to be an investment that delivers returns for organisations. Training to improve affective outcomes, such as trust and confidence in team members, appears less effective. This was especially the case for large teams.

Aside from team size, the results were moderated by team stability with well-established teams responding better to training than ad hoc teams. Training content had limited effect on the outcomes of the training with both task work and team work oriented interventions producing positive results.

The results of this meta-analysis are encouraging for i/o psychology. Team effectiveness is an area where there is a strong research basis for intervention and where intervention is likely to have a positive impact. This is an area where the scientist-practitioner model that is central to our discipline appears to be alive and well.  We have interventions that are well researched and have some understanding of the levels of effectiveness taking into account other variables. Does this lead to science of training? Are there principles we can take from the literature that can be applied to make training effective? Or is training an art and not a science? This is the question for tomorrow.

The myth of team models (Belbin)

In yesterday’s blog we discussed the power of two and the myth of the single star innovator. The follow on from this discussion is naturally: ‘If two is better than one, a team is surely better than two’. Unfortunately, the literature is far less supportive of this idea.

The most pervasive model of team work, especially in the UK, is the idea of the Belbin team. For those not aware, the Belbin model is defined by 9 supposed team types in part defined by orientation to the people side of a task or the thing/doing side of a task. The idea is that teams operate better when these various positions are fulfilled.

The assumptions behind the Belbin team roles don’t stack up to the hype. Firstly, the psychometric properties of the model have been found wanting (Furnham, A., Steele, H., & Pendleton, D. (1993). A psychometric assessment of the Belbin Team-Role Self-Perception Inventory. Journal of Occupational and Organizational Psychology, 66, 245-257; Fisher, S.G., Macrosson, W.D.K., & Sharp, G. (1996). Further evidence concerning the Belbin Team Role Self-Perception Inventory. Personnel Review, 25, 61-67). Research indicates that the model lacks the proposed factor structure and offers little beyond what a standard personality tool would provide in terms of how people would like to work.

In essence, we get the same preferences by simply looking at one’s personality but with the added advantage of a replicable psychological model. While the Belbin model may be useful as a descriptive model, this is different to what one often wants when thinking about such things psychometrically.

Perhaps more importantly, the relationship between the model and actual job performance is weak, to say the least (Wouter van Walbeek, R.B., & Maur, W. (2013). Belbin role diversity and team performance: is there a relationship? Journal of Management Development, 32, 901-913). There is no evidence that this supposed role diversity aids team performance. Even leaders under the model failed to demonstrate improved performance.

So what is the optimal size for a team, and what are the team roles that need to be filled? The most accurate answer to this question is ‘it depends’ (the details are covered well in Wikipedia’s description of ‘team’).

Like much of i/o psychology, there are no simple answers, and the only people who ever prescribe simple answers are those who have something to sell. Solving real-world problems – such as finding the optimal team size for a given organisation – requires an analysis of the tasks, time frames for completion, competing demands on individuals, the competence and willingness of the team, and trainability, to name but a few variables. Ours is an applied discipline, and what is required is the application of knowledge inside a given system to find individual solutions that work. Not surprisingly, this applies equally to our work around teams.

I want to make the point that a team is distinct from a ‘group’, and this simple point is often overlooked by practitioners. More often than not, when I’m asked to run a ‘team workshop’ it is to help a group of employees who know their jobs well but need to learn how to get along. To describe them as a ‘team’ is to miss the point. These groups tend to comprise people with individual differences who need techniques and models to understand each other better, get along, and harness each other’s strengths and weaknesses. Ironically, this type of intervention is what many team interventions consist of. Do these interventions work? This is the topic for tomorrow.

Myths about Teams and Stars – The Myth of the Single Star

I’m a couple of blogs behind for the year. While this is indicative of a busy and successful year at OPRA, it is no excuse for not completing the 12 part series on myths for 2014. So with a week’s holiday, and 5 myths to go, what better time to finish this year’s topic for the OPRA blog? In good scientific fashion this also provides a royal opportunity to test whether a series of blogs over a week is more effective than one a month.

A topic that has many permutations in respect of myths is that of teams and stars. People love the idea of teams, but the literature and research in this space is less complimentary. In this series of posts I want to look at the work of teams from both a practice and a literature perspective, and try to separate myth from reality.

To begin, I want to look at the anti-hero of the team, namely the notion of the star or sole genius. This is pervasive in modern business culture with the likes of Branson, Jobs, and Trump; people perceived as the sole innovators behind the creativity that defined the businesses they are associated with. This is not to say that these people necessarily endorsed the idea that they themselves were the be-all and end-all. Rather, the common myth purported by society is that the company’s success is mainly attributable to a single individual.

The idea that success can be attributed to one person is not borne out in either popular or academic research. A recent book highlighted this issue through what its authors term the ‘Power of Two’:

The book examines the process of creativity, noting why two is the magic number for the creative process to realise returns. In doing so, it covers the role of serendipity that lies behind much success (a point so often glossed over in the literature), i.e. the pair have to meet! They need to have differences that combine to form a single powerful entity. They must work as a pair but enjoy enough distance and role separation to cultivate distinct ideas. In short, it is not the individual that creates success but the individual and their sidekick that achieve optimal results.

Evidence for the power of two is also borne out in academic literature. Business decisions are invariably preceded by a decision to act, and when it comes to decision making the power of two is again apparent.

In a 2012 article published in Science (Koriat, A. (2012). ‘When two heads are better than one’. Science, 336(6079), 360-362), evidence was found that two-person decision making is superior to that of the individual. While I will not go into the study in depth, the key is the ability of each individual in the dyad to communicate their confidence in judgements freely (i.e. a truly equal playing field). Thus, where the dyad falls down is when one person’s confidence overpowers the pair.

This study builds on earlier work that likewise found that the benefit of the pair comes from the ability to express confidence in decision making freely. The key outcome is thus that to harness the power of two in decision making, there should be similar levels of competence and the ability to freely express confidence. Once again this shows the inherently multi-faceted nature of psychological research, as this will invariably involve having people of similar levels of self-esteem, emotional intelligence, etc. for this magical effect to be optimised.
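The logic of the dyad result can be illustrated with a toy simulation: two equally competent judges each make a binary call and report a confidence, and the pair defers to whoever is more confident. When confidence tracks correctness, the dyad beats a lone judge. All parameters here are invented for illustration and are not taken from Koriat's study:

```python
import random

random.seed(1)

P_CORRECT = 0.70    # each individual's accuracy (illustrative)
CONF_BOOST = 0.30   # correct judgements tend to come with more confidence
TRIALS = 20_000

def judge():
    # One judgement: whether it is correct, and how confident it feels.
    correct = random.random() < P_CORRECT
    confidence = random.random() + (CONF_BOOST if correct else 0.0)
    return correct, confidence

solo_hits = dyad_hits = 0
for _ in range(TRIALS):
    a = judge()
    b = judge()
    solo_hits += a[0]  # a lone judge's running accuracy
    # The dyad goes with the more confident member's answer.
    dyad_hits += max(a, b, key=lambda j: j[1])[0]

print(f"solo accuracy: {solo_hits / TRIALS:.3f}")
print(f"dyad accuracy: {dyad_hits / TRIALS:.3f}")
```

If the confidence boost is set to zero (confidence no longer tracks accuracy), the dyad's advantage disappears, which mirrors the paper's point that free and honest expression of confidence is what makes two heads better than one.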

So if one is not the answer and two is clearly better, what happens when team size increases? More on this tomorrow.

The Myth that Criterion-Related Validity is a Simple Correlation between Test Score and Work Outcome

This is a myth that can be dealt with relatively simply: criterion validity is far more than the simple correlations that are found in technical manuals. Validity in this sense is more appropriately described as whether an assessment can deliver a proposed outcome in a given setting with a given group. Criterion validity thus asks: ‘does this test predict some real-world outcome in a real-world setting?’

Assessments can add value, as discussed last month, but we need to think more deeply about criterion-related validity if this value is to be demonstrated more effectively. Criterion validity is too often determined by correlating a scale on a test (e.g. extroversion) with an outcome (e.g. training). The problem is that neither the scale score nor the outcome exists in a vacuum. Both are sub-parts of greater systems (i.e. both consist of multiple variables). In the case of the test, the scale score does not stand alone. Rather, it is one scale among many that are used to better understand a person’s psychological space (e.g. one of the Big Five scales). Any work outcome is the sum total of a system working together. Outcomes are likely to be impacted by variables such as the team a person is working in, the environmental context (both micro and macro), what they are reinforced for, and so on. In a normal research design these aspects are controlled for, but when it comes to the criterion validity correlations reported by test publishers this is unlikely to be the case.
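The system point can be made concrete with synthetic data: a single pooled correlation can look respectable while masking the fact that the test only predicts performance under certain conditions. The moderator below (supportive vs unsupportive team environment) and all the numbers are invented for illustration:

```python
import math
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    # Plain Pearson correlation, as a technical manual would report it.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = [random.gauss(0, 1) for _ in range(400)]

# In supportive teams the trait translates into performance...
supportive = [(s, 0.8 * s + random.gauss(0, 1)) for s in scores[:200]]
# ...in unsupportive teams it does not.
unsupportive = [(s, random.gauss(0, 1)) for s in scores[200:]]

pooled = supportive + unsupportive
r_pooled = pearson_r([s for s, _ in pooled], [p for _, p in pooled])
r_unsupportive = pearson_r([s for s, _ in unsupportive],
                           [p for _, p in unsupportive])

print(f"pooled r = {r_pooled:.2f}")                 # looks like a usable predictor
print(f"unsupportive-team r = {r_unsupportive:.2f}")  # the claim evaporates
```

A practitioner shown only the pooled figure would buy the test for every team; the moderator breakdown shows where that money would be wasted.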

When it comes to criterion validity, we are very much in the dark as to how psychological variables impact work outcomes in the real world, despite claims to know otherwise. As an example, let’s consider the variable of conscientiousness. The test publisher research tells us that the higher a person’s conscientiousness, the better they are likely to perform on the job. Common sense would tell us, however, that people who are excessively conscientious may not perform well, due to their need to achieve a level of perfection that detracts from timely delivery. Not surprisingly, recent research does not support the idea of a linear correlation, finding that for many traits too much of the trait is detrimental (Le, H., Oh, I-S., Robbins, S.B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good thing: Curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 96(1), 113-133).
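The curvilinear point has a striking consequence for technical-manual statistics: a perfect inverted-U relationship between a trait and performance can produce a linear correlation of almost exactly zero. The data below are invented to make the point:

```python
import math
import statistics

def pearson_r(xs, ys):
    # Plain Pearson correlation, as a technical manual would report it.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Trait scores from -3 to +3 SD; performance peaks at the midpoint and
# falls away on both sides (too little OR too much is detrimental).
trait = [x / 10 for x in range(-30, 31)]
performance = [-(t ** 2) for t in trait]

r_linear = pearson_r(trait, performance)
print(f"linear r = {r_linear:.3f}")  # near zero despite a perfect relationship
```

Here the trait determines performance completely, yet the simple correlation reports nothing at all, which is exactly why one-number validity coefficients can mislead.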

This is supported by work I was involved in with Dr. Paul Wood, which showed that intelligence and conscientiousness may be negatively correlated in certain circumstances, indicating that there are multiple ways of completing a task to the level of proficiency required (Intelligence Compensation Theory: A Critical Examination of the Negative Relationship Between Conscientiousness and Fluid and Crystallised Intelligence. The Australian and New Zealand Journal of Organisational Psychology, 2, August 2009, pp. 19-29). The problem that both studies highlight is that we are looking at the concept of criterion validity in too reductionist a manner. These simple one-to-one correlations do not represent validity as the practitioner would think of the term (‘is this going to help me select better?’). That question cannot be answered by such correlations, because it requires thinking about the interaction between psychological variables and the unique context in which the test will be applied.

To understand how this view of validity has become an accepted norm, one must look to the various players in the field. As is often the case, a reductionist view of validity stems from associations such as the BPS, which have simplified the concept of validity to suit their requirements. Test publishers are then forced to adhere to this, and clamour over each other to produce tables of validity data. Practitioners, in turn, come to understand validity within this paradigm. To add insult to injury, the criterion of quality becomes having as many of these largely meaningless validity studies as possible, further entrenching this definition of validity. The fact that a closer look at these studies shows validity correlation coefficients going off in all sorts of directions is seemingly lost, or deemed irrelevant!

The solution to this nonsense is to change the way we think about criterion validity. We need to take a more holistic, thorough, and system-based approach to answer the real questions practitioners have. This would incorporate both qualitative and quantitative approaches, and is perhaps best captured in the practice of evaluation, which takes this approach seriously: http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research.

Finally, for any of this to happen, the criteria used to evaluate tests by the likes of the BPS need to change. Without this change, test publishers cannot adopt alternative practices, as their tests would not be deemed ‘up to standard’. So alas, I think we may be stuck with this myth for a while yet.

Attacking the Myth That Personality Tests Don’t Add Value

For this month’s myth, the last on the topic of psychometrics, I have taken a slightly different approach. I’m coming out in defence of personality tools, when they are used correctly and understood in the right context. Rather than reinvent the wheel in this regard, I have chosen to highlight what I believe to be a very reasoned article on the topic in Forbes by Tomas Chamorro-Premuzic.

Before posting a direct link to the article, I want to set the context for the value in psychometrics. For me, it is based on 5 key points:


  1. Predicting human behaviour is difficult: The psychometric industry often oversteps the mark with the levels of prediction it claims to have. Assessments are not crystal balls, and the search for the greatest predictive tool, easily generalisable across multiple contexts, is futile. It does not follow, however, that because human beings are complex, assessments have no application, or that understanding a little more about a person’s behavioural preferences through a framework of personality has no value. On the contrary, it is valuable for the very reason that human beings are complex: more information on individual differences, and frameworks to help us conceptualise behavioural patterns, adds value to the people decisions we need to make. Psychometric tools provide a framework for understanding personality, and a simple, relative measurement model to assist decision-making.


  2. Human beings have free-will: It never ceases to amaze me when I meet people who are sycophantic in their devotion to a particular assessment tool. It is as if they choose to ignore the concept of free-will. Behaviour will inevitably change across situations and with different reinforcers; this is so inherent that it needs no further explanation. What psychometric tools can do, however, is estimate the likelihood of behavioural change and the preference for behaviour. The assessment does not supersede free-will, but rather helps us to understand a little better how free-will is likely to be displayed.

 

  3. Lying, or distortion, is a problem for any assessment method: Lying is something humans often do! A common argument against personality tools is that people may present themselves in an overly positive light. It should be noted that the same criticism can be levelled at any assessment methodology, from interviews to CVs. It affects many dimensions of life, from employment to those hoping to meet Mr or Ms Right via an online dating site. Quality personality tools attempt to mitigate this issue with response-style indicators such as measures of social desirability, central tendency, and infrequency.

  4. Behaviour is an interaction between the situation and preference: Much like the comment on free will, the situation should never be ignored when attempting to understand behaviour. Personality tests provide us with part of the puzzle, and in doing so they help us understand how someone is likely to behave. The key word in that sentence is ‘likely’, and how likely depends on the strength of the behavioural preference and the situation.

  5. Personality assessments are a simple, coherent and quick method for shedding light on human complexity: The bulk of personality tools are used for recruitment. When recruiting, we need to make an expensive decision on limited information and in a short timeframe, which necessitates looking at all feasible ways of making an informed judgement. At its most basic, the instrument is a collection of items clustered along psychometric principles, with a degree of reliability over time and internal consistency, thus giving meaning to a wider trait. A person’s responses are then compared to those of others who have taken the test. Assuming the norm group is relevant, up to date, and has adequate spread, this gives us an indication of the person’s relative behavioural preference against a comparison group of interest. That information is then used, together with other information collected, to make inferences about likely behaviour. That is the sum total of the process. For argument’s sake, the alternative would be to say that human behaviour is all too complex and we should operate without asking any questions at all. That is equally untenable.
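
The comparison step in point 5 can be sketched numerically. The function below is a minimal illustration, not any real instrument’s scoring: the scale, norm mean and SD are made up, the sten conversion uses the conventional sten = 2z + 5.5 banding, and the percentile assumes the norm scores are roughly normally distributed.

```python
from statistics import NormalDist

def relative_standing(raw_score, norm_mean, norm_sd):
    """Locate a raw scale score relative to a norm group."""
    z = (raw_score - norm_mean) / norm_sd        # standardised score
    sten = min(10, max(1, round(2 * z + 5.5)))   # standard-ten band, clipped to 1..10
    percentile = NormalDist().cdf(z) * 100       # % of the norm group scoring lower
    return z, sten, percentile

# Hypothetical norm group for a conscientiousness scale: mean 24, SD 6
z, sten, pct = relative_standing(raw_score=30, norm_mean=24, norm_sd=6)
print(f"z = {z:.2f}, sten = {sten}, percentile = {pct:.0f}")
```

The same raw score of 30 would earn a very different standing against a norm group with a different mean or spread, which is exactly why the relevance and make-up of the norm matters.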

The problem is not that personality tests have no value, but that practitioners overestimate their value and predictive power. Psychometric test providers may also confuse the issue by over-promoting their assessments, marketing their uniqueness, and extolling the magical powers of their special item set. When understood in the right context, personality assessment can add value. When used as part of a complete system, interlinking recruitment with training and performance management, a deeper understanding of how personality impacts company performance can result. I agree that there are some tools that do not meet minimum psychometric standards and as such have limited usefulness, but for those assessments that simply attempt to ‘do what they say on the tin’, the problem lies not with the assessment but with the practice of the users and their unrealistic expectations.

I strongly encourage you to read this short piece on the seven common, but irrational, reasons for hating personality tests: http://www.forbes.com/sites/tomaspremuzic/2014/05/12/seven-common-but-irrational-reasons-for-hating-personality-tests/

The Myth of Impartiality (Part 2)

Last month, I discussed the issue of impartiality with reference to universities and research. This month, I want to look at the myth of impartiality from the perspective of the users and suppliers of psychometrics. With respect to users, my focus is HR professionals and recruiters. The suppliers I refer to are the plethora of assessment suppliers from around the world.

Practitioners

Much of the credibility of psychometric tests is assumed through their application. The general public’s interaction with psychometric assessment comes primarily through the job application process. The implication is that those responsible for those processes must be skilled practitioners in their field, with a highly justifiable reason both for using psychometrics at all and for choosing a given assessment. This gives rise to the myth of impartiality in reference to practitioners.

The practitioner is often reliant on test providers as their source of information on psychometrics. However, this is akin to asking a financial advisor, who is selling a particular investment, to describe the principles of investment to you! It is important to recall that those who are psychologically trained are subject to issues of impartiality (as discussed in last month’s blog post).

Research has indicated that practitioners’ beliefs about predictive power do not marry with reality https://www2.bc.edu/~jonescq/articles/rynes_AME_2002.pdf. While this may change over time, practitioners who lack the skills to read the statistics and understand how the tools are applied are unaware of their own blind spots when it comes to testing.

Examples I have witnessed include:

  • Assuming that a correlation can be read as a percentage (%). For example, a common misconception is that a correlation of 0.3 between conscientiousness and job performance accounts for 30% of the variability, when the figure is in fact 0.3² = 9%.
  • Talking about the validity of a test when it is not so much the test that is ‘valid’ as the scales inside the test that correlate with a given outcome.
  • Not understanding that the correlation they cite as evidence for the value of the test is not linear. According to the research, the extreme ends of the scale are best for predictive purposes, yet most practitioners will warn of the problems with extremes. The contradiction between application and research is clear.
  • Assuming a quoted validity is applicable to their organisation. Validity varies greatly between jobs, organisations, and time, to name only three variables. To treat a given validity as ‘applicable to your organisation’ is often a big leap in logic.
  • Validity is ultimately more than a number on a page. It is a system of interacting parts to produce an outcome. To simplify it to a number makes the commonly relied upon concept near redundant.
  • While many practitioners ask about the size of a norm group, very few ask about its makeup.
  • Those that do ask about the makeup of the norm group often fail to ask about the spread of the data.
  • A classic example is the request for industry-based norms. People fail to understand that such norms have inherent problems, such as the restriction of range that comes from taking a more homogeneous sample. This is highly apparent when looking at industry norms for cognitive ability.
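
The first and last points in this list can be made concrete with a small simulation; the numbers are invented purely for illustration (a true ability-to-performance correlation of 0.5, and an ‘industry’ sample pre-selected on ability):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated population: ability predicts performance at r = 0.5
ability = rng.standard_normal(n)
performance = 0.5 * ability + np.sqrt(1 - 0.5 ** 2) * rng.standard_normal(n)

r_full = np.corrcoef(ability, performance)[0, 1]

# An "industry norm" drawn only from people already selected on ability
selected = ability > 1.0
r_restricted = np.corrcoef(ability[selected], performance[selected])[0, 1]

print(f"full-range r = {r_full:.2f}, variance explained = {r_full ** 2:.0%}")
print(f"restricted-range r = {r_restricted:.2f}")
```

A correlation of 0.5 explains 25% of the variance, not 50%, and in the pre-selected sample the same underlying relationship produces a noticeably smaller correlation, which is the restriction-of-range problem behind industry-based norms.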

A practitioner may be influenced by a product’s branding, rather than its substance, if critical evaluation tools are not used to evaluate the assessment more fully. If a tool is branded a ‘leadership tool’, then it is presumed to measure what’s needed for leadership. If the assessment claims to ‘predict psychopathic behaviour at work’, then it is assumed that it must do so. The practitioner is convinced that the right tool has been found for the job, and the brand may even justify its high cost.

Rather than being impartial, practitioners tend to use what they are comfortable with and endorse it accordingly. Often they don’t have full knowledge of the options available to them http://wolfweb.unr.edu/homepage/ystedham/fulltext2%20467.pdf, and testing may become a tick-box service that is transactional rather than strategic in nature. Many HR professionals are so busy with a multitude of HR concerns that they do not have the time to turn psychometrics into a strategic solution, nor do they investigate validity in a more sophisticated way. Ironically, this elevates the aura of the psychometric tool, and the myth of impartiality continues.

The solution to this problem is relatively simple. First, HR professionals who use assessments need to attend some basic training that covers the myths and realities of psychometric testing. I’m proud to say that OPRA has been running such courses, together with thought pieces like this, since the late 1990s. The point, however, is not to attend an OPRA course but to attend any course that takes a critical look at the application of psychometrics. Second, understand the limitations of testing and opt for a simple, broad-brush measure of personality and cognitive ability that is cost-effective for the organisation, without giving the test more credibility than it is worth. Finally, adopt a more critical outlook on testing that enables one to be truly impartial.

Psychometric Test Providers

The final area of impartiality I want to look at is the test providers themselves; it is only fitting that I close with a critical review of the industry I’m entrenched in. The reality is that any claim to impartiality by someone who is selling a solution should be regarded with caution. Many people do not realise how lucrative the testing industry has become, as demonstrated by recent acquisitions. For example, in recent times we have seen the $660 million acquisition of SHL by CEB http://ir.executiveboard.com/phoenix.zhtml?c=113226&p=irol-newsArticle&ID=1711430&highlight= or Wiley’s purchase of Inscape http://www.disclearningsolutions.com/wiley-acquires-inscape-a-leading-provider-of-disc-based-learning-solutions/, and more recently Profiles International http://www.themiddlemarket.com/news/john-wiley-pays-51-million-for-profiles-international-248848-1.html

It would be naïve to think that such businesses could be truly impartial. The fact is that testing companies build and hold market positions much like firms in other industries, such as soft drinks or food. The result is that innovation ceases and marketing takes over.

“No technology of which we are aware - computers, telecommunications, televisions, and so on - has shown the kind of ideational stagnation that has characterized the testing industry. Why? Because in other industries, those who do not innovate do not survive. In the testing industry, the opposite appears to be the case. Like Rocky I, Rocky II, Rocky III, and so on, the testing industry provides minor cosmetic successive variants of the same product where only the numbers after the names substantially change. These variants survive because psychologists buy the tests and then loyally defend them (see preceding nine commentaries, this issue).”

Sternberg, R. J., & Williams, W. M. (1997). Does the Graduate Record Examination predict meaningful success in the graduate training of psychologists? A case study. American Psychologist, 52,

The solution to this problem is not innovation for innovation’s sake. That tends to happen when we chase ever-greater measurement accuracy and lose sight of what we are trying to achieve (such as predicting outcomes). As an example, the belief that IRT-based tests will provide greater validity does not appear to be supported by recent studies http://www.nmd.umu.se/digitalAssets/59/59524_em-no-42.pdf and
http://heraldjournals.org/hjegs/pdf/2013/august/adedoyin%20and%20adedoyin.pdf.

Moreover, when we contrast increased measurement sophistication with moves toward the likes of single-item scales, the results are surprisingly equivalent (cf. Samuel, D. B., Mullins-Sweatt, S. N., & Widiger, T. A. (2013). An investigation of the factor structure and convergent and discriminant validity of the Five-Factor Model Rating Form. Assessment, 20(1), 24-35).

There is simply a limit to how much an assessment can capture of the complexity of human behaviour, which is itself subject to free will. It is no more complex than that. Rather than highlighting the magical uniqueness of their tests, psychometric test providers need to be upfront about the limitations of their assessments. No one has access to a crystal ball, and claims that one exists are fundamentally wrong.

The future for testing companies lies in acknowledging the limitations of their tests and recognising that they are simply part of an HR ecosystem. It is within that system that innovation can reside. The focus then moves away from pretending that a given test is significantly better than the others, and instead onto how the test will add value through such things as:

  • Integration with an applicant tracking system to aid screening
  • Integration with learning and development modules to aid learning
  • Integration with on-boarding systems to ensure a quick transition into work.

There is a range of solid, respectable tests available, and their similarities are far greater than their differences. Tests should meet minimum standards, but once those standards are met, the myth of impartiality is only addressed by accepting that there is a collection of quality tools of equivalent predictive power, and that the ecosystem, not the assessment, should be the focal point.

I realise I’m still a myth behind in the series, and will follow up with a short piece that provides more support for the use of psychometrics in industry, addressing the myth that psychometric tests have little value for employment selection.

Effective Talent Management

There is no doubt that more and more organisations are implementing talent management strategies and frameworks. However, whilst talent management is fast becoming a strategic priority for many organisations, Collings and Mellahi (2009) suggest that the topic lacks a consistent definition and remains largely undefined. Literature reviews reveal that one reason for this is that the empirical question of “what is talent?” has been left unanswered.

The term talent has undergone considerable change over the years. It was originally used in the ancient world to denote a unit of money, before adopting a meaning of inclination or desire in the 13th century, and natural ability or aptitude in the 14th century (Tansley 2011, as cited in Meyers, van Woerkom, & Dries, 2013). Today’s dictionary definition of talent is “someone who has a natural ability to be good at something, especially without being taught” (Cambridge Dictionaries Online, 2014). This definition implies that talent is innate rather than acquired, which holds important implications for the application of talent management in practice. For example, it influences whether we should focus more on the identification and selection of talent or on its development.

Talent management is defined as “an integrated, dynamic process, which enables organisations to define, acquire, develop, and retain the talent that it needs to meet its strategic objectives” (Bersin, 2008).

Integrated talent management implies a more holistic approach, starting with the identification of the key positions and capabilities that contribute to an organisation’s sustainable competitive advantage (Collings & Mellahi, 2009). Equipped with this information, we are better able to gather talent intelligence to help determine capability gaps, identify individual potential, and highlight areas for development. Talent intelligence and performance tools capable of gathering this type of information include well-validated psychometric assessments, 360° surveys, engagement surveys, and post-appointment and exit interviews. Strategic and integrated talent management is not only essential in the current market, but provides an opportunity to be proactive rather than reactive in addressing your critical talent needs.

We suggest that key components of an effective talent management process would include:

  1. A clear understanding of the organisation’s current and future strategies.
  2. Knowledge of key positions and the associated knowledge, skills, and abilities required (job analysis and test validation projects can assist here).
  3. Objective metrics that identify gaps between the current and required talent to drive business success.
  4. A plan designed to close these gaps with targeted actions such as talent acquisition and talent development.
  5. Integration with HR systems and processes across the employee lifecycle.

What is clear is that talent management is becoming more and more important as organisations fight for top talent in a tight job market. Key to success will be identifying what ‘talent’ looks like for your organisation and ensuring it is fostered through the entire employment lifecycle.

Meyers, M. C., van Woerkom, M., & Dries, N. (2013). Talent—Innate or acquired? Theoretical considerations and their implications for talent management. Human Resource Management Review, 23(4), 305-321.

Collings, D. G., & Mellahi, K. (2009). Strategic talent management: A review and research agenda. Human Resource Management Review, 19(4), 304-313.

Bersin Associates. (2008). Talent Management Factbook.