Author Archives: Dr Paul Englert


Usefulness Trumps Validity

Validity is perhaps one of the most misunderstood concepts in HR analytics, and in psychometrics in particular. This is a topic I have previously written about on this blog, but the message has yet to fully resonate with the HR community. The most common question OPRA gets asked in relation to any solution we sell, be it an assessment, survey or intervention, continues to be "What is the validity?"

On the face of it, this is a perfectly reasonable question. However, when probed further, it becomes clear that there remains a gap in understanding what validity translates to in terms of business outcomes. The answer to the question is invariably a validity coefficient, rolled off the tongue, that then satisfies some checkbox prescribed for decision making.
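One way to make that translation concrete is a utility model such as Brogden-Cronbach-Gleser, which converts a validity coefficient into an expected dollar return. The sketch below is illustrative only, with entirely hypothetical figures for headcount, tenure and costs, not an OPRA calculation.

    # Illustrative sketch: translating a validity coefficient into an expected
    # dollar return using the Brogden-Cronbach-Gleser utility model.
    # All figures are hypothetical and chosen purely for illustration.

    def selection_utility(n_hired, tenure_years, validity, sd_performance,
                          mean_z_of_hired, n_applicants, cost_per_applicant):
        """Expected gain from using a selection tool versus selecting at random."""
        gain = n_hired * tenure_years * validity * sd_performance * mean_z_of_hired
        cost = n_applicants * cost_per_applicant
        return gain - cost

    # Example: hire 20 people from 200 applicants using a test with validity r = 0.30.
    print(selection_utility(n_hired=20, tenure_years=2, validity=0.30,
                            sd_performance=15000, mean_z_of_hired=1.0,
                            n_applicants=200, cost_per_applicant=50))

Even a modest coefficient can imply a substantial return once headcount and tenure are factored in, which is precisely the business-outcome framing that the coefficient alone does not convey.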

Data is an ingredient, not the meal: 5 key things to think about when turning data into information

Unless you have been shut off from the outside world recently, you are probably aware that big data is one of the current flavours of the month in business. As an I/O psychologist I'm particularly interested in how this concept of big data is shaping thinking about people problems in companies. Indeed, a common request made to OPRA, whether in Australia, New Zealand or Singapore, is for help with supposedly big data projects. The irony is that many of these requests are neither primarily about data nor do they involve big data sets. Rather, the proliferation of talk about big data has made companies realise that they need to start incorporating data into their people decisions.

Big data itself is nothing new. OPRA was involved in what could be described, in a New Zealand context, as a big data project in the 1990s: an attempt to predict future unemployment from, among other variables, psychological data, to help formulate policy on government assistance. What is new is the technology that has made this type of study far more accessible, the requirement for evidence-based HR decisions, and the natural evolution of people analytics into a core part of HR.

In Defence of the Scientific Method

I recently listened to a podcast interview with Dr Adam Gazzaley, a neuroscientist and Director of the Gazzaley Lab at UC San Francisco. While the work of Dr Gazzaley is both interesting and practical, the real takeaway for me from the podcast was that it reconfirmed my commitment to the scientific method. This is not to be mistaken for a belief in science as an institution, with which I have become more and more disillusioned in recent years. Rather, it is to avoid any notion of chucking the baby out with the bathwater, and to make clear the distinction between the flawed practice of science and the body of techniques that comprise the scientific method.

The scientific method dates back to the 17th century and involves systematic observation, measurement and experimentation, and the formulation, testing and modification of hypotheses (cf. https://en.wikipedia.org/wiki/Scientific_method). Without going into the history of its development, the application of these principles has since been the basis of much societal development. The refinement of this thinking by the likes of Karl Popper, together with a multi-disciplinary approach and the appropriate use of logic and mathematics, is central to our search for truth (using the term loosely).

Is Competition good for Science?

I have long been a strong supporter of capitalism. I believe in free trade, unbridled competition, and the consumer's right to make choices in their self-interest. I have often seen laissez-faire capitalism, and the competition it breeds, as key to well-functioning economies, and competition, without exception, as essential to good long-term solutions.

As noted, I have held this view for a long time, and without exception, but recently I have been deeply challenged as to whether this model applies to all pursuits. In particular, I am questioning whether competition is truly good for science. This is not a statement I make lightly; it comes after much reflection on the discipline and on the nature of the industry I work in, both as a lecturer and as a practitioner of I/O psychology.

There is a growing uprising against what many perceive as the managerial takeover of universities. The open-access article 'The Academic Manifesto' speaks to this view, and its opening paragraph captures the essence of the argument:

“… The Wolf has colonised academia with a mercenary army of professional administrators, armed with spreadsheets, output indicators and audit procedures, loudly accompanied by the Efficiency and Excellence March. Management has proclaimed academics the enemy within: academics cannot be trusted, and so have to be tested and monitored, under the permanent threat of reorganisation, termination and dismissal…”

While I can certainly see that there are efficiencies to be made in universities, and that the need for accountability is high, I can't help but agree with the writers that the current KPIs don't make the grade (no pun intended). The 'publish or perish' phenomenon works counter to producing quality research developed over the long term.

Competition also crowds out valuable, but not newsworthy, research. This topic has been discussed previously on this blog (the-problem-with-academia-as-a-medium-of-change-or-critique), but the replication that is at the heart of our science remains sorely lacking (Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 621).

We have created new terms, such as HARKing (hypothesising after the results are known), to describe how we have moved away from hypothesis testing, which is central to science, towards defining hypotheses only after the results are in (Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (in press). HARKing's threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology).

Likewise, the growth in universities, and the competition between them, without a corresponding growth in jobs is being questioned in many countries. When a degree simply becomes a means to an end, does it produce the well-rounded, educated population required for a fully functioning, progressive society?

At a practitioner level, the folly of competition is perhaps most apparent in psychometric testing, an industry I am acutely familiar with. Test publishers go to great lengths to differentiate themselves so as to carve out a niche in the competitive landscape (are-tests-really-that-different). This is despite the fact that construct validity, the centrepiece of modern validity theory, in essence requires cross-validation. The result is a myriad of test providers spouting 'mine is bigger than yours' rhetoric to the detriment of science. Too often, users are more concerned about the colours used in reports than about the science and validity of the test.

Contrast this with a non-competitive approach to science. The examples are numerous, but given the interest in psychology, take the Human Brain Project. Here we have scientists collaborating around a common goal towards a target date of 2023: 112 partners in 24 countries, driven not by competition but by the objective itself of truly expanding our knowledge of the human brain.

There is a US equivalent, the BRAIN Initiative, and further collaboration is under way to combine the efforts of the two undertakings. With the advances in physics that have given rise to brain-scanning technology, we now understand more than ever about the processes of the mind. This simply would not be possible under a purely competitive model of science.

My experience as a practitioner selling assessment and consulting solutions, as a lecturer who has taught across multiple universities, and as a general science buff has led me to see the downside of competition for science. Competition still has a place in my heart, but perhaps, like chardonnay and steak, its value is not always realised in combination.

Tips to spot a myth

Well, there it is: another year down and another year to look forward to. This brings to an end the series on some of the myths of our industry, and I wanted to finish by summarising some guidelines on how to read i/o research, and the conclusions drawn from it, more critically.

Our discipline is not all mythology, as shown in some of my recent posts on the effectiveness of training and the value of personality testing. On the contrary, there is a growing body of findings that shows what works, what doesn't and why. However, claims move from fact to fiction when commercialisation and academic reputation take over.

With this in mind, those attempting to apply research need a simple way to test the soundness of what they are reading. Here are my top 7 tips for spotting myths:

  1. Who has done the research? There are many vested interests in psychology, ranging from commercial firms touting the next big thing to academics defending a position they have built for themselves. When you understand a person's starting position, you will read what they write with open eyes. When evaluating any claim, ask yourself: 'What is their angle, and do they have anything to gain from such a claim? Are they presenting a balanced argument and reporting commercial findings in a fair manner?'
  2. Are the claims too good to be true? Dealing with human behaviour is a messy business. Single variables, on a good day with the wind blowing in the right direction, account for roughly 10% of the variability in a given outcome (e.g. a correlation of r = 0.3 between a personality trait and job performance; see the sketch after this list for the arithmetic). Unfortunately, the public is unaware of this and has expectations around prediction that are simply unrealistic. These expectations are then played on by marketing companies making claims such as '90% accuracy'. Such claims are outrageous and a sure sign that you are once again in the clutches of a myth.
  3. When looking at applied studies, does the research design account for moderator variables? Psychological research often fails to be useful because it fails to account for moderators. Too often we get simple correlations between variables without recognising that the finding erodes unless certain conditions are met, or when another variable enters the scene.
  4. Is the research discussed as part of a system? Building on the previous point, research that does not discuss its findings as part of a wider ecosystem is invariably limited. As scientist-practitioners, our work does not exist in a vacuum. It is part of a complex set of ever-changing, intertwining variables that combine to produce an outcome. Selection leads to onboarding, which leads to training, which leads to performance management, and so on. Research needs to identify this system and report findings accordingly.
  5. Are the results supported by logic as well as numbers? Nothing can blind the reader of i/o science like numbers. As the sophistication of mathematical justification in our discipline has grown, the usefulness of many studies has dropped. Psychology is as much a philosophy as a science, and logic is as important as numbers in demonstrating an evidence base. Look for studies that follow the laws of logic, where hypotheses are not only supported but alternative explanations are dismissed. Look for studies that are parsimonious in their explanation, but not so simplistic that they fail to account for the underlying complexity of human behaviour.
  6. Are the results practically meaningful? Don't be confused by statistical significance. This simply means we have a certain level of confidence that a finding was not due to chance, and that if the study were repeated we would be likely to get a similar result. It tells us nothing of the practical significance of the finding (i.e. how useful is it, and how do I use it?). Too often I see tiny but statistically significant findings touted as a 'breakthrough'. The reality is that the effect is so small it is meaningless except perhaps when applied to huge samples.
  7. Be critical first, acquiesce second! If I have one piece of advice, it is to be critical first and accept nothing until convinced. Don't accept anything because of the speaker, the company or the numbers. Instead, make anyone and everyone convince you. How is this done? Ask why. Ask what. Ask how. If you do nothing besides taking this stance as part of a critical review, it will make you a far more effective user of research and a far better i/o psychologist or HR professional.
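To put some arithmetic behind tips 2 and 6, here is a minimal sketch in Python using simulated data (not real findings): a correlation of r = 0.3 explains only about 10% of the variance, and a trivially small effect becomes 'statistically significant' once the sample is large enough.

    # Illustrative sketch for tips 2 and 6 (simulated data, not real findings).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Tip 2: a correlation of r = 0.3 explains roughly 10% of the variance.
    r = 0.3
    print(f"Variance explained by r = {r}: {r**2:.0%}")

    # Tip 6: a tiny effect (true r of about 0.03) is practically meaningless,
    # yet with a huge sample it still comes out 'statistically significant'.
    n = 100_000
    trait = rng.normal(size=n)
    outcome = 0.03 * trait + rng.normal(size=n)   # almost entirely noise
    r_obs, p = stats.pearsonr(trait, outcome)
    print(f"n = {n}: r = {r_obs:.3f}, p = {p:.2g}  (significant, but trivial)")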

To all those who have read and enjoyed this blog over the year, we at OPRA thank you. As a company, we are passionate about i/o, warts and all, and it is a great privilege to contribute to the dialogue that challenges our discipline to be all that it can be. Have a great 2015, and we look forward to catching up with you offline and online over the year.

The Myth that Training is an Art not a Science

For many, training is seen as an art, and a black art at that, rather than a science. The idea that there is actually a science to training, and a methodology to follow to ensure its effectiveness, is anathema to those who view their own training ability as some special gift that they alone possess. Much like the claims in the psychometric industry that a single test is the holy grail of testing, these outrageous training claims are myths that simply distract from the truth. On the contrary, training is now a well-researched area, and there is indeed a science to making training work.

Building on their seminal work on training for team effectiveness, Salas and his team have produced an excellent paper outlining the science of training (Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74-101).

The paper is a free download and is a must-have for all practitioners. First, it covers the various meta-analyses that have been conducted on training and notes that training has been found to be effective in everything from managerial training and leadership development through to behavioural modelling training.

Moreover, the paper provides clear guidelines on how to enhance training effectiveness. Building on the research, the guidelines for practitioners include:

  1. Pre-training recommendations
    1. Training needs analysis
      1. Analysis of the job
      2. Analysis of the organisation
      3. Analysis of the person
    2. Communication strategy
      1. Notify attendees
      2. Notify supervisors
  2. During-training interventions
    1. Creating the learner mind-set
    2. Following appropriate instructional principles
    3. Using technology wisely
  3. Post-training recommendations
    1. Ensure training transfer
    2. Apply an evaluation methodology

The paper in many ways is what our discipline is all about: a strong research base, drawing together research from multiple sources, with useful guidance provided for the practitioner. This is applied psychology, and this is the scientist-practitioner model in practice.

As noted by Paul Thayer in his editorial to the paper:

“… There is a system and a science to guide organizations of all types in developing and/or adopting training to help achieve organizational goals. Salas et al. do an excellent job of summarizing what is known and providing concrete steps to ensure that valuable dollars will be spent on training that will improve performance and aid in the achievement of those goals. In addition, they provide a rich bibliography that will assist anyone needing more information as to how to implement any or all the steps to provide effective training. Further, they raise important questions that organizational leaders and policymakers should ask before investing in any training program or technology”.

There are many myths that pervade business psychology. Unfortunately, these often result in the baby being thrown out with the bathwater and people dismissing the discipline as a whole. The key for any discerning HR professional or i/o psychologist is to be able to tell myth from reality and to have a simple framework, or set of checkpoints, for reading research critically. More on this tomorrow in the last blog of the year.

The myth that training to improve team functioning doesn’t work

Yesterday we noted that there was little support for the Belbin team model. The idea that there is a prescribed model for a team is simply not supported, and the Belbin model has not been shown to improve organisational effectiveness. With this in mind, does training to improve team functioning actually make a difference?

I'm pleased to note that training to improve team performance is an area that is well researched, and the research is generally positive. Not only do interventions appear to improve team effectiveness, we also have an idea, through research, of what moderates the success of team interventions.

In terms of the research around team training, the seminal work in the area was a meta-analysis conducted in 2008. For those not from a research background, a meta-analysis can be thought of as an analysis of analyses. The researchers bring together various studies and re-analyse the data, gaining greater confidence in the results by establishing a larger effective sample size. While the technique has its critics and can lead to statistical over-estimates, it is one of the better methods we have for establishing an evidence base for generalisable trends in applied research.
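As a rough illustration of the mechanics (not the 2008 study itself), the sketch below pools hypothetical correlations from four invented studies using a simple sample-size-weighted average, the basic idea behind a 'bare-bones' meta-analysis.

    # Minimal sketch of a sample-size-weighted (bare-bones) meta-analysis.
    # The studies and correlations below are invented, for illustration only.
    studies = [
        {"name": "Study A", "n": 60,  "r": 0.35},
        {"name": "Study B", "n": 200, "r": 0.22},
        {"name": "Study C", "n": 45,  "r": 0.41},
        {"name": "Study D", "n": 120, "r": 0.18},
    ]

    total_n = sum(s["n"] for s in studies)
    r_bar = sum(s["n"] * s["r"] for s in studies) / total_n  # pooled estimate

    # How much the observed correlations vary around the pooled estimate.
    var_r = sum(s["n"] * (s["r"] - r_bar) ** 2 for s in studies) / total_n

    print(f"Pooled r = {r_bar:.2f} across N = {total_n} (variance of r = {var_r:.4f})")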

The team training effectiveness meta-analysis was extremely thorough in examining both outcomes and moderators. A range of outcomes were assessed, including:

  1. Cognitive outcomes predominantly consisted of declarative knowledge gains.
  2. Team member affective outcomes included socialisation, trust and confidence in team members’ ability and attitudes concerning the perceived effectiveness of team communication and coordination processes.
  3. Team processes  included behavioural measures of communication, coordination, strategy development, self-correction, assertiveness, decision making and situation assessment.
  4. Team performance integrated quantity, quality, accuracy, efficiency and effectiveness outcomes.

Moderator variables included:

  1. Training content (taskwork, teamwork, mixed)
  2. Team stability (intact, ad hoc)
  3. Team size (large, medium, small)

While a blog post is not sufficient to explore the research in depth, suffice it to say that moderate to strong positive effects were found for all four outcomes. Team processes appear to be the most malleable: training teams to communicate better, avoid groupthink, make effective decisions and think strategically is likely to be an investment that delivers returns for organisations. Training to improve affective outcomes, such as trust and confidence in team members, appears less effective, especially in large teams.

Aside from team size, the results were moderated by team stability, with well-established teams responding better to training than ad hoc teams. Training content had limited effect on outcomes, with both taskwork- and teamwork-oriented interventions producing positive results.

The results of this meta-analysis are encouraging for i/o psychology. Team effectiveness is an area where there is a strong research basis for intervention and where intervention is likely to have a positive impact. This is an area where the scientist-practitioner model that is central to our discipline appears to be alive and well. We have interventions that are well researched, and we have some understanding of their effectiveness once other variables are taken into account. Does this add up to a science of training? Are there principles we can take from the literature and apply to make training effective? Or is training an art and not a science? That is the question for tomorrow.

The myth of team models (Belbin)

In yesterday's blog we discussed the power of two and the myth of the single star innovator. The natural follow-on from this discussion is: 'If two is better than one, surely a team is better than two.' Unfortunately, the literature is far less supportive of this idea.

The most pervasive model of teamwork, especially in the UK, is the Belbin team model. For those not aware, it defines nine supposed team roles, distinguished in part by orientation towards the people side of a task or the thing/doing side of a task. The idea is that teams operate better when these various roles are filled.

The assumptions behind the Belbin team roles don't live up to the hype. First, the psychometric properties of the model have been found wanting (Furnham, A., Steele, H., & Pendleton, D. (1993). A psychometric assessment of the Belbin Team-Role Self-Perception Inventory. Journal of Occupational and Organizational Psychology, 66, 245-257; Fisher, S. G., Macrosson, W. D. K., & Sharp, G. (1996). Further evidence concerning the Belbin Team Role Self-Perception Inventory. Personnel Review, 25, 61-67). Research indicates that the model lacks the proposed factor structure and offers little beyond what a standard personality tool would tell us about how people would like to work.

In essence, we get the same preferences by simply looking at personality, with the added advantage of a replicable psychological model. While the Belbin model may be useful as a descriptive framework, that is different from what one usually wants when approaching such things psychometrically.

Perhaps more importantly, the relationship between the model and actual job performance is weak, to say the least (Wouter van Walbeek, R. B., & Maur, W. (2013). Belbin role diversity and team performance: Is there a relationship? Journal of Management Development, 32, 901-913). There is no evidence that this supposed role diversity aids team performance. Even leaders defined under the model failed to demonstrate improved performance.

So what is the optimal size for a team, and what are the roles that need to be filled? The most accurate answer is 'it depends' (the details are covered well in Wikipedia's entry on 'team').

Like much of i/o psychology, there are no simple answers, and the only people who ever prescribe simple answers are those with something to sell. Solving real-world problems, such as the optimal team size for a given organisation, requires an analysis of the tasks, the time frames for completion, the competing demands on individuals, and the competence, willingness and trainability of the team, to name but a few variables. Ours is an applied discipline, and what is required is the application of knowledge within a given system to find individual solutions that work. Unsurprisingly, this applies to our work around teams.

I want to make the point that a team is distinct from a 'group', and this simple point is often overlooked by practitioners. More often than not, when I'm asked to run a 'team workshop' it is to help a group of employees who know their jobs well but need to learn how to get along. To describe them as a 'team' is to miss the forest for the trees. These groups tend to comprise people with individual differences who need techniques and models to understand each other better, get along, and harness each other's strengths and weaknesses. Ironically, this is what many team interventions consist of. Do these interventions work? That is the topic for tomorrow.

Myths about Teams and Stars – The Myth of the Single Star

I'm a couple of blogs behind for the year. While this is indicative of a busy and successful year at OPRA, it is no excuse for not completing the 12-part series on myths for 2014. So, with a week's holiday and five myths to go, what better time to finish this year's topic for the OPRA blog? In good scientific fashion, it also provides a royal opportunity to test whether a series of blogs over a week is more effective than one a month.

A topic with many permutations in respect of myths is that of teams and stars. People love the idea of teams, but the literature and research in this space is less complimentary. In this series of posts I want to look at the workings of teams from both a practice and a literature perspective, and try to separate myth from reality.

To begin, I want to look at the anti-hero of the team: the notion of the star or sole genius. This is pervasive in modern business culture with the likes of Branson, Jobs and Trump, people perceived as the sole originators of the creativity that defined the businesses they are associated with. This is not to say that these people necessarily endorsed the idea that they themselves were the be-all and end-all. Rather, the common myth perpetuated by society is that the company's success can mainly be attributed to a single individual.

The idea that success can be attributed to one person is not borne out in either popular or academic research. A recent book highlighted this issue by describing what its author terms the 'Power of Two'.

The book examines the creative process, noting why two is the magic number for creativity to realise returns. In doing so, it covers the role of serendipity in much success (a point so often glossed over in the literature): the pair have to meet! They need to have differences that combine to form a single powerful entity. They must work as a pair but enjoy enough distance and role separation to cultivate distinct ideas. In short, it is not the individual who creates success but the individual and their sidekick who achieve optimal results.

Evidence for the power of two is also borne out in the academic literature. Business actions are invariably preceded by a decision to act, and when it comes to decision making the power of two is again apparent.

In a 2012 article published in Science (Koriat, A. (2012). When two heads are better than one. Science, 336(6079), 360-362), evidence was found that two-person decision making is superior to individual decision making. While I will not go into the study in depth, the key is the ability of each individual in the dyad to communicate their confidence in their judgements freely (i.e. a truly level playing field). Where the dyad falls down is when one person's confidence overpowers the pair.

This study builds on earlier work which likewise finds that the benefit of the pair comes from the ability to express confidence in decisions freely. The key takeaway is that to enhance the power of two in decision making, the pair should have similar levels of competence and the ability to express confidence freely. Once again this shows the multi-faceted nature of psychological research, as this will invariably involve having people of similar levels of self-esteem, emotional intelligence and so on for the effect to be optimised.
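As a toy illustration of that logic (a simulation of my own, not the method of the Koriat study), the sketch below compares individuals answering alone against a dyad that defers to whichever member reports higher confidence, assuming confidence loosely tracks accuracy and both members are equally competent.

    # Toy simulation (illustrative only): a dyad that adopts the more confident
    # member's answer versus individuals answering alone.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 20_000
    p_correct = 0.65                       # both members equally competent

    def answer(n):
        correct = rng.random(n) < p_correct
        # Confidence is assumed to be higher, on average, when the answer is correct.
        confidence = rng.normal(loc=np.where(correct, 0.7, 0.5), scale=0.15)
        return correct, confidence

    a_correct, a_conf = answer(n_trials)
    b_correct, b_conf = answer(n_trials)

    # Dyad rule: go with whichever member expresses more confidence.
    dyad_correct = np.where(a_conf >= b_conf, a_correct, b_correct)

    print(f"Individual accuracy:            {a_correct.mean():.3f}")
    print(f"Dyad (max-confidence) accuracy: {dyad_correct.mean():.3f}")

If one member's confidence is inflated regardless of accuracy, the dyad simply inherits that member's error rate, which is the 'overpowering' failure mode described above.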

So if one is not the answer and two is clearly better, what happens when team size increases? More on this tomorrow.

The myth that criterion-related validity is a simple correlation between test score and work outcome

This is a myth that can be dealt with relatively simply: criterion validity is far more than the simple correlations found in technical manuals. Validity in this sense is more appropriately described as whether an assessment can deliver a proposed outcome in a given setting with a given group. Criterion validity thus asks: 'Does this test predict some real-world outcome in a real-world setting?'

Assessments can add value, as discussed last month, but we need to think more deeply about criterion-related validity if this value is to be demonstrated more effectively. Criterion validity is too often established by correlating a scale on a test (e.g. extraversion) with an outcome (e.g. training performance). The problem is that neither the scale score nor the outcome exists in a vacuum; both are sub-parts of greater systems (i.e. both sit among multiple variables). In the case of the test, the scale score does not stand alone. Rather, it is one scale among many used to better understand a person's psychological space (e.g. one of the Big Five scales). Any work outcome is the sum total of a system working together and is likely to be affected by variables such as the team a person is working in, the environmental context (both micro and macro), what they are reinforced for, and so on. In a normal research design these aspects are controlled for, but in the criterion validity correlations reported by test publishers this is unlikely to be the case.
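To illustrate with simulated data of my own (not publisher figures), the sketch below shows how a pooled test-outcome correlation can understate what is really going on when an uncontrolled contextual variable, here a hypothetical 'supportive team' flag, moderates the relationship.

    # Illustrative simulation: a trait predicts performance only in supportive
    # teams, so the pooled "validity coefficient" hides the real picture.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 2000
    trait = rng.normal(size=n)
    supportive_team = rng.random(n) < 0.5            # hypothetical moderator

    # Performance depends on the trait only when the team context supports it.
    performance = np.where(supportive_team, 0.5 * trait, 0.0) + rng.normal(size=n)

    r_all, _ = stats.pearsonr(trait, performance)
    r_support, _ = stats.pearsonr(trait[supportive_team], performance[supportive_team])
    r_other, _ = stats.pearsonr(trait[~supportive_team], performance[~supportive_team])

    print(f"Pooled r = {r_all:.2f}")
    print(f"r in supportive teams = {r_support:.2f}; elsewhere = {r_other:.2f}")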

When it comes to criterion validity, we are very much in the dark as to how psychological variables affect work outcomes in the real world, despite claims to the contrary. As an example, consider conscientiousness. Test publisher research tells us that the higher a person's conscientiousness, the better they are likely to perform on the job. Common sense, however, suggests that people who are excessively conscientious may not perform well, because their need for perfection detracts from timely delivery. Not surprisingly, recent research does not support a simple linear relationship: for many traits, too much of the trait is detrimental (Le, H., Oh, I.-S., Robbins, S. B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good thing: Curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 96(1), 113-133).
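A minimal sketch of that curvilinearity (simulated data, not the Le et al. results): performance rises with conscientiousness up to a point and then falls, so a linear correlation understates a relationship that a quadratic term captures.

    # Illustrative simulation of an inverted-U (curvilinear) trait-performance link.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 1000
    conscientiousness = rng.normal(size=n)

    # Performance peaks at a moderate trait level, then declines (inverted U).
    performance = -0.4 * conscientiousness**2 + 0.2 * conscientiousness + rng.normal(size=n)

    r_linear, _ = stats.pearsonr(conscientiousness, performance)
    lin = np.polyfit(conscientiousness, performance, 1)    # straight-line fit
    quad = np.polyfit(conscientiousness, performance, 2)   # quadratic fit
    r2 = lambda pred: 1 - np.var(performance - pred) / np.var(performance)

    print(f"Linear r = {r_linear:.2f}")
    print(f"Linear R^2 = {r2(np.polyval(lin, conscientiousness)):.2f}, "
          f"quadratic R^2 = {r2(np.polyval(quad, conscientiousness)):.2f}")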

This is supported by work I was involved in with Dr Paul Wood, which showed that intelligence and conscientiousness may be negatively correlated in certain circumstances, indicating that there are multiple ways of completing a task to the level of proficiency required (Intelligence compensation theory: A critical examination of the negative relationship between conscientiousness and fluid and crystallised intelligence. The Australian and New Zealand Journal of Organisational Psychology, 2, August 2009, 19-29). The problem both studies highlight is that we are looking at criterion validity in too reductionist a manner. Simple one-to-one correlations do not represent validity as the practitioner thinks of the term ('is this going to help me select better?'). That question cannot be answered by a coefficient alone, because it requires thinking about the interaction between psychological variables and the unique context in which the test will be applied.

To understand how this view of validity became the accepted norm, one must look at the various players in the field. As is often the case, a reductionist view of validity stems from associations such as the BPS, which have simplified the concept to suit their requirements. This forces test publishers to comply and to clamber over each other to produce tables of validity data, and practitioners then come to understand validity within this paradigm. To add insult to injury, the criterion of quality becomes having as many of these seemingly meaningless validity studies as possible, further entrenching this definition of validity. The fact that a closer look at these studies shows validity coefficients going off in all sorts of directions is seemingly lost, or deemed irrelevant!

The solution to this nonsense is to change the way we think about criterion validity. We need a more holistic, thorough and systems-based approach to answer the real questions practitioners have. This would incorporate both qualitative and quantitative methods, and is perhaps best captured in the practice of evaluation, which takes this approach seriously: http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research.

Finally, the criteria used by the likes of the BPS to evaluate tests need to change. Without this change, test publishers cannot adopt alternative practices, as their tests will not be deemed "up to standard". So, alas, I think we may be stuck with this myth for a while longer yet.