Category Archives: I/O Psychology

NZ Health and Safety Legislation: Fostering Wellbeing and Resilience

With the recent changes to Health and Safety (H&S) legislation, New Zealand employers need to take all practicable steps to ensure that both the mental and physical wellbeing of their employees is protected in the workplace. This includes adopting proactive and developmental approaches that foster wellbeing and resilience, which is not only an obligation under the law but also instrumental in facilitating higher levels of performance. Continue reading

Data is an ingredient, not the meal: 5 key things to think about to begin turning data into information

Unless you have been shut off from the outside world in recent times, you are probably aware that big data is one of the current flavours of the month in business. As an I/O psychologist I’m particularly interested in how this concept of big data is shaping thinking about people problems in companies. Indeed, a common request made to OPRA, whether in Australia, New Zealand, or Singapore, is for help with supposedly big data projects. The irony is that many of these requests are neither primarily about data nor involve big data sets. Rather, the proliferation of talk about big data has made companies realise that they need to start incorporating data into their people decisions.

Big data itself is nothing new. OPRA was involved in what could be described, in a New Zealand context, as a big data project in the 1990s: attempting to predict future unemployment from, among other variables, psychological data to help formulate policy on government assistance. What is new is the technology that has made this type of study far more accessible, the requirement for evidence-based HR decisions, and the natural evolution of people analytics into a core part of HR. Continue reading

Is Competition good for Science?

I have long been a strong supporter of capitalism. I believe in free trade, unbridled competition, and the consumer’s right to make choices in their self-interest. I have often seen laissez-faire capitalism, and the competition that it breeds, as key to well-functioning economies, and competition as essential to good long-term solutions, without exception.

As noted, I have held this view for a long time, and without exception, but recently I have been deeply challenged as to whether this model is applicable to all pursuits. In particular, I am questioning whether competition is truly good for science. This is not a statement I make lightly; it comes after much reflection on the discipline and the nature of the industry I work in, both as a lecturer and a practitioner of I/O psychology.

There is a growing uprising against what many perceive as the management takeover of universities. The open-access article ‘The Academic Manifesto’ speaks to this view, and its opening paragraph captures the essence of the piece:

“… The Wolf has colonised academia with a mercenary army of professional administrators, armed with spreadsheets, output indicators and audit procedures, loudly accompanied by the Efficiency and Excellence March. Management has proclaimed academics the enemy within: academics cannot be trusted, and so have to be tested and monitored, under the permanent threat of reorganisation, termination and dismissal…”

While I can certainly see efficiencies that could be made in universities, and accept that the need for accountability is high, I can’t help but agree with the writers that the current KPIs don’t make the grade (no pun intended). The ‘publish or perish’ phenomenon works counter to producing quality research that is developed over the long term.

Competition also crowds out research that is valuable but not newsworthy. This topic has been discussed previously in this blog (the-problem-with-academia-as-a-medium-of-change-or-critique), but the key point is that replication, which is at the heart of our science, is sorely lacking (Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 621).

We have even created new terms such as HARKing to describe how we have moved away from hypothesis testing, which is central to science, and towards defining hypotheses only after the results are in (Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (in press). HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology).

Likewise, the continued growth of universities, and the competition between them, without a matching growth in jobs is being questioned in many countries. When a degree simply becomes a means to an end, does it provide the well-rounded, educated population that is required for a fully functioning, progressive society?

At a practitioner level, the folly of competition is perhaps most apparent in the likes of psychometric testing, an industry I’m acutely familiar with. Test publishers go to great lengths to differentiate themselves so as to carve a niche in the competitive landscape (are-tests-really-that-different). This is despite the fact that construct validity, the centrepiece of modern validity theory, in essence requires cross-validation. The result is a myriad of test providers spouting ‘mine is bigger than yours’ rhetoric to the detriment of science. Too often users are more concerned about the colours used in reports than about the science and validity of the test.

Contrast this with a non-competitive approach to science. The examples are numerous, but given the interest in psychology, take the Human Brain Project as an example. Here we have scientists collaborating around a common goal with a target date of 2023: 112 partners in 24 countries, and the driver is not competition but the objective itself of truly expanding our knowledge of the human brain.

The US has an equivalent, the BRAIN Initiative, and there is further collaboration to combine the efforts of these two undertakings. With the advances in physics that have given rise to brain-scanning technology, we now understand more than ever about the processes of the mind. This simply would not be possible under the competitive model applied to science.

My experience as a practitioner selling assessment and consulting solutions, as a lecturer who has taught across multiple universities, and as a general science buff has led me to see the downside of competition for science. Competition still has a place in my heart, but perhaps, like chardonnay and steak, its value may not always be realised when the two are combined.

Learning agility: where wisdom meets courageous problem solving

The Iliad is the earliest piece of Western literature and illustrates the generally distinct characteristics of wisdom versus courageous problem solving under risk. King Nestor the wise might miss opportunities for gain due to his caution, but is renowned for eventually making great decisions based on his judgement, knowledge, and experience. Odysseus, by contrast, has a great ability to solve problems courageously in circumstances of extreme risk, but more often than not gets himself into such situations through his own lack of wisdom!

The title of this blog suggests that learning agility bridges the gap between Nestor’s wisdom and Odysseus’s courageous problem solving. So what exactly do we mean by “learning agility”? While learning in general can be broadly defined by one’s ability and willingness to learn, learning agility concerns the speed with which people learn and the flexibility with which they apply that learning. A hallmark of the agile learner is the ability to learn from previous experience and apply that learning in current situations, often in creative or unique ways. Sounds wise, right? Continue reading

The myth that criterion-related validity is a simple correlation between test score and work outcome

This is a myth that can be dispelled with relative simplicity: criterion validity is far more than the simple correlations found in technical manuals. Validity in this sense is more appropriately described as whether an assessment can deliver a proposed outcome in a given setting with a given group. Criterion validity thus asks: does this test predict some real-world outcome in a real-world setting?

Assessments can add value, as discussed last month, but we need to think more deeply about criterion-related validity if this value is going to be demonstrated more effectively. Criterion validity is too often determined by correlating a scale on a test (e.g. extroversion) with an outcome (e.g. training performance). The problem is that neither the scale score nor the outcome exists in a vacuum. Both are sub-parts of greater systems (i.e. both consist of multiple variables). In the case of the test, the scale score does not stand alone; rather, it is one scale among many used to better understand a person’s psychological space (e.g. one of the Big Five scales). Any work outcome is the sum total of a system working together. Outcomes are likely to be affected by variables such as the team a person is working in, the environmental context (both micro and macro), what they are reinforced for, and so on. In a normal research design these aspects are controlled for, but when it comes to the criterion validity correlations reported by test publishers, this is unlikely to be the case.
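
To make the point concrete, here is a minimal simulation sketch (all variables and effect sizes are hypothetical, not OPRA data) of how an uncontrolled context variable, such as the quality of the team a person lands in, can inflate a simple correlation between a trait score and a work outcome:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical context variable: quality of the team a person works in.
team_quality = rng.normal(size=n)
# Trait scores are partly entangled with team quality (e.g. stronger teams
# attract or select higher scorers) -- purely illustrative, not a causal claim.
trait = 0.6 * team_quality + rng.normal(size=n)
# Performance is driven mostly by team context and only weakly by the trait.
performance = 1.0 * team_quality + 0.1 * trait + rng.normal(size=n)

def residualise(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_zero_order = np.corrcoef(trait, performance)[0, 1]
r_partial = np.corrcoef(residualise(trait, team_quality),
                        residualise(performance, team_quality))[0, 1]

print(f"zero-order r(trait, performance): {r_zero_order:.2f}")  # roughly 0.4
print(f"partial r, controlling for team:  {r_partial:.2f}")     # roughly 0.1
```

The zero-order coefficient looks respectable, yet most of it is carried by a contextual variable the technical manual never measured.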

When it comes to criterion validity, we are very much in the dark as to how psychological variables affect work outcomes in the real world, despite claims to know otherwise. As an example, consider the variable of conscientiousness. Test publisher research tells us that the higher a person’s conscientiousness, the better they are likely to perform on the job. Common sense, however, suggests that people who are excessively conscientious may not perform well, because their need to achieve a level of perfection detracts from delivering in a timely manner. Not surprisingly, recent research does not support a simple linear relationship; for many traits, too much of the trait is detrimental (Le, H., Oh, I-S., Robbins, S.B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good thing: Curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 96, 1, 113-133).
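
A curvilinear relationship of this kind is exactly what a single validity coefficient hides. The sketch below (hypothetical numbers, for illustration only) simulates an inverted-U between conscientiousness and performance; the Pearson correlation comes out near zero even though the trait strongly shapes the outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

conscientiousness = rng.normal(size=n)  # standardised trait scores
# Inverted-U: performance peaks at moderate trait levels, then declines.
performance = 1.0 - 0.5 * conscientiousness**2 + rng.normal(scale=0.5, size=n)

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r_linear = np.corrcoef(conscientiousness, performance)[0, 1]
linear_fit = np.polyfit(conscientiousness, performance, 1)
quadratic_fit = np.polyfit(conscientiousness, performance, 2)

print(f"Pearson r:               {r_linear:.2f}")  # close to zero
print(f"R^2, straight-line fit:  {r_squared(performance, np.polyval(linear_fit, conscientiousness)):.2f}")
print(f"R^2, allowing curvature: {r_squared(performance, np.polyval(quadratic_fit, conscientiousness)):.2f}")
```

A technical manual reporting only the linear coefficient would conclude the scale is useless; a model that allows curvature tells a very different story.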

This is supported by work I was involved in with Dr Paul Wood, which showed that intelligence and conscientiousness may be negatively correlated in certain circumstances, indicating that there are multiple ways of completing a task to the level of proficiency required (Intelligence compensation theory: A critical examination of the negative relationship between conscientiousness and fluid and crystallised intelligence. The Australian and New Zealand Journal of Organisational Psychology, 2, August 2009, pp. 19-29). The problem that both studies highlight is that we are looking at criterion validity in too reductionist a manner. These simple one-to-one correlations do not represent validity as the practitioner would think of the term (“is this going to help me select better?”). That question cannot be answered without thinking about the interaction between psychological variables and the unique context in which the test will be applied.

To understand how this view of validity has become an accepted norm, one must look at the various players in the field. As is often the case, a reductionist view of validity stems from associations such as the BPS, which have simplified the concept of validity to suit their requirements. Test publishers are then forced to adhere to this and clamber over each other to produce tables of validity data, and practitioners come to understand validity within this paradigm. To add insult to injury, the criterion of quality becomes having as many of these largely meaningless validity studies as possible, further entrenching this definition of validity. The fact that a closer look at these studies shows validity coefficients going off in all sorts of directions is seemingly lost, or deemed irrelevant!

The solution to this nonsense is to change the way we think about criterion validity. We need a more holistic, thorough, and systems-based approach that answers the real questions practitioners have. This would incorporate both qualitative and quantitative approaches, and is perhaps best captured in the practice of evaluation, which takes this approach seriously: http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research.

Finally, the criteria that the likes of the BPS use to evaluate tests need to change. Without this change, test publishers cannot adopt alternative practices, as their tests would not be deemed “up to standard”. So alas, I think we may be stuck with this myth for a while yet.

Effective Talent Management

There is no doubt that more and more organisations are implementing talent management strategies and frameworks. However, whilst talent management is fast becoming a strategic priority for many organisations, Collings and Mellahi (2009) suggest that the topic lacks a consistent definition and remains largely undefined. Literature reviews reveal that one reason for this is that the empirical question of “what is talent?” has been left unanswered.

The term talent has undergone considerable change over the years. It was originally used in the ancient world to denote a unit of money, before taking on the meaning of inclination or desire in the 13th century, and natural ability or aptitude in the 14th century (Tansley, 2011, as cited in Meyers, van Woerkom, & Dries, 2013). Today’s dictionary definition of talent is “someone who has a natural ability to be good at something, especially without being taught” (Cambridge Dictionaries Online, 2014). This definition implies that talent is innate rather than acquired, which holds important implications for the application of talent management in practice. For example, it influences whether we should focus more on the identification and selection of talent or on its development.

Talent management is defined as “an integrated, dynamic process, which enables organisations to define, acquire, develop, and retain the talent that it needs to meet its strategic objectives” (Bersin, 2008).

Integrated talent management implies a more holistic approach, starting with the identification of the key positions and capabilities that contribute to an organisation’s sustainable competitive advantage (Collings & Mellahi, 2009). Equipped with this information, we are better able to gather talent intelligence to help determine capability gaps, identify individual potential, and pinpoint areas for development. Talent intelligence and performance tools capable of gathering this type of information include well-validated psychometric assessments, 360° surveys, engagement surveys, and post-appointment and exit interviews. Strategic and integrated talent management is not only essential in the current market, but also provides an opportunity to be proactive rather than reactive in addressing your critical talent needs.

We suggest that key components of an effective talent management process would include:

  1. A clear understanding of the organisation’s current and future strategies.
  2. Knowledge of key positions and the associated knowledge, skills, and abilities required (job analysis and test validation projects can assist here).
  3. Objective metrics that identify gaps between the current and required talent to drive business success.
  4. A plan designed to close these gaps with targeted actions such as talent acquisition and talent development.
  5. Integration with HR systems and processes across the employee lifecycle.

What is clear is that talent management is becoming more and more important as organisations fight for top talent in a tight job market. Key to success will be identifying what ‘talent’ looks like for your organisation and working to ensure those people are fostered through the entire employment lifecycle.

 

Meyers, M. C., van Woerkom, M., & Dries, N. (2013). Talent—Innate or acquired? Theoretical considerations and their implications for talent management. Human Resource Management Review, 23(4), 305-321.

Collings, D. G., & Mellahi, K. (2009). Strategic talent management: A review and research agenda. Human Resource Management Review, 19(4), 304-313.

Bersin Associates. (2008). Talent Management Factbook.

Outplacement: What are ‘Employers of Choice’ doing in the Face of Job Cuts?

With the current downturn in the mining industry, management are making tough decisions regarding asset optimisation, cost management, risk management and profitability. Naturally, head count is being scrutinised more closely than ever. What isn’t hitting the headlines is what genuine ‘employers of choice’ are doing to support their exiting workforce and their remaining staff.

A leading global engineering consultancy recently made a corporate decision to discontinue a once-profitable consulting arm of its Australian operation. With increased competition, reduced mining demand, and eroding profit margins, a very difficult restructure resulted in the redundancy of 40 engineering roles nationally. As an employee-owned organisation that lives its company values, which include Teamwork, Caring, Integrity, and Excellence, this decision was not made easily. Throughout the decision-making process management was naturally mindful to uphold these values, and BeilbyOPRA Consulting was engaged to provide outplacement and career transition services to individuals for a period of up to three months.

The objectives of the project were to ensure that individual staff were adequately supported through this period of transition and, ultimately, that they gained alternative employment as quickly as possible.

BeilbyOPRA’s Solution:

BeilbyOPRA Consulting’s solution was led by a team of organisational psychologists and supported by consultants on site in seven locations throughout Australia on the day the restructure was communicated to employees. Consultants provided immediate support to displaced individuals through an initial face-to-face meeting, where the Career Transition program was introduced. From there, individuals chose whether or not to participate in the program, the key topics of which included:

  • Taking Stock – Understanding and effectively managing the emotional reactions to job change.
  • Assessment – Identifying skills and achievements through psychometric assessment and feedback sessions.
  • Preparation – Learning about time management skills; developing effective marketing tools; resume writing and cover letter preparation; telephone techniques.
  • Avenues to Job Hunting – Tapping into the hidden job market; responding to advertisements; connecting with recruitment consultants.
  • Interviews – Formats; preparation; how to achieve a successful interview.
  • Financial Advice – BeilbyOPRA partnered with a national financial services firm to offer participants complimentary financial advice.

 The Outcome:

Of the 40 individuals whose positions were made redundant:

  • 78% engaged in the first day of the program.
  • Of this group, 48% participated in the full program, while the remainder used only one or two of the services before securing employment.
  • 83% of those who participated in the full program gained employment within 3 months.

Some of the learning outcomes from this project for organisations include:

  • Conduct thorough due diligence before committing to the restructure.
  • Create a steering committee to oversee the redundancy process.
  • Ensure accurate, relevant and timely communication is provided to all those involved.
  • Have a trial run of the entire process.
  • Have a dedicated internal project manager to facilitate the outplacement project.
  • Ensure that the staff who remain employed with your organisation, ‘the survivors’, are informed and supported.

In summary, the value of outplacement support was best captured by the National HR Manager who stated:

“It is about supporting staff and upholding our values through good and difficult times. From a legal, cultural and branding perspective outplacement support is critical. As the market changes we will hope to re-employ some of the affected staff and some will become clients in the future.”

The Myth of Impartiality: Part 1

In last month’s post I signed off by noting that impartiality is a pervasive myth in the industry. The corollary is that assuming impartiality allows many of the myths in the industry not only to continue but to flourish. Very few in the industry can lay claim to being completely impartial, yours truly included. The industry at all levels has inherent biases that any critical psychologist must be mindful of. The bias starts with universities and research, and the myth is then passed on, often by practitioners, to the consumer (be that a person or an organisation).

A colleague recently sent me a short paper that I think is compulsory reading for anyone with a critical mind in the industry. The article uses the metaphor of Dante’s Inferno to discuss the demise of science. Keeping with the theme, I would like to use another metaphor, the biblical Four Horsemen of the Apocalypse, in reference to the myth of impartiality. These Horsemen represent the four areas where impartiality is professed but often not practised, resulting in a discipline that fails to deliver to its followers the Promised Land being touted. The Four Horsemen in this instance are: Universities, Research, Practitioners, and Human Resources.

Unlike the biblical version, destiny is in our hands, and I want to continue to present solutions rather than simply highlight problems. Thus, each of the Four Horsemen of impartiality can be fended off (or at least inflicted with a flesh wound) with some simple virtuous steps that attack the myth of impartiality. Sometimes these steps require nothing more than acknowledging that the science and practice of psychology is not impartial; other times we are called to address the partiality directly. Because of the length of the topic, I will break it into two blogs for our readers.

 

Universities

Many universities are best thought of as corporations. Their consumers are students. Like any other corporation they must market to attract consumers (students) and give students what they want (degrees). To achieve this end a factory-type process is often adopted, which in the world of education tends to mean teaching students to repeat and apply rules. Moreover, students want to at least feel that they are becoming educated, and numbers and rules provide this veneer. Finally, the sheer complexity of human behaviour means that restrictive paradigms for psychology are adopted in place of a deep critical analysis of the human condition. This in turn gives the scale required to maximise the consumer base (i.e. an easy-to-digest product, respectability, and the capacity to scale production for mass consumption).

 

For this reason psychology is often positioned purely as a science, which it is not. This thinking is reinforced by an emphasis on quantitative methodologies, which in turn feeds the myth of measurement. Papers are presented without recognising the inherent weaknesses and limitations of what is being discussed, and quality theoretical thinking is subordinated to statistics. The end result is that university is presented as an impartial place of learning, while the drivers of partiality inherent in the system are ignored. Often the rules of learning created to drive the learning process do so to meet the needs of the consumer and increase marketability, at the expense of impartial education. Those who come out of the system may fail to fully appreciate the limitations of their knowledge, and as the saying goes, ‘a little knowledge is a dangerous thing’.

 

University is the most important of the Four Horsemen of impartiality because it is within universities that many of the other myths are generated. By training young minds in one way of thinking while appearing impartial, universities create ‘truths’ in the discipline that are simply a limited way of viewing the topic. This results in myths, like the myth of measurement (and various conclusions drawn from research), becoming accepted as truth, and students graduating with faulty information or overconfidence in research findings. Those who do not attend university, but hold graduates in a degree of esteem, likewise fail to see that they too are victims of the myth of impartiality.

 

The virtuous steps

This blog is too short to address all the shortcomings of universities in the modern environment. However, if we do not address them, we will lose more and more quality researchers and teachers from our ranks [see: http://indecisionblog.com/2014/04/07/viewpoint-why-im-leaving-academia/]. What I suggest is that psychology re-embrace its theoretical roots by being more multi-disciplinary in its approach, incorporating science and statistics with the likes of philosophy and sociology.

 

The second step is to make a course in ‘critical psychology’ compulsory. This would go beyond the sociopolitical definition of critical psychology often given and focus on the issues of critique discussed in these blogs: issues of measurement, the role of theory, the problems of publish or perish, and so on. In short, a course that covers the problems inherent in the discipline, acknowledging that these are things every psychologist, applied or academic, must be mindful of. To the universities already taking these steps in a meaningful way: I commend you.

Research

The idea that research is impartial was dismissed some time ago by all but the most naïve. The problem is not so much one of deliberate distortion, although that can also be a problem, as we will see later. Rather, it is the very system of research that is not impartial.

Firstly, there is the ‘publish or perish’ mentality that pervades all those who conduct research, whether academics or applied psychologists. Researchers are forced by market drivers or university standards to publish as much as possible as ‘evidence’ that they are doing their job. The opportunity cost is simply that quality research is often in short supply. For one of the best summaries of this problem I draw your attention to Trimble, S.W., Grody, W.W., McKelvey, B., & Gad-el-Hak, M. (2010). The glut of academic publishing: A call for a new culture. Academic Questions, 23, 3, 276-286. The paper makes many powerful points, key among them that quality research takes time, which runs counter to the ‘publish or perish’ mentality. Moreover, a real contribution often goes against conventional wisdom and therefore puts one in the direct firing line of many contemporaries.

Why does this glut occur? I can think of three key reasons.

The first is that researchers are often graded by the quantity, not quality, of the work they produce. The general public tends not to distinguish between grades of journals, and academic institutions have key performance indicators that require a certain number of publications per year.

The second reason is that journals set the parameters by which research will be accepted. I have discussed this topic to death in the past, but the evidence of bias includes favouring novel findings over replication, favouring papers that reject the null hypothesis, and treating numbers, rather than logic and theory, as the criterion of supporting evidence. This in turn creates a body of research that projects itself as the body of knowledge in our discipline when in reality it is simply a fraction, and a distorted fraction at that, of how we understand human complexity (cf. Francis, G. (2014). The frequency of excess success for articles in Psychological Science. Psychonomic Bulletin & Review (in press; http://www1.psych.purdue.edu/~gfrancis/pubs.htm), 1-26).

Abstract: Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009-2012. When empirical studies succeed at a rate higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings were suppressed, the experiments or analyses were improper, or that the theory does not properly account for the data. The analyses conclude problems for 82% (36 out of 44) of the articles in Psychological Science that have four or more experiments and could be analyzed.
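
The logic behind Francis’s excess-success test can be sketched in a few lines. The numbers below are hypothetical, and the product-of-powers shortcut is a simplification of the published method, but it captures the core idea: if every experiment in a paper “works” despite modest power, something has probably gone unreported.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical paper: four two-group experiments, all reported as significant.
# Each tuple is (observed Cohen's d, per-group sample size).
studies = [(0.45, 30), (0.50, 25), (0.40, 35), (0.55, 28)]

analysis = TTestIndPower()
powers = [analysis.power(effect_size=d, nobs1=n, alpha=0.05) for d, n in studies]

# If the effects are genuine and the experiments independent, the chance that
# all of them reach significance is roughly the product of their powers.
p_all_significant = 1.0
for p in powers:
    p_all_significant *= p

print("Estimated power per study:", [round(p, 2) for p in powers])
print(f"Probability all four succeed: {p_all_significant:.3f}")
# Francis treats values well below about 0.1 as a signal that unsuccessful
# studies or analyses may have been suppressed.
```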

The third reason is funding. Where money is involved there is always a perverse incentive to distort. This occurs in universities, where funding is an issue, and in industry, where a psychologist may be brought in to evaluate an intervention. The reasons are obvious and the distortion is often more subtle than outright fabrication. For example, universities that require funding from certain beneficiaries may be inclined to undertake research that, by design, returns positive findings in a certain area, thus being viewed favourably by grants committees. The same may be true in industry, where an organisational psychology company is asked to evaluate a social programme but the terms of the evaluation are such that the real negative findings (such as opportunity cost) are hidden. This has led to calls for transparency in the discipline, such as Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R., Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343, 6166, 30-31. While the paper makes a strong argument for quality design, it also notes the trouble with perverse incentives:

Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results ( 3, 4). Social science journals do not mandate adherence to reporting standards or study registration, and few require data-sharing. In this context, researchers have incentives to analyze and present data to make them more “publishable,” even at the expense of accuracy. Researchers may select a subset of positive results from a larger study that overall shows mixed or null results (5) or present exploratory results as if they were tests of pre-specified analysis plans (6).

Then there are the outright frauds (see: http://en.wikipedia.org/wiki/Diederik_Stapel). For those who have not read this in other blogs I urge you to look at this New York Times interview. My favourite quote:

“People think of scientists as monks in a monastery looking out for the truth,” he said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”… What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman…

 

The virtuous steps

To address this lack of impartiality in research we need a collective approach. Universities that have a commitment to research must aim for quality over quantity and allow researchers the time to develop quality research designs that can be tested and examined over longer periods. Research committees must be multi-disciplinary to make sure that a holistic approach to research prevails.

We must keep funding and research at arm’s length. I don’t have an answer for how this would occur, but until it does, universities will be disincentivised from conducting fully impartial work. Journals need to be established that provide an outlet for comprehensive research. This would mean removing word limits in favour of comprehensive research designs that allow more alternative hypotheses to be tested and dismissed. Systems thinking needs to become the norm, not the exception.

Finally, and most importantly, our personal and professional ethics must be paramount. We must contribute to the body of knowledge that is critiquing the discipline for the improvement of psychology. We must be aware of any myth of impartiality in our own work and make it explicit, while trying to limit its effect, whether we are working as researchers or practitioners. And we must challenge the institutions (corporate and academic) we work for to raise their game, in incremental steps.

In Part Two, I will take a critical look at my industry, psychometric testing and applied psychology, and how the myth of impartiality is prevalent. I will also discuss how this is then furthered by those who apply our findings within Human Resource departments.

Myth 3: The Myth of Measurement

I would like to begin by apologising for not getting a myth out last month. I was working in the Philippines. Having just arrived back in Singapore I will make sure to get out two myths this month.

The first myth for April that I wish to highlight is one that some in the industry may see as almost sacrilege to challenge. The idea I wish to challenge is that I/O psychology can truly be classed as a strong measurement science. To be clear, I’m not saying that I/O is not a science, or that it does not attempt to measure aspects of human behaviour related to work. Rather, what I’m suggesting is that this is not measurement as the word is commonly used. The corollary is that to talk of measurement in our field as if it were the same as the common use of the term is to give the discipline more predictability and rigour than it deserves.

The classic paper that challenged my thinking in regard to measurement was ‘Is Psychometrics Pathological Science?’ by Joel Michell.

Abstract

Pathology of science occurs when the normal processes of scientific investigation break down and a hypothesis is accepted as true within the mainstream of a discipline without a serious attempt being made to test it and without any recognition that this is happening. It is argued that this has happened in psychometrics: The hypothesis, upon which it is premised, that psychological attributes are quantitative, is accepted within the mainstream, and not only do psychometricians fail to acknowledge this, but they hardly recognize the existence of this hypothesis at all.

In regard to measurement, Michell presents very clear and concise arguments about what constitutes measurable phenomena and why psychological attributes fail this test. While these axioms are relatively technical in parts, the upshot is that the mere fact that an attribute can be ordered does not in itself constitute measurement. Rather, ‘measurement’ requires further hurdles to be cleared. A broad example is additivity and the many associated operations required when variables (or combinations of them) are added to produce a third variable, or to provide support for an alternative equation. Psychological attributes fail on this and many other properties of measurement. As such, the basis for claims of measurement is, in my opinion, limited (or at least comes with cautions and disclaimers), and therefore the claim to being part of the ‘measurement-based science’ school is not substantiated.
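
A small illustration of why order alone is not enough (the scores below are hypothetical): if scale points are merely ordinal, any order-preserving re-scoring of the categories is equally defensible, yet such a re-scoring can reverse which group has the higher mean.

```python
import numpy as np

# Hypothetical item scores for two groups on a 1-5 ordinal scale.
group_a = np.array([3, 3, 3])
group_b = np.array([2, 4, 4])

# An order-preserving re-scoring of the same five categories
# (1 < 2 < 3 < 3.2 < 6), just as defensible if the scale is only ordinal.
rescore = {1: 1.0, 2: 2.0, 3: 3.0, 4: 3.2, 5: 6.0}
rescored_a = np.array([rescore[x] for x in group_a])
rescored_b = np.array([rescore[x] for x in group_b])

print(f"Original scoring:  mean A = {group_a.mean():.2f}, mean B = {group_b.mean():.2f}")
print(f"Monotone re-score: mean A = {rescored_a.mean():.2f}, mean B = {rescored_b.mean():.2f}")
# The ordering of the group means flips even though the rank order of every
# response is untouched -- comparing means presupposes more than order.
```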

The limitations of the discipline as a measurement science are so fundamental that they should challenge the discipline far more than they currently do. The outcome should be both a downplaying of measurement practices and a greater focus on areas such as theory building, with theories then tested using a range of alternative methodologies. The same calls have been made over the past few years, and the disquiet in the discipline is growing:

Klein, S.B. (2014). What can recent replication failures tell us about the theoretical commitments of psychology? Theory and Psychology, 1-14.

Abstract

I suggest that the recent, highly visible, and often heated debate over failures to replicate results in the social sciences reveals more than the need for greater attention to the pragmatics and value of empirical falsification. It is also a symptom of a serious issue—the under-developed state of theory in many areas of psychology.

Krause, M.S. (2012). Measurement validity is fundamentally a matter of definition, not correlation. Review of General Psychology, 16, 4, 391-400.

Abstract

….However, scientific theories can only be known to be true insofar as they have already been demonstrated to be true by valid measurements. Therefore, only the nature of a measure that produces the measurements for representing a dimension can justify claims that these measurements are valid for that dimension, and this is ultimately exclusively a matter of the normative definition of that dimension in the science that involves that dimension. Thus, contrary to the presently prevailing theory of construct validity, a measure’s measurements themselves logically cannot at all indicate their own validity or invalidity by how they relate to other measures’ measurements unless these latter are already known to be valid and the theories represented by all these several measures’ measurements are already known to be true….This makes it essential for each basic science to achieve normative conceptual analyses and definitions for each of the dimensions in terms of which it describes and causally explains its phenomena.

Krause, M.S. (2013). The data analytic implications of human psychology’s dimensions being ordinally scaled. Review of General Psychology, 17, 3, 318-325.

Abstract

Scientific findings involve description, and description requires measurements on the dimensions descriptive of the phenomena described. …Many of the dimensions of human psychological phenomena, including those of psychotherapy, are naturally gradated only ordinally. So descriptions of these phenomena locate them in merely ordinal hyperspaces, which impose severe constraints on data analysis for inducing or testing explanatory theory involving them. Therefore, it is important to be clear about what these constraints are and so what properly can be concluded on the basis of ordinal-scale multivariate data, which also provides a test for methods that are proposed to transform ordinal-scale data into ratio-scale data (e.g., classical test theory, item response theory, additive conjoint measurement), because such transformations must not violate these constraints and so distort descriptions of studied phenomena.

What these papers identify is that:

  1. We must start with good theory building, and the theory must be deep and wide enough to be tested and falsified.
  2. Construct validity is indeed important, but correlations between tests are not enough; we need agreement on the meaning of attributes (such as the Big Five).
  3. Treating comparative data (such as scores on a normal curve) as if it were rigorous measurement is at best misleading and at worst fraudulent (see the sketch below).
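
On that third point, a quick sketch using the standard normal curve (illustrative only, not tied to any particular test) shows why equal steps in percentile or norm-referenced terms are not equal amounts of whatever underlies the score:

```python
from scipy.stats import norm

# Distance, in standard-score (z) units, covered by two "equal" ten-point
# jumps in percentile rank on a normal curve.
middle_jump = norm.ppf(0.60) - norm.ppf(0.50)  # 50th -> 60th percentile
tail_jump = norm.ppf(0.99) - norm.ppf(0.89)    # 89th -> 99th percentile

print(f"50th to 60th percentile: {middle_jump:.2f} SD")  # ~0.25 SD
print(f"89th to 99th percentile: {tail_jump:.2f} SD")    # ~1.10 SD
```

Two candidates separated by ten percentile points near the middle of the distribution are far closer together than two candidates separated by ten points in the tail, so arithmetic on norm-referenced scores needs to be handled with care.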

So where does this leave the discipline? Again, as is the theme threading through all these myths, we must embrace the true scientist-practitioner model and recognise that our discipline is a craft. To rely too heavily on quantitative techniques is actually extremely limiting for the discipline, and we need alternative ways of conceptualising ‘measurement’. In this regard I’m a big fan of the evaluation literature (e.g. Reflecting on the past and future of evaluation: Michael Scriven on the differences between evaluation and social science research) as a source of alternative paradigms for solving I/O problems.

We must at the same time embrace the call for better theory building. If I/O psychology, and psychology in general, is going to make valuable contributions to the development of human thought, it will start with good, sound theory. Simply putting numbers to things does not constitute theory building.

When using numbers we must also look for alternative statistical techniques to support our work. An example is Grice’s (2011) Observation Oriented Modelling: Analysis of cause in the behavioural sciences. I looked at this work when thinking about how we assess reliability (and then statistically demonstrate it) and think it has huge implications.

Finally, when using numbers to substantiate an argument, support a theory, or find evidence for an intervention, we need to be clear on what they are really saying. Statistics can mislead, and we must be clear about what we are and are not saying, as well as the limitations of any conclusions we draw from a reliance on data. To present numbers as if they had measurement robustness is simply wrong.

In the next blog I want to discuss the myth of impartiality and why these myths continue to pervade the discipline.

Acknowledgement: I would like to acknowledge Professor Paul Barrett for his thought leadership in this space and opening my eyes to the depth of measurement issues we face. Paul brought to my attention the articles cited and I’m deeply grateful for his impact on my thinking and continued professional growth.

2014: Exploring the Myths of I/O Psychology a Month at a Time

For those who may not be aware, the ‘science of science’ is in disarray. Everything is currently under the microscope: what constitutes good science, what is indeed scientific, and the objectivity and impartiality of science. This is affecting many areas of science and has even led to a Nobel Prize winner boycotting the most prestigious journals in his field.

Nobel winner declares boycott of top science journals: Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort scientific process.

This pervasive problem in the field of science is perhaps best covered in the highly cited Economist article ‘How Science Goes Wrong’.

This questioning of science is perhaps nowhere more apparent than in our own discipline of I/O psychology. Through various forums, and the academic and non-academic press, I have become increasingly aware of the barrage of critical thinking going on in our field. The result: much of what we have taken to be true as I/O psychologists is nothing more than fable and wishful thinking.

Over this year I want to explore one myth each month with readers of our blog. They will be myths about the very heart of I/O Psychology that are often simply taken as a given.

The idea of attacking myths has long been central to OPRA’s philosophy, and there are many myth-busting posts in this forum.

To kick off this new series I wish to start with the current state of play in the field, and in particular the fundamental problem that questionable research practices often arise when there is an incentive to reach a certain outcome.

John, L.K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, OnlineFirst. A description of the study, with comments, is available at: http://bps-research-digest.blogspot.co.nz/2011/12/questionable-research-practices-are.html

This results in a fact well known to everyone in the publish-or-perish game: your best chance of getting published rests not so much on the quality of the research as on the null hypothesis being rejected (i.e. you have a ‘eureka’ moment, however arbitrary).

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0010068

Gerber, A.S., & Malhotra, N. (2008) Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37, 1, 3-30.

The upshot is that the bulk of the research in our area is trivial in nature, is not replicated, and simply does not support the claims being made. This is especially the case in psychology, where the claims often range from the exaggerated to the absurd.

Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, early view, 1-17. http://dx.doi.org/10.1016/j.jmp.2013.02.003

Scientific methods of investigation offer systematic ways to gather information about the world; and in the field of psychology application of such methods should lead to a better understanding of human behavior. Instead, recent reports in psychological science have used apparently scientific methods to report strong evidence for unbelievable claims such as precognition. To try to resolve the apparent conflict between unbelievable claims and the scientific method many researchers turn to empirical replication to reveal the truth. Such an approach relies on the belief that true phenomena can be successfully demonstrated in well-designed experiments, and the ability to reliably reproduce an experimental outcome is widely considered the gold standard of scientific investigations. Unfortunately, this view is incorrect; and misunderstandings about replication contribute to the conflicts in psychological science. …… Overall, the methods are extremely conservative about reporting inconsistency when experiments are run properly and reported fully.

The paucity of quality scientific research is leading to more and more calls for fundamental change in what qualifies as good science and research in our field.

Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R.,Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343, 6166, 30-31.

“There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. ….Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results”.

Funder, D.C., Levine, J.M., Mackie, D.M., Morf, C.C., Vazire, S., & West, S.G. (2013). Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice. Personality and Social Psychology Review, early view, 1-10.

In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”

Cumming, G. (2013). The New Statistics: Why and how. Psychological Science, early view, 1-23.

We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include pre-specification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
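
By way of illustration, here is a minimal sketch of the estimation-based reporting Cumming advocates, using hypothetical data for two groups and reporting an effect size with a confidence interval rather than just a p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical scores for two groups (e.g. trained vs untrained employees).
group_a = rng.normal(loc=0.4, scale=1.0, size=40)
group_b = rng.normal(loc=0.0, scale=1.0, size=40)

n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))

# Effect size (Cohen's d) and a 95% CI for the raw mean difference.
diff = group_a.mean() - group_b.mean()
d = diff / pooled_sd
se = pooled_sd * np.sqrt(1 / n_a + 1 / n_b)
t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"Cohen's d: {d:.2f}")
print(f"Mean difference: {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the interval keeps the uncertainty in view, which is precisely what a bare ‘p < .05’ hides.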

But it is not all doom and gloom. There are simple steps the scientist-practitioner can take to make sure that sense and sensibility are more pervasive in the field. In this regard I offer three simple key principles:

  1. Try your best to keep up to date with the literature: OPRA will do their best to publish relevant pieces that come to their attention via this blog!
  2. Don’t make exaggerated claims: Remember that no one has ‘magic beans’ as ‘magic beans’ do not exist. Dealing with human problems invariably involves complexity and levels of prediction that are far from perfect.
  3. Accept that our discipline is a craft, not a science: I/O psychology involves good theory, good science, and sensible qualitative and quantitative evidence, but it is often applied in a unique manner, as would a craftsman (or craftsperson, if such a word exists). Accepting this fact will liberate the I/O psychologist to use science, statistics, and logic to produce the solutions that the industry, and more specifically their clients, require.

Keep an eye on our blog this coming year as we explore these myths and share other relevant information and products related to our field. Let us know if something is of interest to you and we can blog about it or send you more information directly.