Category Archives: Scientist – Practitioner

Is Competition good for Science?

I have been a strong supporter of capitalism. I believe in free trade, unbridled competition, and the consumer's right to make choices in their self-interest. I have long seen laissez-faire capitalism, and the competition it breeds, as key to well-functioning economies, and competition as essential to good long-term solutions, without exception.

As noted, I have held this view for a long time, and without exception, but recently I have been deeply challenged as to whether this model is applicable to all pursuits. In particular, I am questioning whether competition is truly good for science. This is not a statement I make lightly; it comes after much reflection on the discipline and the nature of the industry I work in, both as a lecturer and a practitioner of I/O psychology.

There is a growing uprising against what many perceive as the management takeover of universities. The open-access article 'The Academic Manifesto' speaks to this view, and its opening paragraph captures the essence of the piece:

“… The Wolf has colonised academia with a mercenary army of professional administrators, armed with spreadsheets, output indicators and audit procedures, loudly accompanied by the Efficiency and Excellence March. Management has proclaimed academics the enemy within: academics cannot be trusted, and so have to be tested and monitored, under the permanent threat of reorganisation, termination and dismissal…”

While I can certainly see efficiencies that could be made in universities, and the need for accountability is high, I can't help but agree with the writers that the current KPIs don't make the grade (no pun intended). The 'publish or perish' phenomenon works counter to producing quality research that is developed over the long term.

Competition also leads to a shortage of valuable, but not newsworthy, research. This topic has been discussed previously in this blog (the-problem-with-academia-as-a-medium-of-change-or-critique), but the key point is that replication, which is at the heart of our science, is sorely lacking (Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 621).

We have even created terms such as HARKing (Hypothesising After the Results are Known) to describe how we have moved away from hypothesis testing, which is central to science, and towards defining hypotheses only after the results are in (Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (in press). HARKing's threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology).

Likewise, the continued growth in universities, and the competition between them, without a corresponding growth in jobs is being questioned in many countries. When a degree simply becomes a means to an end, does it produce the well-rounded, educated population that is required for a fully functioning, progressive society?

At a practitioner level, the folly of competition is perhaps most apparent in the likes of psychometric testing, an industry I'm acutely familiar with. Test publishers go to great lengths to differentiate themselves so as to carve a niche in the competitive landscape (are-tests-really-that-different). This is despite the fact that construct validity, the centrepiece of modern validity theory, in essence requires cross-validation. The result is a myriad of test providers spouting "mine is bigger than yours" rhetoric, to the detriment of science. Too often users are more concerned about the colours used in reports than about the science and validity of the test.

Contrast this with a non-competitive approach to science. The examples are numerous, but given the interest in psychology, take as an example the Human Brain Project. Here we have scientists collaborating around a common goal towards a target date of 2023: 112 partners in 24 countries, where the driver is not competition but the objective itself of truly expanding our knowledge of the human brain.

The US has an equivalent, called the Brain Initiative, and there is further collaboration to combine the efforts of these two undertakings. With the advances in physics that have given rise to brain-scanning technology, we now understand more than ever about the processes of the mind. This simply would not be possible under a competitive model applied to science.

My experience as a practitioner selling assessment and consulting solutions, as a lecturer who has taught across multiple universities, and as a general science buff has led me to see the downside of competition for science. Competition still has a place in my heart, but perhaps, like chardonnay and steak, the value of each may not be realised when they are combined.

The Myth of Impartiality: Part 1

In last month's post I signed off by noting that impartiality is a pervasive myth in the industry. The corollary is that assuming impartiality allows many of the myths in the industry not only to continue but to flourish. Very few in the industry can lay claim to being completely impartial, yours truly included. The industry at all levels has inherent biases that any critical psychologist must be mindful of. The bias starts with universities and research, and the myth is then passed on, often by practitioners, to the consumer (be that a person or an organisation).

A colleague recently sent me a short paper that I think is compulsory reading for anyone with a critical mind in the industry. The article uses the metaphor of Dante's Inferno to discuss the demise of science. Keeping with the theme, I would like to use the biblical metaphor of the Four Horsemen of the Apocalypse in reference to the myth of impartiality. These Horsemen represent the four areas where impartiality is professed but often not practised, resulting in a discipline that fails to deliver to its followers the Promised Land being touted. The Four Horsemen in this instance are: University, Research, Practitioners, and Human Resources.

Unlike the biblical version, destiny is in our hands, and I want to continue to present solutions rather than simply highlight problems. Thus, each of the Four Horsemen of impartiality can be defended against (or at least inflicted with a flesh wound) with some simple virtuous steps that attack the myth of impartiality. Sometimes these steps require nothing more than acknowledging that the science and practice of psychology is not impartial. Other times we are called to address the lack of impartiality directly. Because of the length of the topic, I will break this into two blogs for our readers.

Universities

Many universities are best thought of as corporations. Their consumers are students. Like any other corporation they must market themselves to attract consumers (students) and give students what they want (degrees). To achieve this end, a factory-type process is often adopted, which in the world of education often means teaching students to repeat and apply rules. Moreover, students want to at least feel that they are becoming educated, and numbers and rules provide this veil. Finally, the sheer complexity of human behaviour means that restrictive paradigms for psychology are adopted in place of a deep critical analysis of the human condition. This in turn gives the much-needed scale required to maximise the consumer base (i.e. an easy-to-digest product, respectability, and the capacity to scale production (education) for mass consumption).

For this reason psychology is often positioned purely as a science, which it is not. This positioning is reinforced by an emphasis on quantitative methodologies, which in turn props up the myth of measurement. Papers are presented without recognising the inherent weaknesses and limitations of what is being discussed. Quality theoretical thinking is subordinated to statistics. The end result is that while university is presented as an impartial place of learning, this ignores the drivers working against impartiality that are inherent in the system. Often the rules created to drive the learning process do so to meet the needs of the consumer and increase marketability, at the expense of impartial education. Those who come out of the system may fail to fully appreciate the limitations of their knowledge, and as the saying goes, 'a little knowledge is a dangerous thing'.

University is the most important of the Four Horsemen of impartiality, as it is within universities that many of the other myths are generated. By training young minds in a particular way of thinking while appearing impartial, universities create 'truths' in the discipline that are simply a limited way of viewing the topic. This results in myths, like the myth of measurement (and various conclusions drawn from research), that become accepted as truth, and students graduate with faulty information or overconfidence in research findings. Those who do not attend university, but hold graduates in a degree of esteem, likewise fail to understand that they too are victims of the myth of impartiality.

The virtuous steps

This blog is too short to address all the shortcomings of universities in the modern environment. However, if we do not address them, we will lose more and more quality researchers and teachers from our ranks [see: http://indecisionblog.com/2014/04/07/viewpoint-why-im-leaving-academia/]. What I suggest is that psychology re-embrace its theoretical roots by becoming more multi-disciplinary in its approach, integrating science and statistics with the likes of philosophy and sociology.

The second step is to make compulsory a course in 'Critical Psychology'. This would go beyond the sociopolitical definition of critical psychology often given and focus on the issues of critique discussed in these blogs: issues of measurement, the role of theory, the problems of publish or perish, and so on. In short, a course that covers the problems inherent in the discipline, acknowledging that these are things every psychologist, applied or academic, must be mindful of. To the universities already taking these steps in a meaningful way, I commend you.

Research

The idea that research is impartial was dismissed some time ago by all but the most naïve. The problem is not so much one of deliberate distortion, although this can also be a problem, as we will see later. Rather, it is the very system of research that is not impartial.

Firstly, there is the whole 'publish or perish' mentality that pervades all those who conduct research, whether academics or applied psychologists. Researchers are forced by market drivers or university standards to publish as much as possible as 'evidence' that we are doing our job. The opportunity cost is simply that quality research is often in short supply. For one of the best summaries of this problem, I draw your attention to Trimble, S.W., Grody, W.W., McKelvey, B., & Gad-el-Hak, M. (2010). The glut of academic publishing: A call for a new culture. Academic Questions, 23(3), 276-286. Many powerful points are made in this paper, chief among them that quality research takes time and is counter to the 'publish or perish' mentality. Moreover, a real contribution often goes against conventional wisdom and therefore puts one in the direct firing line of many current contemporaries.

Why does this glut occur? I can think of three key reasons.

The first is that researchers are often graded by the quantity, not quality, of the work they produce. The general public tends not to distinguish between grades of journals, and academic institutions have key performance indicators that require a certain number of publications per year.

The second reason is that journals create the parameters by which research will be accepted. I have discussed this topic to death in the past, but the evidence of bias includes favouring novel findings over replication, favouring papers that reject the null hypothesis, and treating numbers, rather than logic and theory, as the criterion of supporting evidence. This in turn creates a body of research that projects itself as the body of knowledge in our discipline when in reality it is simply a fraction, and a distorted fraction at that, of how we understand human complexity (cf. Francis, G. (2014). The frequency of excess success for articles in Psychological Science. Psychonomic Bulletin & Review (in press; http://www1.psych.purdue.edu/~gfrancis/pubs.htm), 1-26).

Abstract: Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009-2012. When empirical studies succeed at a rate higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings were suppressed, the experiments or analyses were improper, or that the theory does not properly account for the data. The analyses indicate problems for 82% (36 out of 44) of the articles in Psychological Science that have four or more experiments and could be analyzed.

The third reason is funding. Where money is involved there is always a perverse incentive to distort. This occurs in universities, where funding is an issue, and in industry, where a psychologist may be brought in to evaluate an intervention. The reasons are obvious, and the effects are often more subtle than straight distortion. For example, universities that require funding from certain beneficiaries may be inclined to undertake research that, by design, returns positive findings in a certain area, thus being viewed favourably by grants committees. The same may be true in industry, where an organisational psychology company is asked to evaluate a social programme but the terms of the evaluation are such that the real negative findings (such as opportunity cost) are hidden. This has led to calls for transparency in the discipline, such as in Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R., Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343(6166), 30-31. While the paper makes a strong argument for quality design, it also notes the trouble with perverse incentives:

Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results ( 3, 4). Social science journals do not mandate adherence to reporting standards or study registration, and few require data-sharing. In this context, researchers have incentives to analyze and present data to make them more “publishable,” even at the expense of accuracy. Researchers may select a subset of positive results from a larger study that overall shows mixed or null results (5) or present exploratory results as if they were tests of pre-specified analysis plans (6).

Then there are the outright frauds (see: http://en.wikipedia.org/wiki/Diederik_Stapel). For those who have not read about this on other blogs, I urge you to look at the New York Times interview with Stapel. My favourite quote:

“People think of scientists as monks in a monastery looking out for the truth,” he said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”… What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman…


The virtuous steps

To address this issue of the impartiality of research we need a collective approach. Universities that have a commitment to research must aim for quality over quantity and allow researchers the time to develop quality research designs that can be tested and examined over longer periods. Research committees must be multi-disciplinary to ensure that a holistic approach to research prevails.

We must have an arm's-length relationship between funding and research. I don't have an answer for how this would occur, but until it does, universities will be disincentivised from conducting fully impartial work. Journals need to be established that provide an outlet for comprehensive research. This would see the removal of word limits in favour of comprehensive research designs that allow more alternative hypotheses to be tested and dismissed. Systems thinking needs to become the norm and not the exception.

Finally, and most importantly, our personal and professional ethics must be paramount. We must contribute to the body of knowledge that critiques the discipline for the improvement of psychology. We must make sure that we are aware of any myth of impartiality in our own work and make it explicit, while trying to limit its effect on that work, whether as researchers or practitioners. We must challenge the institutions (corporate and university) we work for to raise their game, in incremental steps.

In Part Two, I will take a critical look at my own industry, psychometric testing and applied psychology, and at how prevalent the myth of impartiality is there. I will also discuss how this is then furthered by those who apply our findings within Human Resources departments.

What is Ethical Behaviour?

In our work and personal lives most of us make hundreds of decisions every week, many of which involve some degree of deciding the ‘right’ thing to do. As such, we are often faced with complex situations in which we have to determine the most ethical way to proceed.  But what is ethical behaviour? I think most of us would agree that in the majority of situations there is not simply one ‘right’ and one ‘wrong’ way of doing things. In fact, most situations we face are very complex and involve multiple factors we need to consider in order to decide on a course of action that we see as ‘ethical’. So how do we make these complex decisions?

Often it is simpler than you might think. While there are many models of ethical decision-making which outline a step-by-step process for making an ethical decision, the reality is that most of us would struggle in many situations to find the time and resources to engage in such a process. So how do we make these decisions? The reality is that most of us are experts in what we do. We have worked 40+ hours a week in our jobs for a number of years, and have generally gained some form of expertise. As a result, when we are faced with these types of complex decisions, where there is often limited information and time with which to choose a course of action, we often use a degree of expert intuition, and may not engage in a rational, step-by-step process for deciding the best way to behave. So does this mean we are cutting corners? Not necessarily.

There is a time and a place both for a logical, step-by-step process of making ethical decisions and for a more intuitive, "what do I think is right", process. Sometimes making ethical decisions will involve a step-by-step process, by which you consider each possible course of action and its potential consequences, weighing up the best way forward. These occasions tend to be when time and resources are sufficient, when you need to justify your decision to someone of higher status such as a professional board, or when you are new to an area of work and have not yet developed expertise. On the other hand, when you are an expert, do not have all of the information available, time is limited, and there are a number of factors to be considered, relying on intuition is more likely.

Whichever process you engage in, there are a range of factors that may impact on the course of action you choose. Are there things about yourself (e.g. age, gender, educational background) that might impact on what you define as ethical behaviour? Are there situational factors such as the organisation you work for that could play a part? All of us could view the same situation, engage in a rational decision-making process, and still come to a different conclusion about what is the most ethical way to behave. So, the key is knowing yourself, understanding your situation, and taking a moment to consider how these factors might impact on the course of action you take when deciding the ‘right’ thing to do. Why not take that moment now?

Never Forget Your Occam’s Razor When Travelling!

In my previous role in the UK, I was often confronted by very complex measures of psychological traits. These included the likes of multi-faceted competency models, complex appraisal forms, and measures of engagement with more scales than a grand piano.

Having factor-analysed the results of many of these models, I can say that I rarely see them hold up. What emerges instead is a far simpler structure of a few key constructs that account for most of the variability in job performance. I'm always reminded of the law of parsimony: simple models are often the finest.
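
For readers who want to try this kind of check themselves, here is a minimal sketch in Python (using NumPy and purely simulated ratings; the two-construct structure and all numbers are illustrative assumptions, not results from any real instrument). It generates item scores, inspects the eigenvalues of their correlation matrix, and applies the Kaiser criterion (eigenvalue greater than 1) to ask how many constructs the data actually support.

```python
import numpy as np

# Illustrative only: simulate 500 respondents on 12 "competency" items that
# in truth reflect just two underlying constructs plus noise.
rng = np.random.default_rng(42)
n = 500
construct_a = rng.normal(size=(n, 1))
construct_b = rng.normal(size=(n, 1))
items = np.hstack([
    construct_a + rng.normal(scale=0.8, size=(n, 6)),   # items 1-6
    construct_b + rng.normal(scale=0.8, size=(n, 6)),   # items 7-12
])

# Eigenvalues of the item correlation matrix, largest first.
eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Factors suggested (Kaiser criterion):", int((eigenvalues > 1).sum()))
```

Despite the twelve item labels, only two eigenvalues exceed 1 in data like this; the remaining 'scales' add labels, not information.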

In I/O psychology, we have examples where this is the case. Professor Paul Barrett, one of the most influential people in my career, was instrumental in creating a single psychometric tool that, while never fully commercialised, was a real innovation in our field.

For those who may have forgotten the role of parsimony, I draw your attention to some older papers that are often overlooked in this field. These indicate that simplicity can be more beneficial than complexity when measuring human behaviour.

Scarpello and Campbell (1983), in Personnel Psychology, looked at whether a single-item (1-5 scale) global measure of job satisfaction was equivalent to the sum of facet satisfactions. They concluded that the whole is more complex than the sum of the parts, and that the global measure may in fact be more inclusive than facet measures.

Wanous, Reichers, and Hudy (1997), in the Journal of Applied Psychology, evaluated single-item measures of job satisfaction and concluded that they can be used instead of facet measures in some instances, particularly where face validity, cost, and time are practical considerations. They suggested a test-retest reliability of around .70. A subsequent article by Wanous and Hudy (2001) in Organizational Research Methods reached a similar conclusion, this time looking at teaching effectiveness but with many of the same arguments.
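
To make the single-item versus facet-composite comparison concrete, here is a minimal simulation in Python (this is not the method used in the cited papers; the sample size, noise levels, and five-facet structure are all illustrative assumptions). A single latent 'overall satisfaction' drives one global rating and five facet ratings, and we simply correlate the global item with the facet composite.

```python
import numpy as np

# Illustrative simulation only: a latent "overall satisfaction" drives one
# global item and five facet items.
rng = np.random.default_rng(7)
n = 1000
latent = rng.normal(size=n)

global_item = latent + rng.normal(scale=0.7, size=n)            # single overall rating
facets = latent[:, None] + rng.normal(scale=0.9, size=(n, 5))   # five facet ratings
facet_composite = facets.mean(axis=1)

r = np.corrcoef(global_item, facet_composite)[0, 1]
print(f"Correlation between the global item and the facet composite: {r:.2f}")
```

With noise in this range the single item tracks the composite fairly closely, which illustrates why a well-worded global item can be an acceptable substitute in the situations the cited authors describe.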

Playing Games Makes You A Better Person?

So often in society, obvious conclusions are drawn from complex interactions. A classic case of this is the supposed link between video games and a rise in violent crime. Firstly, the rise in crime is far more a media phenomenon than a reality. In many countries crime is decreasing, and certainly, looked at in a historical context, it is far lower than was previously the case. The relationship with violent video games is clearly contentious.

The debate over crime rates notwithstanding, the converse logic of video games is something that until recently had not been tested: do pro-social video games result in pro-social behaviour? A study reviewed in The Economist (2009) and published in the Journal of Experimental Social Psychology demonstrated that pro-social video games can indeed lead to pro-social behaviour. This was further supported by correlational research outside the laboratory looking at the behaviour of gamers from Asia. Those involved in pro-social games were more likely to help, share and empathise than those involved in more violent and self-serving games. This finding is supported by work by Greitemeyer and Osswald, which found that the positive impact of pro-social gaming is independent of whether someone is a 'nice person' (i.e. pro-social games produce positive behaviour regardless of the person's inclination to be nice).

In reading this article my thoughts naturally drifted toward business applications. Given the rise in awareness of psychopathic managers and bullying behaviour at work, a suitable intervention may well be pro-social video games. Given the subtle manner in which games may affect cognition, their potential to have a more positive impact on behaviour at work than a classic training intervention may indeed be something to explore.

New Zealand: The Home Of Culturally Appropriate Testing

New Zealand is a fantastic country. This statement will come as no surprise to many, but we Kiwis often take for granted what a great place New Zealand is to both live and work.

One of the many things that make New Zealand great is the Treaty of Waitangi and the relationship between Maori and non-Maori that is integral to New Zealand legislation. In this regard we are the envy of the world and a shining example of proactively working towards a unified country that truly gives political and economic power to the indigenous people of the land.

I contrast this, for example, with countries such as Australia that have a poor record with their Aboriginal peoples. Moreover, New Zealand's approach of integrating ethnicities while maintaining their identities is almost unique in the world. In France, for example, it is forbidden by law to collect statistics referring to 'racial or ethnic origin'.

For I/O psychologists, understanding differences across ethnic groups is a vital part of the role. I personally have been involved in examining the adverse impact of cognitive ability and personality assessments across ethnic groups, and I see this as a crucial part of being an ethical psychologist. In New Zealand this is demanded by all organisations that are committed to maintaining testing standards. If we contrast this with Europe, and the French example, we can once again see just how far ahead a country like New Zealand really is when it comes to the discipline of I/O psychology.

A Big Theory

For my last blog on the psychological articles in The Economist, I would like to draw people's attention away from I/O psychology and towards more fundamental science. For those that don't know, Stephen Hawking is attempting to find a unifying theory of the universe that connects the theories of the very large (such as gravity) with the theories we use to explain the very small (e.g. quantum physics). The search for a unifying theory of physics is a holy grail, and it reminds me how far psychology, let alone I/O, needs to develop to become more than merely a collection of 'random studies'.

One attempt at this kind of unifying endeavour, however, is being progressed by researchers such as Dr Lichtman and Dr Brenner in the emerging science of connectomics (The Economist, April 11, 2009). Connectomics is the study of nerve cells and the connections between them. The goal is to produce a complete circuit diagram of the brain so that the most complicated object in the known universe can be better understood.

In looking at the scale of this type of project, I'm completely in awe of what must be considered a far more fundamental science than my own discipline. I'm also reminded of the hotchpotch of science that is I/O psychology and the gap between our discipline and real breakthroughs in scientific discovery. Uncovering that bright people who work hard perform better at work is hardly likely to make it into the next edition of Nature, yet it is seen as a profound and far-reaching finding in I/O psychology.

Applied psychology has been my whole working life and I would not change this for anything. I would, however, suggest that we need to be thinking like real scientists and looking for our 'systems' if we are really going to unlock the secrets of human behaviour at work.

That was my last blog in the series on Economist articles related to I/O. Next time you are waiting at an airport or browsing a magazine store, I strongly suggest you pick up the current issue of The Economist, which is guaranteed to have at least one gem of knowledge applicable to I/O psychology.

How Scientific is Peer Review?

In my contemplation of the purity of scientific pursuit, I came across an article by Frank Furedi ('Science's peer system needs a review', The Australian, 20 February 2010) that punctures a myth well known to researchers and central to the mysticism of science. The reality: the esoteric peer review system is neither impartial nor independent, and it is often a major hindrance to research that could really expand disciplines.

Furedi highlights a range of problems, including:
1. Scientists can use the editorial process to slow down publication of views that counter their own.
2. The review process stifles innovative methodologies that fall outside the commonly accepted paradigms of a discipline.
3. Rivals are often not best placed to critique the work of others.
4. Peer review is often a 'mates club' of mutually accepted publications among journal editors and friends.
5. Advocacy science often leads to publication of articles based on perceived societal impact, not scientific merit.
6. Peer review creates a standard that stifles free debate through claims that anything not peer reviewed is not valuable.

This short article accompanies a growing body of work questioning the robustness of the scientific industry:

Charlton, B.G. (2009). Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity. Medical Hypotheses, 72(3), 237-243.

Horrobin, D.F. (2001). Something Rotten at the Core of Science? Trends in Pharmacological Sciences, 22(2), 51-52.

This should not be interpreted to mean that peer review should be discarded. A lack of peer review can lead to damaging consequences. An example is the editor of Medical Hypotheses, an oddity in the world of scientific publishing because it did not practise peer review, who lost his job over the publication of a paper claiming that HIV does not cause AIDS. Bruce Charlton, who succeeded the journal's founder, David Horrobin (yes, one and the same!), in 2003, decided on his own what got published, although he occasionally consulted another scientist, and manuscripts were only very lightly edited.

The point is that peer review is not a stamp of credibility. Nor is it inherently good for science, or a guarantee that science will be good. The quality of science, or rather of scientific work, depends far more on the quality of the logic and the simplicity and elegance of the supporting evidence (statistical or not). With this in mind, I draw your attention to a recent short piece by Christopher Peterson and Nansook Park in The Psychologist, May 2010, 23(5):

Abstract
A special issue of Perspectives on Psychological Science, published by the American Psychological Society, invited opinions from a variety of psychologists, including us (Diener, 2009). Our advice was to keep it simple (Peterson, 2009). We offered this advice not because simplicity is a virtue, although it is (Comte-Sponville, 2001). Rather, the evidence of history is clear that the research studies with the greatest impact in psychology are breathtakingly simple in terms of the questions posed, the methods and designs used, the statistics brought to bear on the data, and the take-home messages.

Simplicity, logic and empirical data are the foundation of quality science, not peer acceptance.

Job Satisfaction and Work Productivity

A current hot topic in the world of I/O is the relationship between happiness at work and work productivity. Everyone can see the ‘human-benefit’ of having a happy workplace, and the idea that increased job satisfaction and a harmonious workplace are inherently good things makes obvious sense.

However, the link between job satisfaction and work productivity is far less clear. Historically, the relationship has been less than definitive. Vroom (1964) reported an average correlation of .14 between satisfaction and performance. Iaffaldano and Muchinsky (1985) backed this up, suggesting that the average correlation was around .15. More recently, a definitive meta-analysis by Tim Judge and his colleagues (2001) reported an uncorrected average correlation of .18 and a corrected correlation of .30. These studies have fuelled the argument that there is little association between job satisfaction and job performance, at least at the individual level.
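
The gap between the uncorrected and corrected figures reflects the standard psychometric correction for attenuation, which estimates what the correlation would be if satisfaction and performance were measured without error. A minimal sketch in Python (the reliability values here are purely illustrative assumptions, not the ones used by Judge and colleagues):

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Classic correction for attenuation: estimated true-score correlation."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative numbers only: an observed r of .18 with measure reliabilities
# of about .75 and .50 is corrected to roughly .29.
print(round(disattenuate(0.18, 0.75, 0.50), 2))  # -> 0.29
```

Whether such corrections, which rest on assumed reliabilities, tell us much about the practical strength of the relationship is itself part of the debate.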

In a keynote address at the NZPsS conference, Cynthia Fisher from Bond University (Queensland) delivered an update on this view. (Thanks to Professor Michael O’Driscoll, Waikato University, for posting a summary of the address on I/O Net). Cynthia presented an array of recent evidence which might cause us to rethink some of the assumptions we have made about the relationship between satisfaction and performance. The summary appears to be that while job satisfaction may have limited impact on task performance, it does lead to a higher level of contextual performance (e.g. positive work behaviours); attitudes do matter in terms of people’s job performance.

While important, I believe this sidesteps the core questions organisations often ask, like ‘will this increase our revenue?’ Recent attempts to link job satisfaction to productivity have likewise sidestepped this intrinsic question. A highly functioning workplace is not the same as one that is generating large profits for shareholders.

Another keynote address (UK) was brought to my attention by Professor Paul Barrett:

Peccei, R. (2004). Human Resource Management and the search for the happy workplace. Inaugural Addresses Research in Management Series, Erasmus Research Institute of Management: http://publishing.eur.nl/ir/repub/asset/1108/EIA-2004-021-ORG.pdf

Abstract
The analysis of the impact of human resource (HR) practices on employee well-being at work is an important yet relatively neglected area of inquiry within the field of human resource management (HRM). In this inaugural address, the main findings from ongoing research based on data from the 1998 British Workplace Employee Relations Survey (WERS98) are presented. These suggest that the HR practices that are adopted by organisations have a significant impact on the well-being of their workforces and that this impact tends, on the whole, to be more positive than negative. The effects, however, are more complex than is normally assumed in the literature. In particular, preliminary results indicate that the constellation of HR practices that help to maximise employee well-being (i.e. that make for happy workplaces), are not necessarily the same as those that make up the type of ‘High Performance Work Systems’ commonly identified in the literature. This has important theoretical, policy and ethical implications for the field of HRM. These are discussed along with important directions for future research.

Like many areas of I/O psychology, the relationship between job satisfaction and work performance is a complex system. Our research in the area is at times clichéd and, like many so-called great findings in psychology, is occasionally nothing more than old-fashioned common sense. A classic example of this is a paper published last year:

Harter, J.K., Schmidt, F.L., Asplund, J.W., Killham, E.A., & Agrawal, S. (2010). Causal impact of employee work perceptions on the bottom line of organizations. Perspectives on Psychological Science, 5(4), 378-389.

Selected passages from the abstract border on the obvious and demonstrate how far our discipline needs to go if it is to come to an understanding of complex relationships. Some of the more 'insightful' comments include:

• ‘Perceptions of work conditions have proven to be important to the well-being of workers’.
• ‘Customer loyalty, employee retention, revenue, sales, and profit are essential to the success of any business’.
• ‘Managerial actions and practices can impact employee work conditions and employee perceptions of these conditions, thereby improving key outcomes at the organizational level’.

As is often the case, a truly insightful discussion of this topic is found not in the psychological literature but in an issue of The Economist (July 2010). An article noted that wellness programmes are now part of the corporate landscape, with more than half of America's largest companies offering smoking-cessation and fitness programmes. Over a third have gyms, and canteens are termed 'nutritional centres'. This focus on wellness is also extending to mental health programmes, driven on two fronts: doctors note that over a third of the health-related issues they see have a psychological basis, and management gurus are now talking much more about the psychological impacts of the modern workforce. The article goes on to argue that the rationale for these interventions is as much financial as it is psychological. Mental ill-health has been estimated to cost British employers $26 billion a year. American research suggests presenteeism (being at work but not really functioning) costs twice as much as absenteeism.

Taking the argument one step further, The Economist states that the job satisfaction and work performance equation requires deeper analysis. What does this body of work mean for the distinction between employees' private and working lives? How much responsibility should an employer take for staff wellbeing? What is the scientific basis for many of the interventions? If employers are to introduce measures to improve wellness, how do they know which are most likely to bring about the desired outcomes? Is the focus on wellness necessarily good for productivity?

The relationship between job satisfaction, worker well-being and work performance is complicated. Talent often sits at the extremes of the bell curve and does not always naturally fit the classical model of wellness. Understanding the various interactions is a valid line of research for psychology, but one that will not be short-circuited by clichés. We must never lose sight of the core function of business – which is to make, distribute and reinvest profit – and job satisfaction must impact this core function if it is to be a meaningful psychological construct embraced by the industry.

Is Meta-Analysis All it is Cracked Up to Be?

As a student of psychology, I was taught that meta-analysis was superior to all other forms of research. However, this view has been brought into question by a series of papers such as: Hennekens, C.H., & DeMets, D. (2009). The need for large-scale randomized evidence without undue emphasis on small trials, meta-analyses, or subgroup analyses. Journal of the American Medical Association, 302(21), 2361-2362.

Epidemiologist Charles Hennekens and biostatistician David DeMets have pointed out that combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question: "Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding variables."
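
For readers unfamiliar with the mechanics being criticised here, the core of most meta-analyses is inverse-variance weighting: each study's effect is weighted by the precision of its estimate and the weighted average is reported as the pooled effect. A minimal sketch in Python, with purely illustrative numbers:

```python
# Minimal fixed-effect (inverse-variance) pooling, the basic mechanic by which
# a meta-analysis combines small studies. Effect estimates and standard errors
# below are illustrative only.
studies = [
    # (effect estimate, standard error)
    (0.30, 0.15),
    (0.10, 0.20),
    (0.45, 0.25),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# The pooled estimate is only as trustworthy as the studies feeding it: if the
# small negative or null studies never got published, no amount of weighting
# can remove that bias.
```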

Pace, V.L., & Brannick, M.T. (2010). How similar are personality scales of the "same" construct? A meta-analytic investigation. Personality and Individual Differences, 49, 669-676.

Abstract
An underlying assumption of meta-analysis is that effect sizes are based on commensurate measures. If measures across studies do not have the same empirical meaning, then our theoretical understanding of relations among variables will be clouded. Two indicators of scale commensurability were examined for personality measures: (1) correlations among different scales with similar labels (e.g., different measures of extraversion) and (2) score reliability for different scales with similar labels. First, meta-analyses of correlations between many commonly used scales were computed, both including and excluding scales classified as non-Five-Factor Model measures. Second, subgroup meta-analyses of reliability were examined, with specific personality scales as moderators. Results reveal that assumptions of commensurability among personality measures may not be entirely met. Whereas meta-analyzed reliability coefficients did not differ greatly, scales of the 'same' construct were only moderately correlated in many cases. Some improvement to this meta-analytic correlation occurred when measures were limited to those based on the Five-Factor Model. Questions remain about the similarity of personality construct conceptualization and operationalization.

Levine, T., Asada, K.J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76(3), 286-302.

Abstract
Meta-analysis involves cumulating effects across studies in order to quantitatively summarize existing literatures. A recent finding suggests that the effect sizes reported in meta-analyses may be negatively correlated with study sample sizes. This prediction was tested with a sample of 51 published meta-analyses summarizing the results of 3,602 individual studies. The correlation between effect size and sample size was negative in almost 80 percent of the meta-analyses examined, and the negative correlation was not limited to a particular type of research or substantive area. This result most likely stems from a bias against publishing findings that are not statistically significant. The primary implication is that meta-analyses may systematically overestimate population effect sizes. It is recommended that researchers routinely examine the n x r scatter plot and correlation, or some other indication of publication bias, and report this information in meta-analyses.
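
The n x r check recommended above is straightforward to run. A minimal sketch in Python (the study sample sizes and effect sizes are hypothetical, purely to illustrate the diagnostic):

```python
import numpy as np

# Hypothetical (n, r) pairs for studies entering a meta-analysis: sample size
# and observed effect size for each study.
studies = np.array([
    (30, 0.45), (45, 0.38), (60, 0.33), (90, 0.25),
    (150, 0.20), (250, 0.15), (400, 0.12),
])

n, r = studies[:, 0], studies[:, 1]
print("Correlation between n and r:", round(float(np.corrcoef(n, r)[0, 1]), 2))
# A strongly negative value is a warning sign: small studies may only be
# getting published when they happen to show large effects, inflating the
# pooled estimate.
```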

While not entirely refuting the value of meta-analysis, these papers once again call into question commonly held views within our discipline. Moreover, they demonstrate that sophisticated data-combining methodologies are no substitute for a quality large-scale study, and assuming otherwise may lead one to erroneous conclusions.