
The myth that criterion-related validity is a simple correlation between test score and work outcome

This is a myth that can be discussed with relative simplicity: criterion validity is far more than the simple correlations found in technical manuals. Validity in this sense is better described as whether an assessment can deliver a proposed outcome in a given setting with a given group. Criterion validity thus asks: ‘does this test predict a real-world outcome in a real-world setting?’

Assessments can add value, as discussed last month, but we need to think more deeply about criterion-related validity if this value is to be demonstrated more effectively. Criterion validity is too often established by correlating a scale on a test (e.g. extroversion) with an outcome (e.g. training performance). The problem is that neither the scale score nor the outcome exists in a vacuum. Both are sub-parts of greater systems (i.e. both consist of multiple variables). In the case of the test, the scale score does not stand alone. Rather, it is one scale among many used to better understand a person’s psychological space (e.g. one of the big five scales). Any work outcome is the sum total of a system working together. Outcomes are likely to be affected by variables such as the team a person is working in, the environmental context (both micro and macro), what they are reinforced for, and so on. In a normal research design these aspects are controlled for, but in the criterion validity correlations reported by test publishers this is unlikely to be the case.

When it comes to criterion validity, we are very much in the dark as to how psychological variables affect work outcomes in the real world, despite claims to know otherwise. As an example, consider the variable of conscientiousness. Test publisher research tells us that the higher a person’s conscientiousness, the better they are likely to perform on the job. Common sense, however, would tell us that people who are excessively conscientious may not perform well, because their need to achieve a level of perfection detracts from timely delivery. Not surprisingly, recent research does not support the idea of a simple linear relationship: for many traits, too much of the trait is detrimental. See Le, H., Oh, I.-S., Robbins, S. B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good thing: Curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 96(1), 113-133.
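To make the curvilinear point concrete, here is a minimal sketch in Python (the data are entirely simulated for illustration and are not drawn from the Le et al. study) showing how a simple validity coefficient can understate an inverted-U relationship that a quadratic fit picks up:

    import numpy as np

    rng = np.random.default_rng(42)
    conscientiousness = rng.normal(0, 1, 500)
    # Simulated inverted U: performance peaks at moderate trait levels
    performance = (-0.5 * conscientiousness**2 + 0.3 * conscientiousness
                   + rng.normal(0, 1, 500))

    # The simple correlation a technical manual would report looks weak...
    r = np.corrcoef(conscientiousness, performance)[0, 1]
    print(f"linear r = {r:.2f}")

    # ...but comparing linear and quadratic fits reveals the curvilinear effect
    for degree in (1, 2):
        coefs = np.polyfit(conscientiousness, performance, degree)
        pred = np.polyval(coefs, conscientiousness)
        r2 = 1 - ((performance - pred) ** 2).sum() / ((performance - performance.mean()) ** 2).sum()
        print(f"degree {degree} fit: R^2 = {r2:.2f}")

A linear model sees only the weak positive trend; the quadratic term captures the ‘too much of a good thing’ effect.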

This is supported by work I was involved in with Dr. Paul Wood, which showed that intelligence and conscientiousness may be negatively correlated in certain circumstances, indicating that there are multiple ways of completing a task to the level of proficiency required (Intelligence compensation theory: A critical examination of the negative relationship between conscientiousness and fluid and crystallised intelligence. The Australian and New Zealand Journal of Organisational Psychology, 2, August 2009, pp. 19-29). The problem that both studies highlight is that we are looking at the concept of criterion validity in too reductionist a manner. These simple one-to-one correlations do not represent validity as the practitioner would think of the term (“is this going to help me select better?”). That question cannot be answered by a lone coefficient, because answering it requires thinking about the interaction between psychological variables and the unique context in which the test will be applied.

To understand how this view of validity became an accepted norm, one must look at the various players in the field. As is often the case, a reductionist view of validity stems from associations such as the BPS, who have simplified the concept of validity to suit their requirements. Test publishers are then forced to adhere to this, and clamour over each other to produce tables of validity data. Practitioners, in turn, come to understand validity within this paradigm. To add insult to injury, the measure of quality becomes having as many of these seemingly meaningless validity studies as possible, further entrenching this definition of validity. The fact that a closer look at these studies shows validity coefficients going off in all sorts of directions is seemingly lost, or deemed irrelevant!

The solution to this nonsense is to change the way we think about criterion validity. We need a more holistic, thorough, and systems-based approach that answers the real questions practitioners have. This would incorporate both qualitative and quantitative methods, and is perhaps best captured in the practice of evaluation, which takes this approach seriously: http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research.

Finally, for this to happen, the criteria that the likes of the BPS use to evaluate tests need to change. Without this change, test publishers cannot adopt alternative practices, as their tests would not be deemed “up to standard”. So alas, I think we may be stuck with this myth for a while yet.


Effective Talent Management

There is no doubt that more and more organisations are implementing talent management strategies and frameworks. However, whilst talent management is fast becoming a strategic priority for many organisations, Collings and Mellahi (2009) suggest that the topic still lacks a consistent definition. Literature reviews reveal that one reason for this is that the empirical question of “what is talent?” has been left unanswered.

The term talent has undergone considerable change over the years. It was originally used in the ancient world to denote a unit of money, before taking on a meaning of inclination or desire in the 13th century, and natural ability or aptitude in the 14th century (Tansley, 2011, as cited in Meyers, van Woerkom, & Dries, 2013). Today’s dictionary definition of talent is “someone who has a natural ability to be good at something, especially without being taught” (Cambridge Dictionaries Online, 2014). This definition implies that talent is innate rather than acquired, which holds important implications for the application of talent management in practice. For example, it influences whether we should focus more on the identification and selection of talent or on its development.

Talent management is defined as “an integrated, dynamic process, which enables organisations to define, acquire, develop, and retain the talent that it needs to meet its strategic objectives” (Bersin, 2008).

Integrated talent management implies a more holistic approach, starting with the identification of the key positions and capabilities that contribute to an organisation’s sustainable competitive advantage (Collings & Mellahi, 2009). Equipped with this information, we are better able to gather talent intelligence to determine capability gaps, identify individual potential, and highlight areas for development. Talent intelligence and performance tools capable of gathering this type of information include well-validated psychometric assessments, 360° surveys, engagement surveys, and post-appointment and exit interviews. Strategic and integrated talent management is not only essential in the current market, but provides an opportunity to be proactive rather than reactive in addressing your critical talent needs.

We suggest that key components of an effective talent management process would include:

  1. A clear understanding of the organisation’s current and future strategies.
  2. Knowledge of key positions and the associated knowledge, skills, and abilities required (job analysis and test validation projects can assist here).
  3. Objective metrics that identify gaps between the current and required talent to drive business success.
  4. A plan designed to close these gaps with targeted actions such as talent acquisition and talent development.
  5. Integration with HR systems and processes across the employee lifecycle.

What is clear is that talent management is becoming ever more important as organisations fight for top talent in a tight job market. Key to success will be identifying what ‘talent’ looks like for your organisation and working to ensure talented people are fostered through the entire employment lifecycle.

 

Meyers, M. C., van Woerkom, M., & Dries, N. (2013). Talent—Innate or acquired? Theoretical considerations and their implications for talent management. Human Resource Management Review, 23(4), 305-321.

Collings, D. G., & Mellahi, K. (2009). Strategic talent management: A review and research agenda. Human Resource Management Review, 19(4), 304-313.

Bersin Associates. (2008). Talent Management Factbook.

Outplacement: What are ‘Employers of Choice’ doing in the Face of Job Cuts?

With the current downturn in the mining industry, management are making tough decisions regarding asset optimisation, cost management, risk management and profitability. Naturally, head count is being scrutinised more closely than ever. What isn’t hitting the headlines is what genuine ‘employers of choice’ are doing to support their exiting workforce and their remaining staff.

A leading global engineering consultancy recently made a corporate decision to discontinue a once-profitable consulting arm of its Australian operation. With increased competition, reduced mining demand, and eroding profit margins, a very difficult restructure resulted in the redundancy of 40 engineering roles nationally. As an employee-owned organisation that lives its company values, which include Teamwork, Caring, Integrity, and Excellence, this decision was not made easily. Throughout the decision-making process management was naturally mindful to uphold these values, and BeilbyOPRA Consulting was engaged to provide outplacement and career transition services to individuals for a period of up to three months.

The objectives of the project were to ensure that individual staff were adequately supported through this period of transition and, ultimately, helped to gain alternative employment as quickly as possible.

BeilbyOPRA’s Solution:

BeilbyOPRA Consulting’s solution was led by a team of organisational psychologists and supported by consultants on site in seven locations throughout Australia on the day the restructure was communicated to employees. Consultants provided immediate support to displaced individuals through an initial face-to-face meeting, where the Career Transition program was introduced. From there, individuals chose whether or not to participate in the program, the key topics of which included:

  • Taking Stock – Understanding and effectively managing the emotional reactions to job change.
  • Assessment – Identifying skills and achievements through psychometric assessment and feedback sessions.
  • Preparation – Learning about time management skills; developing effective marketing tools; resume writing and cover letter preparation; telephone techniques.
  • Avenues to Job Hunting – Tapping into the hidden job market; responding to advertisements; connecting with recruitment consultants.
  • Interviews – Formats; preparation; how to achieve a successful interview.
  • Financial Advice – BeilbyOPRA partnered with a national financial services firm to offer participants complimentary financial advice.

The Outcome:

Of the 40 individuals whose positions were made redundant:

  • 78% engaged with the program on the first day.
  • Of this group, 48% participated in the full program, while the remainder used only one or two of the services before securing employment.
  • 83% of those who participated in the full program gained employment within 3 months.

Some of the learning outcomes from this project for organisations include:

  • Conduct thorough due diligence before committing to the restructure.
  • Create a steering committee to oversee the redundancy process.
  • Ensure accurate, relevant and timely communication is provided to all those involved.
  • Have a trial run of the entire process.
  • Have a dedicated internal project manager to facilitate the outplacement project.
  • Ensure that the staff who remain employed with your organisation, ‘the survivors’, are informed and supported.

In summary, the value of outplacement support was best captured by the National HR Manager who stated:

“It is about supporting staff and upholding our values through good and difficult times. From a legal, cultural and branding perspective outplacement support is critical. As the market changes we will hope to re-employ some of the affected staff and some will become clients in the future.”

Culture Surveys and Your Organisation

Measuring culture and obtaining data can provide valuable information for an organisation of any size. How this data is positioned, analysed, and used, however, is where the real value is found. Schneider, Ehrhart, and Macey (2013) assert that, beyond the scholarly perspective on organisational culture, executives want to know what their corporate culture is, what they can change and how, and how they can create competitive advantage through organisational culture. Although the first step of the process appears to be the measurement of culture, there are in fact many other steps to consider. Below are some points to consider when gathering employee data in an organisation.

1. Reasons for using a measurement tool

When implementing a measurement process in an organisation it is important to clearly define the reasons for doing so. Is it for the benefit of the board, customers, or stakeholders? Is it for the benefit of the executive team to guide future planning? Is it an affirmation to HR that they are on the right track? Or is it to develop the best company in every sense of the word? It is important to set expectations about what will be done with the data: asking employees to invest time in responding to workplace surveys inevitably leads them to expect time invested back in explaining the results and the strategies for the future. Understanding from the outset the reasons for using the tool is important.

2. Deciding on a measuring tool

Not all survey tools are created equal. In order to have a robust process it is important that the tools used are fit for purpose, and are reliable and valid. Gaining an accurate picture of the current organisational culture means that decisions made about future initiatives are made on the basis of sound data. A sound measuring tool should pass a series of psychometric tests, provide evidence that individual data can be aggregated to the organisational level, and be linked to performance (Denison Culture, 2013).
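On the aggregation point: one commonly reported statistic is ICC(1), which estimates how much of the variance in individual responses lies between groups (teams, departments), and so whether averaging responses up to the organisational level is defensible. Here is a minimal sketch in Python, with simulated data and a hand-rolled function for illustration only:

    import numpy as np

    def icc1(groups):
        # ICC(1) from a one-way ANOVA: (MSB - MSW) / (MSB + (n - 1) * MSW)
        k = len(groups)
        n_total = sum(len(g) for g in groups)
        grand = np.concatenate(groups).mean()
        ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
        ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)
        n = n_total / k  # average group size (simplification for unequal groups)
        return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

    rng = np.random.default_rng(0)
    # Ten teams of eight: each team shares a climate effect plus individual noise
    teams = [rng.normal(loc=rng.normal(0, 0.7), scale=1.0, size=8) for _ in range(10)]
    print(f"ICC(1) = {icc1(teams):.2f}")  # values meaningfully above 0 support aggregation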

3. Leveraging the data to create competitive advantage

Once data has been obtained, an action plan around next steps needs to be developed. This can include creating concrete plans for the future based on an accurate understanding of the culture survey results, assessing current leadership and “people” needs, and understanding how engagement and the leveraging of human capital can be achieved.

4. Repeat

Measuring progress and obtaining feedback for continued improvement based on a clear set of business performance and organisational culture metrics is important for sustained culture improvement and change.

Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64, 361-388.

Denison Culture (2013). What are you really measuring with a culture survey? Denison Research Notes, 8(1).

The myth of significance testing

When I decided to leave work and go to university to study psychology, I did so because of a genuine fascination with the study of human behaviour, thought, and emotion. Like many, I was drawn to the discipline not by the allure of science but by the writings of Freud, Jung, Maslow, and Fromm. I believed at the time that the discipline was as much philosophy as science, and had the romantic notion of sitting in the quad talking theory with my classmates.

Unfortunately, from day one I was introduced not to the theory of psychology but to the maths of psychology. This, I was told, was the heart of the discipline, and supporting evidence came not from the strength of the theory but from the numbers. It did not matter that, as an 18-year-old male, I was supremely conscious of the power of the libido; unless it could be demonstrated on a Likert scale it did not exist. The gold standard of supporting evidence was significance testing.

I always struggled with the notion that the significance test (ST) was as significant as my professors would have me believe. However, it was not until I completed my postgraduate diploma in applied statistics that the folly of the ST truly came home to me. Here, for the first time, I was introduced to the concept of fishing for results and to techniques such as the Bonferroni correction (http://en.wikipedia.org/wiki/Bonferroni_correction). Moreover, I came to appreciate how paltry many of the findings in psychology were, and that establishing the robustness of such findings through a significance test was somewhat oxymoronic.
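To see what fishing for results looks like, and what the Bonferroni correction does about it, here is a small simulation in Python (illustrative numbers only): run twenty tests on pure noise at α = .05 and you should expect roughly one spuriously ‘significant’ correlation; Bonferroni simply tests each at α divided by the number of tests.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    m, alpha = 20, 0.05
    p_values = []
    for _ in range(m):
        x, y = rng.normal(size=(2, 100))      # two genuinely unrelated variables
        p_values.append(stats.pearsonr(x, y)[1])

    print(sum(p < alpha for p in p_values))      # ~1 false 'finding' expected by chance
    print(sum(p < alpha / m for p in p_values))  # Bonferroni-corrected: usually none survive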

In 2012 a seminal paper on this topic came out and I would encourage everyone who works in our field to be aware of it. This is indeed the myth for this month: the myth of significance testing:

Lambdin, C. (2012). Significance tests as sorcery: Science is empirical – significance tests are not. Theory & Psychology, 22(1), 67-90.

Abstract

Since the 1930s, many of our top methodologists have argued that significance tests are not conducive to science. Bakan (1966) believed that “everyone knows this” and that we slavishly lean on the crutch of significance testing because, if we didn’t, much of psychology would simply fall apart. If he was right, then significance testing is tantamount to psychology’s “dirty little secret.” This paper will revisit and summarize the arguments of those who have been trying to tell us—for more than 70 years—that p values are not empirical. If these arguments are sound, then the continuing popularity of significance tests in our peer-reviewed journals is at best embarrassing and at worst intellectually dishonest.

The paper is a relatively easy read and the arguments are simple to understand:

“… Lykken (1968), who argues that many correlations in psychology have effect sizes so small that it is questionable whether they constitute actual relationships above the “ambient correlation noise” that is always present in the real world. Blinkhorn and Johnson (1990) persuasively argue, for instance, that a shift away from “culling tabular asterisks” in psychology would likely cause the entire field of personality testing to disappear altogether. Looking at a table of results and highlighting which ones are significant is, after all, akin to throwing quarters in the air and noting which ones land heads.” (i.e. fishing for results)

The impact of this paper for so much of the discipline cannot be overstated. In an attempt to claim a level of credibility beyond its station, the psychological literature has bordered on the downright fraudulent in making sweeping claims from weak but significant results. The result is that our discipline risks becoming the laughing stock of future generations, who will see through the emperor’s new clothes currently parading as science.

“ … The most unfortunate consequence of psychology’s obsession with NHST is nothing less than the sad state of our entire body of literature. Our morbid overreliance on significance testing has left in its wake a body of literature so rife with contradictions that peer-reviewed “findings” can quite easily be culled to back almost any position, no matter how absurd or fantastic. Such positions, which all taken together are contradictory, typically yield embarrassingly little predictive power, and fail to gel into any sort of cohesive picture of reality, are nevertheless separately propped up by their own individual lists of supportive references. All this is foolhardily done while blissfully ignoring the fact that the tallying of supportive references—a practice which Taleb (2007) calls “naïve empiricism”—is not actually scientific. It is the quality of the evidence and the validity and soundness of the arguments that matters, not how many authors are in agreement. Science is not a democracy.

It would be difficult to overstress this point. Card sharps can stack decks so that arranged sequences of cards appear randomly shuffled. Researchers can stack data so that random numbers seem to be convincing patterns of evidence, and often end up doing just that wholly without intention. The bitter irony of it all is that our peer-reviewed journals, our hallmark of what counts as scientific writing, are partly to blame. They do, after all, help keep the tyranny of NHST alive, and “[t]he end result is that our literature is comprised mainly of uncorroborated, one-shot studies whose value is questionable for academics and practitioners alike” (Hubbard & Armstrong, 2006, p. 115).” (p. 82)

Is there a solution to this madness? Using the psychometric testing industry as a case in point, I believe the solution is multi-pronged. STs will continue to be part of our supporting literature, as they are a requirement of the marketplace and without them test publishers will not be viewed as credible. However, through education such as test-user training, this can be balanced so that the reality of STs is better understood. That includes understanding the true variance accounted for in tests of correlation, and therefore the true significance of the significance test! It will need to be equally matched with an understanding of the importance of theory building when testing a hypothesis, and of required adjustments such as the Bonferroni correction when conducting multiple tests on one set of data. Finally, in keeping with the theme of this series of blogs, the key is to treat the discipline as a craft, not a science. Building theory, applying results in unique and meaningful ways, and focusing on practical outcomes is more important, and more reflective of sound practice, than militant adherence to a significance test.
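On the ‘true variance accounted for’ point, the arithmetic is worth spelling out: squaring a validity coefficient gives the proportion of outcome variance it explains, so even a respectable-looking correlation can be practically modest. A two-line illustration in Python (the coefficients are hypothetical, not quoted benchmarks):

    for r in (0.10, 0.30, 0.50):
        print(f"r = {r:.2f} explains {r**2:.0%} of the variance in the outcome")
    # prints 1%, 9% and 25% respectively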

P.S. For those interested in understanding how to use statistics as a craft to formulate applied solutions I strongly recommend this book http://www.goodreads.com/book/show/226575.Statistics_As_Principled_Argument

P.P.S. This article just out: http://www.theguardian.com/science/head-quarters/2014/jan/24/the-changing-face-of-psychology. It seems there may be hope for the discipline yet.

Welcome Onboard. Tips for Staff Recruitment by Dr. Sarah Burke

I estimate there will be a lot of ‘first days’ for staff in January 2014, if the volume of assessment testing for recruitment that we did leading up to Christmas is anything to go by.  But consider these facts:

•       Half of all senior external hires fail within 18 months in a new position;

•       Almost 1/3 of all new hires employed for less than 6 months are already job searching;

•       According to the US Department of Labor, 25% of the working population undergoes a career transition each year.

This level of churn comes at a cost. Estimates of the direct and indirect costs of a failed executive-level hire run as high as $2.7 million (Watkins, 2003). And for each employee who moves on, there are many others in the extended network – peers, bosses, and direct reports – whose performance is also affected. One of the important ways that HR can positively impact this level of churn is through the strategic use of a process known as onboarding.

What is Onboarding?

Employee onboarding is the process of getting new hires positively adjusted to the role, social, and cultural aspects of their new jobs as quickly and smoothly as possible. It is a process through which new hires learn the knowledge, skills, and behaviours required to function effectively within an organisation. The bottom line: the sooner we bring people up to speed in their roles and the wider organisation, the sooner they will contribute.

Conventional wisdom is that a new hire will take approximately six months before they can meaningfully contribute (Watkins, 2003). I suspect that for most organisations a six-month lag before seeing a return on a new hire is untenable, particularly in the NZ economy, where 97.2% of businesses employ fewer than 20 staff (MBIE Fact Sheet, 2013). One of the important ways that HR can accelerate the adjustment process for new hires is by having an onboarding programme that is given a profile inside the business and supported by key staff.

While the specifics of an onboarding programme can vary from organisation to organisation, the below is offered as a guide for HR managers to proactively manage their onboarding efforts. Please review my presentation Welcome Onboard for more direction on supporting staff in the initial days, weeks, and months of their employment.

Top Tips for Supporting Staff Onboarding:

  • Make good use of the pre-start period to get the workspace organised, to schedule key meetings, and to share useful organisational and team information (i.e., team bios, blogs, key organisational reading).
  • Give your onboarding programme a brand/logo/tagline that communicates the experience and gives it importance/profile.
  • Customise your onboarding programme to reflect individual need; onboarding is not a one-size fits all.
  • Personalise the first day, including a formal announcement of entry.
  • Create an onboarding plan detailing key projects, firsts, objectives, and deliverables that are expected by your new hire.
  • Monitor progress over time using milestones; 30 – 60 – 90 – 120 days up to 1 year post-entry.
  • Identify 2-3 quick wins that your new hire can take responsibility for in order to build credibility and establish momentum (note: a quick win must be a meaningful win, not necessarily a big win).
  • Involve your new hire in projects that will require working cross-functionally.
  • Include organisational role models as mentors and coaches.  Remember a relatively small set of connections is far better than a lot of superficial acquaintances.
  • Be prepared to provide initial structure and direction to your new hire.  Remember, most people if thrown in the deep end to ‘sink or swim’ will sink.
  • Use technology to facilitate the onboarding process, including the flow of information.

2014: Exploring the Myths of I/O Psychology a Month at a Time

For those that may not be aware, the ‘science of science’ is in disarray. Everything is currently under the microscope: what constitutes good science, what is indeed scientific, and the objectivity and impartiality of science. This is affecting many areas of science and has even led to a Nobel Prize winner boycotting the most prestigious journals in his field.

Nobel winner declares boycott of top science journals: Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort scientific process.

This pervading problem in the field of science is perhaps best covered in the highly cited Economist article ‘How Science Goes Wrong’.

This questioning of science is perhaps no more apparent than in our discipline of I/O Psychology. Through various forums, and academic and non-academic press, I have been made increasingly aware of the barrage of critical thinking that is going on in our field. The result: much of what we have taken to be true as I/O psychs is nothing more than fable and wishful thinking.

Over this year I want to explore one myth each month with readers of our blog. They will be myths about the very heart of I/O Psychology that are often simply taken as a given.

The idea of attacking myths has long been central to OPRA’s philosophy, and there are many myth-busting blogs already in this forum.

To kick off this new series, I wish to start with the current state of play in the field: in particular, the fundamental problem that questionable research practices often arise when there is an incentive to get a certain outcome.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, OnlineFirst. A description of the study with comments at: http://bps-research-digest.blogspot.co.nz/2011/12/questionable-research-practices-are.html

The result, as is well known to all in the publish-or-perish game, is that your best chance of getting published rests not necessarily on the quality of the research but on whether the null hypothesis is rejected (i.e. you have a ‘eureka’ moment, however arbitrary).

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0010068

Gerber, A.S., & Malhotra, N. (2008) Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37, 1, 3-30.

The upshot is that the bulk of the research in our area is trivial in nature, is not replicated, and simply does not support the claims that are being made. This is especially the case in psychology, where the claims often range from the exaggerated to the absurd.

Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, EarlyView, 1-17. http://dx.doi.org/10.1016/j.jmp.2013.02.003

Scientific methods of investigation offer systematic ways to gather information about the world; and in the field of psychology application of such methods should lead to a better understanding of human behavior. Instead, recent reports in psychological science have used apparently scientific methods to report strong evidence for unbelievable claims such as precognition. To try to resolve the apparent conflict between unbelievable claims and the scientific method many researchers turn to empirical replication to reveal the truth. Such an approach relies on the belief that true phenomena can be successfully demonstrated in well-designed experiments, and the ability to reliably reproduce an experimental outcome is widely considered the gold standard of scientific investigations. Unfortunately, this view is incorrect; and misunderstandings about replication contribute to the conflicts in psychological science. … Overall, the methods are extremely conservative about reporting inconsistency when experiments are run properly and reported fully.

The paucity of quality scientific research is leading to more and more calls for fundamental change in what qualifies as good science and research in our field.

Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R.,Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343, 6166, 30-31.

“There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. … Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results”.

Funder, D.C., Levine, J.M., Mackie, D.M., Morf, C.C., Vazire, S., & West, S.G. (2013). Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice. Personality and Social Psychology Review, EarlyView, 1-10.

In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”

Cumming, G. (2013). The New Statistics: Why and how. Psychological Science, EarlyView, 1-23.

We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include pre-specification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
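For a sense of what estimation-based reporting looks like in practice, here is a minimal sketch in Python (simulated two-group data; the interval uses a standard large-sample approximation to the sampling variance of d): the headline is an effect size with a confidence interval rather than a bare p value.

    import numpy as np

    rng = np.random.default_rng(1)
    treatment = rng.normal(0.4, 1, 50)  # simulated scores for two independent groups
    control = rng.normal(0.0, 1, 50)

    # Cohen's d with a pooled standard deviation
    n1, n2 = len(treatment), len(control)
    sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1)
                  + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    d = (treatment.mean() - control.mean()) / sp

    # Approximate 95% CI for d (large-sample variance approximation)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")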

But it is not all doom and gloom. There are simple steps the scientist/practitioner can take to make sure that sense and sensibility are more pervasive in the field. In this regard I offer three simple key principles:

  1. Try your best to keep up to date with the literature: OPRA will do their best to publish relevant pieces that come to their attention via this blog!
  2. Don’t make exaggerated claims: Remember that no one has ‘magic beans’ as ‘magic beans’ do not exist. Dealing with human problems invariably involves complexity and levels of prediction that are far from perfect.
  3. Accept our discipline is a craft not a science: I/O Psychology involves good theory, good science, and sensible qualitative and quantitative evidence but is often applied in a unique manner, as would a craftsman (or craftsperson – if such a word exists). Accepting this fact will liberate the I/O Psychologist to use science, statistics and logic to produce the solutions that the industry, and more specifically, their clients require.

Keep an eye on our blog this coming year for exploring myths and other relevant information or products related to our field. Let us know if something is of interest to you and we can blog about it or send you more information directly.

Emotionally Intelligent Leadership

Emotionally intelligent leadership:

Game changing for business, life changing for people.
By Ben Palmer

If you are a leader in business looking to improve your organisation’s performance, you might want to consider improving your capacity to identify, understand and manage emotion; that is, your emotional intelligence. A large number of research studies over the last decade have shown there is a direct link between the way people feel and the way people perform in the workplace. For example, research conducted by the Society for Knowledge Economics in the Australian labour market found that people in high performing workplaces typically feel more proud, valued and optimistic than those in low performing workplaces. Conversely, people in low performing Australian workplaces typically feel more inadequate, anxious and fearful. Leadership is fundamentally about facilitating performance, and research on emotional intelligence has shown that a leader’s emotional intelligence is key to their capacity to facilitate emotions in employees that drive high engagement and performance.

To illustrate this point, Genos International (a human resource consulting company, part-owned by Swinburne University, that specialises in developing leaders’ emotional intelligence; www.genosinternational.com) teamed up with Sanofi (the world’s fourth largest pharmaceutical company; www.sanofi.com) to investigate whether developing sales leaders’ emotional intelligence would improve the sales revenue generated by their sales representatives. To control for market influences, Sanofi randomly placed 70 sales representatives (matched in terms of tenure and current performance) into two groups:

1. The control group: this group and their managers received no emotional intelligence development training; and
2. The development group: the managers of this group participated in Genos International’s award-winning emotional intelligence development program.

The Genos development program involves an emotional intelligence assessment for each person before and after the program (to create self-awareness and measure behaviour change) together with a number of short, focused development sessions over a six month period on:

  1. How to improve your capacity to identify emotions, and 
  2. How to improve your capacity to effectively regulate and manage emotions

Development in these areas makes leaders more self-aware, more empathetic, more genuine and trustworthy, more personally resilient, and better at influencing others’ emotions. Ultimately it helps leaders make their employees feel more valued, cared for, respected, informed, consulted and understood. On average, the emotional intelligence of the sales managers improved by 18 percent. As can be seen in the graph below, this helped facilitate an average 13% improvement in the Development Group’s sales performance in comparison to the Control Group’s: a 7.1% improvement in the first month following the program, a 15.4% improvement the month after, and a 13.4% improvement the month after that (as measured by retail sales revenue by territory). The revenue of the Control Group stayed flat, in the same revenue band, throughout this period.

[Graph: monthly sales revenue, Development Group vs Control Group, in the three months following the program]

The improvement in revenue generated by the Development Group returned approximately $6 for every $1 Sanofi invested in the program. The findings of the study have been published in a peer-reviewed journal and can be downloaded from the Genos website (http://static.genosinternational.com/pdf/Jennings_Palmer_2007.pdf).

Feedback from the participants showed the program not only helped improve the sales performance of reps and their managers; it also helped them improve their relationships with each other. At the time, employees were navigating a difficult period within the business as the bumps from a merger were ironed out and two different company cultures integrated. As one participant put it: “I have seen improvements in behaviour that have increased the bottom line with sales reps. From a management perspective, increased skills have led to more buy-in, greater acceptance, improved spirit, and better communication. However, the greatest benefit I received from the program was an improved relationship with my 14-year-old daughter”.

This participant feedback highlights the added benefits of improving your emotional intelligence. Your capacity to identify, understand and manage emotions contributes to your life satisfaction, stress management and the quality of your relationships at home and at work. That’s why developing your emotional intelligence can be game changing for your business, and life changing for you and your people.

To improve your skill at identifying and understanding emotions you can:

  1. Stop and reflect on the way you feel in the moment. Take the time to label the feelings you are experiencing and reflect on the way they might be influencing your thinking, behaviour and performance.
  2. Become more aware of the other characteristics that interplay with, and indeed cause, your emotions, such as your personality, values and beliefs. By understanding these you can become better at identifying different emotional triggers and the way you (and others) typically respond to them. This awareness is key to adjusting the way you feel and respond to events.

To improve your skill at managing emotions you can:

  1.  Eat better, sleep more, drink less and exercise (if you aren’t already).
  2. Adopt a thinking-oriented emotional management strategy, like Edward de Bono’s Six Thinking Hats, and use it when strong emotions arise.
  3. Adopt a relationship strategy: find someone who is great at listening and helping you gain perspective on events.
  4. Search the app store; there are some great emotional management apps out there today. For example, Stress Doctor, a mobile app that helps you reduce your stress level in just five minutes via a biofeedback technique that syncs your breathing rate with your autonomic nervous system (ANS).

If you would like more information on Enduring Impact Leadership Training please contact auckland@opragroup.com.

Hawthorne Effect (Being Watched May Not Affect Behaviour At All)

For those that don’t know, The Economist is a fantastic newspaper. Not limited to economic news, it provides a synopsis of current research and findings across many disciplines. There is often a section on psychology, and as a tribute to The Economist, in my next few short blogs I will cover some key research from recent editions.

Most I/O psychologists are familiar with the Hawthorne effect, which stems from a 1924 study conducted by America’s National Research Council to examine how shop-floor lighting affected workers’ productivity. The key finding was that, rather than lighting having an effect, the act of being experimented upon changed subjects’ behaviour. The data from the illumination experiments were never rigorously analysed and were believed lost; recently, however, they were rediscovered and reanalysed. Contrary to the description in the literature, there was no systematic evidence that levels of productivity in the factory rose, whatever changes were implemented. Rather, the changes in behaviour were an artefact of the days of the week: output was always highest on a Monday, which was also the day when changes were implemented. Much of psychology is folklore, and most data sets are simply never revisited. How many findings in psychology that are taken as ‘golden rules’ are simply the result of a historical misinterpretation of data?