Myth 3: The Myth of Measurement

I would like to begin by apologising for not getting a myth out last month; I was working in the Philippines. Having just arrived back in Singapore, I will make sure to get two myths out this month.

The first myth for April is one that some in the industry may see as bordering on sacrilege. The idea I wish to challenge is that I/O psychology can truly be classed as a strong measurement science. To be clear, I'm not saying that I/O is not a science, or that it does not attempt to measure aspects of human behaviour related to work. Rather, I'm suggesting that what it does is not measurement as the word is commonly used. To talk of measurement in our field as if it matched the common use of the term is to credit the discipline with more predictability and rigour than it deserves.

The classic paper that challenged my thinking about measurement was ‘Is Psychometrics Pathological Science?’ by Joel Michell.

Abstract

Pathology of science occurs when the normal processes of scientific investigation break down and a hypothesis is accepted as true within the mainstream of a discipline without a serious attempt being made to test it and without any recognition that this is happening. It is argued that this has happened in psychometrics: The hypothesis, upon which it is premised, that psychological attributes are quantitative, is accepted within the mainstream, and not only do psychometricians fail to acknowledge this, but they hardly recognize the existence of this hypothesis at all.

With regard to measurement, Michell presents very clear and concise arguments about what constitutes measurable phenomena and why psychological attributes fail this test. While the underlying axioms are in parts relatively technical, the upshot is that the mere fact that a variable can be ordered does not in itself constitute measurement; ‘measurement’ requires further hurdles to be cleared. A broad example is additivity and the many associated operations that apply when variables (or combinations of them) are added to produce a third variable, or provide support for an alternative equation. Psychological attributes fail on this and many other properties of measurement. As such, the basis for claims of measurement is, in my opinion, limited (or at least comes with cautions and disclaimers), and therefore the claim to belong to the ‘measurement-based-science’ school is not substantiated.
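
To give a flavour of what those further hurdles involve, the classical axioms of quantity demand an additive structure, not just an ordering. The sketch below is a textbook-style simplification, not Michell's exact formulation:

```latex
% Simplified axioms of quantity (a textbook-style sketch, not Michell's exact list):
% for an attribute with magnitudes a, b, c to be quantitative, addition must satisfy
\begin{align*}
  a + b &= b + a                        && \text{(commutativity)} \\
  (a + b) + c &= a + (b + c)            && \text{(associativity)} \\
  a + b &> a                            && \text{(positivity)} \\
  a > b &\iff \exists\, c :\; a = b + c && \text{(ordering derives from additivity)}
\end{align*}
```

Rank-ordered test scores satisfy the ordering relation, but there is no empirical concatenation operation for, say, conscientiousness that would let us test the additive conditions; this is precisely the untested hypothesis Michell highlights.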

The limitations of the discipline as a measurement science are so fundamental that they should challenge the discipline far more than they currently do. The outcome should be both a downplaying of measurement practices and a greater focus on areas such as theory building, with theories then tested using a range of alternative methodologies. Calls of this kind have been made over the past few years, and the disquiet is growing:

Klein, S.B. (2014). What can recent replication failures tell us about the theoretical commitments of psychology? Theory and Psychology, 1-14.

Abstract

I suggest that the recent, highly visible, and often heated debate over failures to replicate results in the social sciences reveals more than the need for greater attention to the pragmatics and value of empirical falsification. It is also a symptom of a serious issue—the under-developed state of theory in many areas of psychology.

Krause, M.S. (2012). Measurement validity is fundamentally a matter of definition, not correlation. Review of General Psychology, 16, 4, 391-400.

Abstract

… However, scientific theories can only be known to be true insofar as they have already been demonstrated to be true by valid measurements. Therefore, only the nature of a measure that produces the measurements for representing a dimension can justify claims that these measurements are valid for that dimension, and this is ultimately exclusively a matter of the normative definition of that dimension in the science that involves that dimension. Thus, contrary to the presently prevailing theory of construct validity, a measure’s measurements themselves logically cannot at all indicate their own validity or invalidity by how they relate to other measures’ measurements unless these latter are already known to be valid and the theories represented by all these several measures’ measurements are already known to be true. … This makes it essential for each basic science to achieve normative conceptual analyses and definitions for each of the dimensions in terms of which it describes and causally explains its phenomena.

Krause, M.S. (2013). The data analytic implications of human psychology’s dimensions being ordinally scaled. Review of General Psychology, 17, 3, 318-325.

Abstract

Scientific findings involve description, and description requires measurements on the dimensions descriptive of the phenomena described. …Many of the dimensions of human psychological phenomena, including those of psychotherapy, are naturally gradated only ordinally. So descriptions of these phenomena locate them in merely ordinal hyperspaces, which impose severe constraints on data analysis for inducing or testing explanatory theory involving them. Therefore, it is important to be clear about what these constraints are and so what properly can be concluded on the basis of ordinal-scale multivariate data, which also provides a test for methods that are proposed to transform ordinal-scale data into ratio-scale data (e.g., classical test theory, item response theory, additive conjoint measurement), because such transformations must not violate these constraints and so distort descriptions of studied phenomena.

What these papers identify is that:

  1. We must start with good theory building, and the theory must be deep and wide enough to be tested and falsified.
  2. Construct validity is indeed important, but correlations between tests are not enough; we need agreement on the meaning of attributes (such as the Big Five).
  3. Treating comparative data (such as scores on a normal curve) as if it were rigorous measurement is at best misleading and at worst fraud (see the sketch below).
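
To make point 3 concrete, here is a small toy demonstration (my own illustration, not taken from the papers): any statistic that changes under an order-preserving rescaling of ordinal scores is reading more into the numbers than the measurement level supports.

```python
# Toy demonstration: Pearson's r shifts under a monotone (order-preserving)
# rescaling of ordinal scores, while rank-based Spearman's rho is unchanged.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
x = rng.integers(1, 6, size=200)                      # 1-5 Likert-style ratings
y = np.clip(x + rng.integers(-1, 2, size=200), 1, 5)  # related ratings plus noise

x_stretched = x.astype(float) ** 3                    # same order, different spacing

print(pearsonr(x, y)[0], pearsonr(x_stretched, y)[0])    # r changes
print(spearmanr(x, y)[0], spearmanr(x_stretched, y)[0])  # rho does not
```

If our conclusions shift when we do nothing more than relabel the points on an ordinal scale, those conclusions were never about the attribute itself.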

So where does this leave the discipline? Again, as is the theme threading through all these myths, we must embrace the true scientist-practitioner model and recognise that our discipline is a craft. To rely too heavily on quantitative techniques is actually extremely limiting, and we need alternative ways of conceptualising ‘measurement’. In this regard I’m a big fan of the evaluation literature (e.g. Reflecting on the past and future of evaluation: Michael Scriven on the differences between evaluation and social science research) as a source of alternative paradigms for solving I/O problems.

We must at the same time embrace the call for better theory building. If I/O psychology, and psychology in general, is going to make valuable contributions to the development of human thought, it will start with good, sound theory. Merely putting numbers to things does not constitute theory building.

When using numbers we must also look for alternative statistical techniques to support our work. An example is Grice’s (2011) Observation Oriented Modelling: Analysis of Cause in the Behavioral Sciences. I looked at this work when thinking about how we assess reliability (and then statistically demonstrate it) and think it has huge implications.
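
For readers who want a feel for the flavour of that approach: observation-oriented modelling replaces aggregate significance tests with a count of how many individual observations match the predicted pattern, assessed against chance by randomisation. The sketch below is my own loose approximation of that idea, not Grice's software or his exact algorithm.

```python
# Loose sketch of an observation-oriented analysis (my approximation, not
# Grice's OOM software): count the persons whose observations fit the predicted
# pattern (post > pre), then see how often random relabelling does as well.
import numpy as np

rng = np.random.default_rng(0)
pre  = np.array([12, 15, 11, 14, 13, 16, 10, 15])  # hypothetical pre-scores
post = np.array([14, 15, 13, 17, 15, 16, 12, 18])  # hypothetical post-scores

pcc = np.mean(post > pre)  # proportion of persons classified correctly

# Randomisation test: shuffle the pre/post labels within each person.
reps, hits = 10_000, 0
for _ in range(reps):
    flip = rng.random(pre.size) < 0.5
    a = np.where(flip, post, pre)
    b = np.where(flip, pre, post)
    hits += np.mean(b > a) >= pcc

print(f"PCC = {pcc:.2f}, chance value = {hits / reps:.3f}")
```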

Finally, when using numbers to substantiate an argument, support theory, or find evidence for an intervention, we need to be clear on what they are really saying. Statistics can mislead, and at worst lie, so we must be clear about what we are and are not claiming, as well as the limitations of any conclusions we draw from a reliance on data. To present numbers as if they had measurement robustness is simply wrong.

In the next blog I want to discuss the myth of impartiality and why these myths continue to pervade the discipline.

Acknowledgement: I would like to acknowledge Professor Paul Barrett for his thought leadership in this space and opening my eyes to the depth of measurement issues we face. Paul brought to my attention the articles cited and I’m deeply grateful for his impact on my thinking and continued professional growth.


Culture Surveys and Your Organisation

Measuring culture and obtaining data can provide valuable information for an organisation of any size. The real value, however, lies in how this data is positioned, analysed, and used. Schneider, Ehrhart, and Macey (2013) assert that, looking beyond organisational culture as a scholarly topic, executives want to know what their corporate culture is, what they can change and how, and how they can create competitive advantage through organisational culture. Although the first step of the process appears to be the measurement of culture, there are in fact many other steps to consider. Below are some points to consider when gathering employee data in an organisation.

1. Reasons for using a measurement tool

When implementing a measurement process in an organisation it is important to clearly define the reasons for doing so. Is it for the benefit of the board, customers, or stakeholders? Is it for the benefit of the executive team to guide future planning? Is it an affirmation to HR that they are on the right track? Or is it to develop the best company in every sense of the word? It is also important to set expectations about what will be done with the data: asking employees to invest time in responding to workplace surveys will naturally lead them to expect time invested back in explaining the results and the strategies for the future. Understanding from the outset the reasons for using the tool is essential.

2. Deciding on a measuring tool

Not all survey tools are created equal. For a robust process it is important that the tools used are fit for purpose, reliable, and valid. Gaining an accurate picture of the current organisational culture means that decisions about future initiatives are made on the basis of sound data. A sound measuring tool should pass a series of psychometric tests, provide evidence that individual data can be aggregated to the organisational level (see the sketch below), and be linked to performance (Denison Culture, 2013).
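
On the aggregation point, one conventional piece of evidence (a general psychometric convention, not something prescribed by the Denison note) is the intraclass correlation ICC(1), computed from a one-way ANOVA of individual responses grouped by unit. A minimal sketch with hypothetical data:

```python
# Minimal ICC(1) sketch: do individual survey responses share enough unit-level
# variance to justify aggregation? (Hypothetical data; standard ANOVA formula.)
import numpy as np

units = {  # hypothetical culture-survey scores by business unit
    "sales":   [3.2, 3.5, 3.1, 3.6],
    "ops":     [4.1, 4.3, 4.0, 4.4],
    "finance": [2.9, 3.0, 3.2, 2.8],
    "hr":      [3.8, 3.9, 4.0, 3.7],
}
groups = [np.array(v) for v in units.values()]
n_total = sum(len(g) for g in groups)
k = n_total / len(groups)                  # average group size
grand = np.concatenate(groups).mean()

ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - len(groups))

icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")  # a value near zero argues against aggregating
```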

3. Leveraging the data to create competitive advantage

Once data has been obtained, an action plan for next steps needs to be developed. This can include creating concrete plans for the future based on an accurate understanding of the culture survey results; assessing current leadership and “people” needs; and understanding how human capital can be engaged and leveraged.

4. Repeat

Measuring progress and obtaining feedback for continued improvement, against a clear set of business performance and organisational culture metrics, is important for sustained culture improvement and change.

Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64, 361-388.

Denison Culture (2013). What are you really measuring with a culture survey? Denison research notes, 8, 1.


Why do people leave, and what can we do about it?

We all know that high turnover can be a major issue: it is both expensive and time consuming for an organisation to be continually replacing staff and training new employees. In 2013 the average employee turnover rate in New Zealand was estimated to sit between 11% and 20%, while in 2012 it was estimated at 17.7%. So, how can organisations reduce turnover and the associated expense? This is by no means a simple question to answer, and it generally leads to the bigger question of why people leave organisations.

The number of variables behind why an individual chooses to leave their job and/or organisation is huge. They may have been unhappy with their pay, there may have been an issue with their manager or with the job itself, they may want better training opportunities, and the list goes on. Given the variety of factors involved in turnover, it makes sense to ask staff why they are leaving, and then use that information to implement changes directly targeted at why people move on. This is where the exit interview comes in.

There are some great examples of organisations that were spending millions of dollars a year on turnover, then implemented a strategic approach to using exit interview data, and managed to significantly reduce turnover and the associated costs. On the other hand, there are also many examples of organisations conducting exit interviews and seeing no benefit. So how can your organisation reduce turnover through exit interviewing?

Not all exit interviews will give you useful data, and you will only get value out of the process if you know how to use that data. Firstly, exit interviews need to be well designed. Questions should cover the most common reasons people leave and provide clear, actionable data. Exit interviews should not be too long, and they should provide the opportunity for free comments as well as quantitative ratings.

They should also be easy to complete and analyse. Online exit interviews have been found to achieve significantly higher participation rates than paper-and-pencil versions, and they facilitate effective and efficient use of the data. At the click of a button, exiting employees can be sent an online exit interview, which they can fill in at their own convenience or in a phone call with HR or an outside consultant. The data can then be reported at an individual, group, and organisational level at any frequency, providing useful information and trends about why people leave the organisation. Such an approach is also cost and time effective, while giving you clear direction on how to keep people for longer.

Why wouldn’t you want to take a strategic approach to exit interviews?

For information about OPRA’s exit interview offering please see www.exitinterviewer.com


The myth of significance testing

When I decided to leave work and go to university to study psychology, I did so because of a genuine fascination with the study of human behaviour, thought, and emotion. Like many, I was drawn to the discipline not by the allure of science but by the writings of Freud, Jung, Maslow, and Fromm. I believed at the time that the discipline was as much philosophy as science, and I had the romantic notion of sitting in the quad talking theory with my classmates.

Unfortunately, from day one I was introduced not to the theory of psychology but to the maths of psychology. This, I was told, was the heart of the discipline, and supporting evidence came not from the strength of the theory but from the numbers. It did not matter that, as an 18-year-old male, I was supremely conscious of the power of libidos: unless it could be demonstrated on a Likert scale, it did not exist. The gold standard of supporting evidence was significance testing.

I always struggled with the notion that the significance test (ST) was as significant as my professors would have me believe. However, it was not until I completed my postgraduate diploma in applied statistics that the folly of the ST truly came home to me. Here, for the first time, I was introduced to the concept of fishing for results and to techniques such as the Bonferroni correction (http://en.wikipedia.org/wiki/Bonferroni_correction). Moreover, I came to understand how paltry the findings in psychology were, and how oxymoronic it was to establish the ‘robustness’ of such findings through a significance test.
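
To see why fishing matters, consider a toy simulation (my own illustration): run twenty t-tests on pure noise and, at α = .05, roughly one will come up ‘significant’ by chance alone; the Bonferroni correction simply divides α by the number of tests.

```python
# Toy illustration of fishing for results: 20 t-tests on pure noise yield,
# on average, one "significant" result at alpha = .05 per batch.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha, n_tests = 0.05, 20

pvals = np.array([
    ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(n_tests)
])

print("uncorrected 'significant' results:", np.sum(pvals < alpha))
print("after Bonferroni correction:      ", np.sum(pvals < alpha / n_tests))
```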

In 2012 a seminal paper on this topic came out and I would encourage everyone who works in our field to be aware of it. This is indeed the myth for this month: the myth of significance testing:

Lambdin, C. (2012). Significance tests as sorcery: Science is empirical – significance tests are not. Theory and Psychology, 22, 1, 67-90.

Abstract

Since the 1930s, many of our top methodologists have argued that significance tests are not conducive to science. Bakan (1966) believed that “everyone knows this” and that we slavishly lean on the crutch of significance testing because, if we didn’t, much of psychology would simply fall apart. If he was right, then significance testing is tantamount to psychology’s “dirty little secret.” This paper will revisit and summarize the arguments of those who have been trying to tell us— for more than 70 years—that p values are not empirical. If these arguments are sound, then the continuing popularity of significance tests in our peer-reviewed journals is at best embarrassing and at worst intellectually dishonest.

The paper is a relatively easy read and the arguments are simple to understand:

“… Lykken (1968), who argues that many correlations in psychology have effect sizes so small that it is questionable whether they constitute actual relationships above the “ambient correlation noise” that is always present in the real world. Blinkhorn and Johnson (1990) persuasively argue, for instance, that a shift away from “culling tabular asterisks” in psychology would likely cause the entire field of personality testing to disappear altogether. Looking at a table of results and highlighting which ones are significant is, after all, akin to throwing quarters in the air and noting which ones land heads.” (à la fishing for results)

The impact of this paper on so much of the discipline cannot be overstated. In an attempt to claim a level of credibility beyond its station, the psychological literature has bordered on the downright fraudulent in making sweeping claims from weak but ‘significant’ results. The result is that our discipline risks becoming the laughing stock of future generations, who will see through the emperor’s new clothes currently parading as science.

“ … The most unfortunate consequence of psychology’s obsession with NHST is nothing less than the sad state of our entire body of literature. Our morbid overreliance on significance testing has left in its wake a body of literature so rife with contradictions that peer-reviewed “findings” can quite easily be culled to back almost any position, no matter how absurd or fantastic. Such positions, which all taken together are contradictory, typically yield embarrassingly little predictive power, and fail to gel into any sort of cohesive picture of reality, are nevertheless separately propped up by their own individual lists of supportive references. All this is foolhardily done while blissfully ignoring the fact that the tallying of supportive references—a practice which Taleb (2007) calls “naïve empiricism”—is not actually scientific. It is the quality of the evidence and the validity and soundness of the arguments that matters, not how many authors are in agreement. Science is not a democracy.

It would be difficult to overstress this point. Card sharps can stack decks so that arranged sequences of cards appear randomly shuffled. Researchers can stack data so that random numbers seem to be convincing patterns of evidence, and often end up doing just that wholly without intention. The bitter irony of it all is that our peer-reviewed journals, our hallmark of what counts as scientific writing, are partly to blame. They do, after all, help keep the tyranny of NHST alive, and “[t]he end result is that our literature is comprised mainly of uncorroborated, one-shot studies whose value is questionable for academics and practitioners alike” (Hubbard & Armstrong, 2006, p. 115).” (p. 82)

Is there a solution to this madness? Using the psychometric testing industry as a case in point, I believe the solution is multi-pronged. STs will continue to be part of our supporting literature: they are a requirement of the marketplace, and without them test publishers will not be viewed as credible. However, through education such as test-user training, this can be balanced so that the reality of STs is better understood. This includes understanding the true variance accounted for in tests of correlation, and therefore the true significance of the significance test (see the sketch below). It needs to be matched with an understanding of the importance of theory building when testing a hypothesis, and of required adjustments such as the Bonferroni correction when conducting multiple tests on one set of data. Finally, in keeping with the theme of this series of blogs, the key is to treat the discipline as a craft, not a science. Building theory, applying results in unique and meaningful ways, and focusing on practical outcomes is more important, and more reflective of sound practice, than militant adherence to a significance test.
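
The ‘true variance accounted for’ point deserves a number. Squaring a correlation gives the proportion of shared variance, a calculation every test user can do in one line (illustrative values, mine rather than the literature's):

```python
# Variance accounted for by a correlation: r squared.
for r in (0.1, 0.3, 0.5):
    print(f"r = {r:.1f}  ->  {r * r:.0%} of variance explained")
# Even r = 0.5, a large effect by psychology's standards, leaves 75% unexplained.
```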

P.S. For those interested in understanding how to use statistics as a craft to formulate applied solutions, I strongly recommend this book: http://www.goodreads.com/book/show/226575.Statistics_As_Principled_Argument

P.P.S. This article just out: http://www.theguardian.com/science/head-quarters/2014/jan/24/the-changing-face-of-psychology . It seems there may be hope for the discipline yet.


Welcome Onboard. Tips for Staff Recruitment by Dr. Sarah Burke

I estimate there will be a lot of ‘first days’ for staff in January 2014, if the volume of assessment testing for recruitment that we did leading up to Christmas is anything to go by.  But consider these facts:

  • Half of all senior external hires fail within 18 months in a new position;
  • Almost one third of all new hires employed for less than six months are already job searching;
  • According to the US Department of Labor, 25% of the working population undergoes a career transition each year.

This level of churn comes at a cost. Estimates of the direct and indirect costs of a failed executive-level hire run as high as $2.7 million (Watkins, 2003). And for each employee who moves on, there are many others in the extended network – peers, bosses, and direct reports – whose performance is also influenced. One of the important ways HR can address this level of churn is through the strategic use of a process known as onboarding.

What is Onboarding?

Employee onboarding is the process of getting new hires positively adjusted to the role, social, and cultural aspects of their new jobs as quickly and smoothly as possible. It is a process through which new hires learn the knowledge, skills, and behaviours required to function effectively within an organisation. The bottom line is that the sooner we bring people up to speed in their role and the wider organisation, the sooner they will contribute.

Conventional wisdom is that a new hire will take approximately six months before they can meaningfully contribute (Watkins, 2003). I suspect that for most organisations a six-month lag before seeing a return on a new hire is untenable, particularly in the NZ economy, where 97.2% of enterprises employ fewer than 20 staff (MBIE Fact Sheet, 2013). One of the important ways HR can accelerate the adjustment process for new hires is an onboarding programme that is given a profile inside the business and supported by key staff.

While the specifics of an onboarding programme can vary from organisation to organisation, the points below offer a guide for HR managers to proactively manage their onboarding efforts. Please review my presentation Welcome Onboard for more direction on supporting staff in the initial days, weeks, and months of their employment.

Top Tips for Supporting Staff Onboarding:

  • Make good use of the pre-start period to get the workspace organised, to schedule key meetings, and to share useful organisational and team information (i.e., team bios, blogs, key organisational reading).
  • Give your onboarding programme a brand/logo/tagline that communicates the experience and gives it importance/profile.
  • Customise your onboarding programme to reflect individual need; onboarding is not one-size-fits-all.
  • Personalise the first day, including a formal announcement of entry.
  • Create an onboarding plan detailing key projects, firsts, objectives, and deliverables that are expected of your new hire.
  • Monitor progress over time using milestones; 30 – 60 – 90 – 120 days up to 1 year post-entry.
  • Identify 2-3 quick wins that your new hire can take responsibility for in order to build credibility and establish momentum (note: a quick win must be a meaningful win, not necessarily a big win).
  • Involve your new hire in projects that will require working cross-functionally.
  • Include organisational role models as mentors and coaches. Remember, a relatively small set of connections is far better than a lot of superficial acquaintances.
  • Be prepared to provide initial structure and direction to your new hire. Remember, most people, if thrown in the deep end to ‘sink or swim’, will sink.
  • Use technology to facilitate the onboarding process, including the flow of information.

2014: Exploring the Myths of I/O Psychology a Month at a Time

For those who may not be aware, the ‘Science of Science’ is in disarray. Everything is currently under the microscope: what constitutes good science, what is indeed scientific, and the objectivity and impartiality of science itself. This is affecting many areas of science and has even led to a Nobel Prize winner boycotting the most prestigious journals in his field.

Nobel winner declares boycott of top science journals: Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort scientific process.

This pervading problem in the field of science is perhaps best covered in the widely cited Economist article ‘How Science Goes Wrong’.

This questioning of science is perhaps nowhere more apparent than in our own discipline of I/O Psychology. Through various forums, and the academic and non-academic press, I have become increasingly aware of the barrage of critical thinking going on in our field. The result: much of what we have taken to be true as I/O psychologists is nothing more than fable and wishful thinking.

Over this year I want to explore one myth each month with readers of our blog. They will be myths about the very heart of I/O Psychology that are often simply taken as a given.

The idea of attacking myths has long been central to OPRA’s philosophy, and there are many myth-busting posts already in this forum.

To kick off this new series I wish to start with the current state of play in the field: in particular, the fundamental problem that questionable research practices often arise when there is an incentive to get a certain outcome.

John, L.K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science OnlineFirst. A description of the study with comments at: http://bps-research-digest.blogspot.co.nz/2011/12/questionable-research-practices-are.html

This creates a fact well known to everyone in the publish-or-perish game: your best chance of getting published lies not necessarily in the quality of the research but in the null hypothesis being rejected (i.e. you have a ‘eureka’ moment, however arbitrary).

Fanelli, D. (2010) “Positive” results increase down the hierarchy of the sciences. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0010068

Gerber, A.S., & Malhotra, N. (2008) Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37, 1, 3-30.

The upshot is that the bulk of the research in our area is trivial in nature, is not replicated, and simply does not support the claims being made. This is especially the case in psychology, where the claims often range from the exaggerated to the absurd.

Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, EarlyView, 1-17. http://dx.doi.org/10.1016/j.jmp.2013.02.003

Scientific methods of investigation offer systematic ways to gather information about the world; and in the field of psychology application of such methods should lead to a better understanding of human behavior. Instead, recent reports in psychological science have used apparently scientific methods to report strong evidence for unbelievable claims such as precognition. To try to resolve the apparent conflict between unbelievable claims and the scientific method many researchers turn to empirical replication to reveal the truth. Such an approach relies on the belief that true phenomena can be successfully demonstrated in well-designed experiments, and the ability to reliably reproduce an experimental outcome is widely considered the gold standard of scientific investigations. Unfortunately, this view is incorrect; and misunderstandings about replication contribute to the conflicts in psychological science. … Overall, the methods are extremely conservative about reporting inconsistency when experiments are run properly and reported fully.

The paucity of quality scientific research is leading to more and more calls for fundamental change in what qualifies as good science and research in our field.

Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R.,Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343, 6166, 30-31.

“There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. … Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results”.

Funder, D.C., Levine, J.M., Mackie, D.M., Morf, C.C., Vazire, S., & West, S.G. (2013). Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice. Personality and Social Psychology Review, EarlyView, 1-10.

In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”

Cumming, G. (2013). The New Statistics: Why and how. Psychological Science, EarlyView, 1-23.

We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include pre-specification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
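
In the spirit of Cumming's recommendations, here is a minimal sketch (my own, with made-up data) of reporting an effect size and a 95% confidence interval rather than a bare p-value:

```python
# Minimal "new statistics" sketch: report Cohen's d and a 95% CI for the
# mean difference instead of a bare p-value (made-up data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.4, 1.0, size=50)  # e.g. intervention group
b = rng.normal(0.0, 1.0, size=50)  # e.g. comparison group

diff = a.mean() - b.mean()
pooled_sd = np.sqrt(((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1))
                    / (a.size + b.size - 2))
d = diff / pooled_sd                               # Cohen's d

se = pooled_sd * np.sqrt(1 / a.size + 1 / b.size)  # SE of the mean difference
t_crit = stats.t.ppf(0.975, df=a.size + b.size - 2)
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"Cohen's d = {d:.2f}, 95% CI for the difference = [{lo:.2f}, {hi:.2f}]")
```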

But it is not all doom and gloom. There are simple steps the scientist-practitioner can take to make sure that sense and sensibility become more pervasive in the field. In this regard I offer three simple principles:

  1. Try your best to keep up to date with the literature: OPRA will do their best to publish relevant pieces that come to their attention via this blog!
  2. Don’t make exaggerated claims: Remember that no one has ‘magic beans’ as ‘magic beans’ do not exist. Dealing with human problems invariably involves complexity and levels of prediction that are far from perfect.
  3. Accept our discipline is a craft not a science: I/O Psychology involves good theory, good science, and sensible qualitative and quantitative evidence but is often applied in a unique manner, as would a craftsman (or craftsperson – if such a word exists). Accepting this fact will liberate the I/O Psychologist to use science, statistics and logic to produce the solutions that the industry, and more specifically, their clients require.

Keep an eye on our blog this coming year for exploring myths and other relevant information or products related to our field. Let us know if something is of interest to you and we can blog about it or send you more information directly.


Emotionally Intelligent Leadership

Emotionally intelligent leadership:

Game changing for business, life changing for people.
By Ben Palmer

If you are a leader in business looking to improve your organisation’s performance, you might want to consider improving your capacity to identify, understand, and manage emotion, that is, your emotional intelligence. A wide range of research studies over the last decade has shown a direct link between the way people feel and the way people perform in the workplace. For example, research conducted by the Society for Knowledge Economics in the Australian labour market found that people in high performing workplaces typically feel more proud, valued, and optimistic than those in low performing workplaces; conversely, people in low performing Australian workplaces typically feel more inadequate, anxious, and fearful. Leadership is fundamentally about facilitating performance, and research on emotional intelligence has shown that a leader’s emotional intelligence is key to their capacity to facilitate emotions in employees that drive high engagement and performance.

To illustrate this point, Genos International, part owned by Swinburne University (a human resource consulting company that specialises in developing leaders’ emotional intelligence, www.genosinternational.com), teamed up with Sanofi (the world’s fourth-largest pharmaceutical company, www.sanofi.com) to investigate whether developing sales leaders’ emotional intelligence would improve the sales revenue generated by their sales representatives. In order to control for market influences, Sanofi randomly placed 70 sales representatives (matched on tenure and current performance) into two groups:

  1. The control group: this group and their managers received no emotional intelligence development training; and
  2. The development group: the managers of this group participated in Genos International’s award-winning emotional intelligence development program.

The Genos development program involves an emotional intelligence assessment for each person before and after the program (to create self-awareness and measure behaviour change), together with a number of short, focused development sessions over a six-month period on:

  1. How to improve your capacity to identify emotions, and 
  2. How to improve your capacity to effectively regulate and manage emotions

Development in these areas makes leaders more self-aware, more empathetic, more genuine and trustworthy, more personally resilient, and better at influencing others’ emotions. Ultimately it helps leaders make their employees feel more valued, cared for, respected, informed, consulted, and understood. On average, the emotional intelligence of the sales managers improved by 18 percent. As can be seen in the graph below, this helped facilitate an average 13% improvement in the Development Group’s sales performance relative to the Control Group’s: a 7.1% improvement in the first month following the program, a 15.4% improvement the month after, and a 13.4% improvement the month after that (as measured by retail sales revenue by territory). The revenue of the Control Group stayed flat and in the same revenue band during this period.

[Figure: monthly sales revenue improvement for the Development Group versus the Control Group in the three months following the program]

The improvements in revenue generated by the Development Group returned approximately $6 for every $1 Sanofi invested in the program. The findings of the study have been published in a peer-reviewed journal article, which can be downloaded from the Genos website (http://static.genosinternational.com/pdf/Jennings_Palmer_2007.pdf).

Feedback from the participants showed the program not only helped improve the sales performance of reps and their managers; it also helped them improve their relationships with each other. At the time, employees were navigating a difficult period as the bumps from a merger were ironed out and two different company cultures were integrated. As one participant put it, “I have seen improvements in behaviour that have increased the bottom line with sales reps. From a management perspective, increased skills that have led to more buy-in, acceptance, improved spirit, and better communication. However, the greatest benefit I received from the program was an improved relationship with my 14-year-old daughter”.

This participant feedback highlights the added benefits of improving your emotional intelligence. Your capacity to identify, understand and manage emotions contributes to your life satisfaction, stress management and the quality of your relationships at home and at work. That’s why developing your emotional intelligence can be game changing for your business, and life changing for you and your people.

 To improve your skill at identifying and understanding emotions you can:

  1. Stop and reflect on the way you feel in the moment. Take the time to label the feelings you are experiencing and reflect on the way they might be influencing your thinking, behaviour and performance.
  2. Become more aware of other characteristics that interplay with, and indeed cause, your emotions, such as your personality, values, and beliefs. By understanding these you can become better at identifying different emotional triggers and the way you (and others) typically respond to them. This awareness is key to adjusting the way you feel and respond to events.

  To improve your skill at managing emotions you can:

  1.  Eat better, sleep more, drink less and exercise (if you aren’t already).
  2. Adopt a thinking-oriented emotional management strategy, like Edward de Bono’s Six Thinking Hats, and use it when strong emotions arise.
  3. Adopt a relationship strategy: find someone who’s great at listening and helping you gain perspective on events.
  4. Search the app store; there are some great emotional management apps out there today. For example, Stress Doctor, a mobile app that helps you reduce your stress level in just five minutes via a biofeedback technique that syncs your breathing rate with your autonomic nervous system (ANS).

If you would like more information on Enduring Impact Leadership Training please contact auckland@opragroup.com.
