Why HR doesn’t Count – The Touchy Feely side of Human Resources

A hallmark feature of high performing businesses is the commercial value that the Human Resources department contributes to the organisation. In such organisations, HR has a seat at the boardroom table and even junior HR personnel are highly attuned to the commercial drivers of the business. For example, they can quote staff attrition rates and the return on investment (ROI) ratios for the most recent leadership development program. In short, they know how their role contributes to saving or making money for their organisation. In most organisations, however, HR is perceived as a cost centre that serves little more than an administration function, and its personnel struggle to communicate how they make a difference to the organisation. Which type are you?

So do organisations create high performing HR, or does a certain type of HR professional help create a high performing organisation?

Through a series of lectures recently presented by OPRA at an Australian university, 100 HR graduates were profiled using the Jung Type Indicator (JTI) published by Psytech, an equivalent of the Myers-Briggs Type Indicator (MBTI). The results highlighted that a significant proportion of HR graduates profiled as ESFJs. That is, they tend to be more outgoing and get their energy from engaging with others (Extraversion), which is useful for connecting with staff. They profiled as being quite detail oriented (Sensing), which is ideal for developing policies and procedures and for attending to contractual or payroll issues. Furthermore, the results indicated that they are likely to make decisions based on personal values and empathy when engaging with people (Feeling). Finally, the profiling revealed that they tend to be organised and methodical in their approach (Judging), which is likely to translate to quality work and projects being completed.

However, it is the inherent nature of certain personality characteristics of an HR professional which might limit their capacity to make it to the boardroom. For example, profiling as a Sensor means they are naturally less likely to consider the broader or more strategic issues of an organisation (Sensors versus iNtuitors). Furthermore, the results suggest that HR professionals may be more interested in maintaining harmony than in critically evaluating information relating to the commercial issues of the business (Feelers versus Thinkers).

This idea was recently put to the test when OPRA was engaged to assess candidates for the position of HR Director with an ASX100 listed company (NB: the 15FQ+, a robust personality tool, was used instead of the JTI). Even when reviewing resumes there were two clear types of candidates. There were those who quoted facts and figures relating to strategic business optimisation projects and cost reduction initiatives. There were other candidates who did not communicate their value proposition in terms of commercial achievements, but instead listed detailed roles and responsibilities.

Through this recruitment process OPRA also profiled the existing executive team, including the CEO and the CFO. The profile of the executive team highlighted a clear tendency to be more strategic and commercial in their approach (iNtuitive and Thinking). In the end, the personality profile of the preferred or ‘shortlisted’ candidates, as determined by the panel of Board members, was balanced overall, suggesting the capacity to be quite agile, with a clear propensity to critically analyse information and display sound commercial acumen. The executive panel was counting on the preferred candidate making a commercial and measurable difference to the organisation!

The implication for HR professionals is that a greater focus needs to be placed on assessing, translating and communicating their commercial value. OPRA’s Program Evaluation Framework should be at the forefront of HR’s ‘thinking’ before and after delivering programs, whether they be focused on recruitment initiatives, leadership development or outplacement. The key points of the framework include:

  1. The Purpose or Logic of the Program

This is the most critical stage of any HR initiative as it requires clarity around the purpose, the goals of the program, the desired outputs of the program, and the activities which will help to achieve this. For example, the overarching purpose for a leadership program might be to increase profitability.


  2. Analysis of Information

Given the intended program outcomes, what information will inform the success of the program? For example, changes in performance, reductions in absenteeism, percentage change in leadership capabilities, etc.


  3. Stakeholders

This stage of a program evaluation requires the identification and engagement of stakeholders to provide input regarding the program. Key stakeholders are likely to include Program Sponsors, Participants, Managers, Peers, Subordinates, Suppliers etc. This stage will also help inform who should receive feedback about the program. 


  4. Data Collection

What methodologies will be used to assess the program outcomes? The evaluation of a leadership program, for example, may include 360 degree survey results, performance data, interview feedback from stakeholders, and staff survey results.


  5. Reporting / Communication

The reporting of the program evaluation is generally the final step, but one which is often given limited attention. The typical outputs for most program evaluations include a formal report and a presentation of the program results intended for the Program Sponsor. Additionally, a summary report may be developed for a more general audience, such as a one page case study. A communication plan should also be considered so as to disseminate the relevant information in the most appropriate and effective way.

In closing, it is hoped that this article helps to challenge the ‘thinking’ of HR Professionals regarding how they communicate their value to organisational performance. It’s time to start counting the value, not the cost of HR.


Usefulness Trumps Validity

Validity is perhaps one of the most misunderstood concepts in HR analytics and psychometrics in particular. This is a topic that I have previously written about on this blog but the message has yet to fully resonate with the HR community. The most common question that OPRA gets asked in relation to any solution we sell, be it an assessment, survey or intervention, continues to be “What is the validity?”

On the face of it this is a perfectly reasonable question. However, when probed further, it becomes clear that there remains a gap in understanding what validity translates to in terms of business outcomes. The answer to this question is invariably a validity coefficient rolled off the tongue that then satisfies some checkbox prescribed for decision making.

Instead of asking about validity, the real question should be “How useful is this assessment or intervention?” This leads to the more focused question of how useful the assessment or intervention is for the problem (or problems) one wishes to solve. Thus the question is not one of simply producing a number to satisfy artificial validity criteria, but is reframed to the business imperative of usefulness.

Ironically, this in no way limits the rigour with which an assessment, intervention or survey will be evaluated. On the contrary, the bar now becomes far higher. The reality is that obtaining suitably respectable correlations, such as r = 0.3 between some measure and an outcome, is not particularly difficult. What is far more difficult is to put this figure in the context of a system in order to determine usefulness. Fortunately there are frameworks that help in this thinking.
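To make that concrete, here is a minimal simulation sketch (illustrative figures only, not OPRA data or a prescribed method) of what an r of 0.3 can mean inside a selection system: hiring the top fifth of applicants on a test with that validity noticeably lifts the share of above-median performers among hires.

```python
import math
import random

random.seed(42)

r = 0.3          # assumed validity coefficient between test and performance
n = 100_000      # simulated applicant pool

# Generate correlated standard normal (test, performance) pairs:
# performance = r * test + sqrt(1 - r^2) * noise
pairs = []
for _ in range(n):
    test = random.gauss(0, 1)
    perf = r * test + math.sqrt(1 - r * r) * random.gauss(0, 1)
    pairs.append((test, perf))

# "Success" = above-median performance; hire the top 20% on the test
perf_median = sorted(p[1] for p in pairs)[n // 2]
pairs.sort(key=lambda p: p[0], reverse=True)
hired = pairs[: n // 5]

base = sum(p[1] > perf_median for p in pairs) / n
selected = sum(p[1] > perf_median for p in hired) / len(hired)
print(f"success rate hiring at random: {base:.2f}")
print(f"success rate hiring top 20% on the test: {selected:.2f}")
```

Re-running with different selection ratios or base rates changes the gain markedly, and whether any gain is worth having still depends on hiring volume, cost and alternatives, which is exactly the systems context argued for above.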

One in particular that I’m drawn to is the Key Evaluation Checklist developed by Professor Michael Scriven, a concept that has been furthered by more recent work in the field. The point is that usefulness combines statistics, logic and systems thinking to establish the merit, worth and significance of what is being evaluated. This provides a far more systematic way of thinking about validity and an applied framework for determining usefulness.

Likewise, when we are interested in truly understanding construct validity, the standard correlations between like measures simply do not suffice. What is required is to understand the nomological network in which the construct exists and demonstrate not only what it correlates with (convergent validity) but also what it can be discriminated from (discriminant validity). In this way we build a deeper understanding of the construct of interest and, in turn, increase our understanding of its usefulness.
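As a toy numeric illustration (simulated scales and a hypothetical trait, not real instrument data), the pattern looks like this: two measures loading on the same construct should correlate substantially with each other (convergent), while correlating near zero with a measure of an unrelated construct (discriminant).

```python
import math
import random

random.seed(7)
n = 20_000

# Latent trait (hypothetical construct, e.g. a conscientiousness factor)
latent = [random.gauss(0, 1) for _ in range(n)]

def scale(loading, trait):
    # Observed scale score = loading * trait + noise
    return [loading * t + math.sqrt(1 - loading ** 2) * random.gauss(0, 1)
            for t in trait]

scale_a = scale(0.8, latent)                      # one measure of the construct
scale_b = scale(0.8, latent)                      # a second measure of it
scale_c = [random.gauss(0, 1) for _ in range(n)]  # unrelated construct

def corr(xs, ys):
    # Pearson correlation, computed from scratch for transparency
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

convergent = corr(scale_a, scale_b)    # expect roughly 0.8 * 0.8 = 0.64
discriminant = corr(scale_a, scale_c)  # expect roughly zero
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

It is the whole pattern, strong where theory predicts strength and weak where it predicts weakness, that builds the nomological case, not either coefficient on its own.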

In the world of HR analytics and big data we sometimes forget that usefulness, not validity, is our ultimate goal. Number crunching is but a tool in this process. What is more important is an understanding of the framework in which usefulness will be demonstrated. When good analytics meet good applied thinking and methodologies, progress will ensue. But to mistake validity for an end-point is to render any HR analytics project irrelevant to adding value to an organisation.


The 360 Story – Introduction to 360s

360-degree surveying is a popular way for organisations to evaluate performance, assist employee development, and support talent management processes. By one estimate, multi-source feedback such as 360 surveying is used in 90% of Fortune 1000 organisations, and this trend is reflected across a broad range of industries and organisation sizes.

Although a 360 can be seen as a one-stop-shop, the process must be handled with care to ensure the outcomes are positive and meaningful. To help ensure sustained developmental change among participants there are some key points to keep in mind. Before embarking on a 360-Degree process it is important to ask yourself:

1. Why are we doing a 360-Degree survey and what are our desired outcomes?

It is important that there is a clear understanding across the business as to why the 360 is being done. The 360 process needs to be transparent from the very beginning. Without understanding the goals of the process, there will be little change and development as a result.

2. Is our organisation ready for a 360 process? How will we gain buy-in and support our staff along the way?

This can be a tough question to answer, but for any 360 process to be meaningful, individuals need to be open to giving and receiving feedback. It might mean that to get the best from the process you need to have evaluators complete some feedback training so that they provide constructive and helpful feedback and advice. This can also help avoid the 360 becoming an opportunity for people to air their general grievances. This links into a key action point for 360s: they must be well communicated in terms of intentions, ongoing consequences and action points. Finally, having the time and infrastructure to support staff is imperative. Although having objective third parties undertake coaching from 360s can lead to better outcomes, this cannot be at the expense of recognition within the organisation.

3. Who are the most appropriate participants and evaluators?

Participants should be willing rather than forced to take part. Likewise, some thought should go into who the evaluators should be. They should be asked in advance of receiving a completion link. They should be people who are comfortable providing accurate, constructive, and honest feedback, and who have seen the participant operate across a range of situations and over a reasonable amount of time.

4. When is the most appropriate time to conduct the 360?

360s are time consuming, so other operational requirements should be kept in mind when deciding to run one. Being aware that some evaluators will have multiple evaluations to complete is also important; allowing them time in work hours to complete these will help ensure buy-in and completion. As well, 360s can reflect the overall organisational environment, so be aware of how change could affect perceptions.

5. How will you support people in ensuring sustained growth and prolonged outcomes?

360s offer a lot in terms of potential growth; however, before embarking on a 360 process it is important that consideration is given to how this will be managed. 360s are a process, not a singular event. Ongoing coaching and support are needed, along with consideration of individual goal setting and performance requirements.

For more information about how 360s might be useful for your organisation, please get in touch with your local OPRA office.


Morgeson, F. P., Mumford, T. V., & Campion, M. A. (2005). Coming full circle: Using research and practice to address 27 questions about 360-degree feedback programs. Consulting Psychology Journal: Practice and Research, 57(3), 196-209.


Leaving Value – Exit Interviewing as a Strategic Intervention

Despite their reputation, exit interviews are not a waste of time. But there’s a catch.

A common complaint from organisations is that exit interviews are a waste of time, effort, and money. The reason for this is that they are simply done as part of the checklist for any exiting employee. Box ticked. Job done. But therein lies the problem. Exit interviews are only as useful as the information gained, but this is just one part of the puzzle. Simply getting information achieves very little; real value comes from the information being applied. While the old adage says knowledge is power, it would be more apt to say that knowledge applied is invaluable. If this is the case with exit interviews, two key questions need to be answered: how can organisations ensure that they are getting accurate insights from exiting employees, and how can the information gained be used more strategically?

How to gain more accurate insights from exiting employees

When an employee leaves an organisation, there are a number of reasons why they might withhold or distort the information they provide in an exit interview. In particular, research has highlighted the lack of personal benefit, fear of repercussions, social desirability, and the presumption that honest answers are a waste of time and effort because the information won’t be used anyway. Herein lies a Catch-22: employees are more likely to be honest if they have seen action being taken as the result of exit interviews. However, taking action based on inaccurate information is risky at best. Getting to this point is likely to be a work in progress, but some simple suggestions to bridge the gap include:

  • Ensure confidentiality and allow anonymous responses to reduce the risk of retaliation (contracting out can help ensure this)
  • Avoid providing individual reports to managers; instead, group responses by division or aggregate them organisation-wide on a quarterly or annual basis
  • Give exiting employees the option to answer in person, over the phone, or via survey. Each method has its pros and cons, but the employee should decide what they are most comfortable with to ensure the most accurate disclosure
  • Increase transparency by demonstrating to employees how exit information is facilitating change in the organisation (e.g. provide annual updates on changes made as a result of employee suggestions)

How to use Exit Interviews more strategically

What is becoming more apparent is that individual exit interview results are meaningless in isolation, simply acting as file-fillers. To add value, interview results must be combined, analysed, interpreted, and presented as trends, identifying patterns in the data. From here, it is imperative that action is taken on the results to drive meaningful change; otherwise they are nothing more than a symbolic gesture.

  • Make exit interviews more than a symbolic gesture: commit to using data to identify and solve issues
  • Know how and why data is being collected and communicate this clearly to all employees
  • Have a framework within which exit interview findings can be applied
  • Separate involuntary leavers from those leaving voluntarily. While those who are made redundant or dismissed may simply use the exit interview as an opportunity to air their grievances, their feedback may still offer insight into how to avoid similar situations in the future.

Reflection Points

What are you hoping to gain from exit interviewing?

Although exit interviewing is often done as a matter of course, it is important to align it with organisational goals and aspirations. Using the data positively and proactively should contribute to overall success and to the bottom line through a reduction in turnover costs. However, it is important to note that solutions cannot happen in isolation; rather, they need to be considered alongside organisational culture and viewed as a process of improvement. Without that future focus, exit interviews may not have as much value as they could.

Do you have time to ensure strategic use and change?

Outsourcing might be an option to improve the chances of honest responses. Alternatively, independent analysis of results can remove the time burden from HR, allowing them to focus on achieving their set goals.

Is it a case of too little, too late?

Oftentimes, hiring good employees and keeping them are two vastly different challenges. That said, addressing turnover should start at selection: attracting the best staff and looking to the future by considering retention factors. Selecting employees who are likely to be high performing should go beyond simple job requirements to culture fit and motivations. Once employees are settled in the organisation, it is worthwhile understanding how they have been socialised, whether their expectations are being met, and what support they might need going forward. Post-appointment interviewing offers a way of tracking trends across the employee life-cycle so that issues can be addressed before they push individuals out of the organisation.

Regardless of whether exit interviews are conducted in-house or contracted out, there are some easy and practical steps that organisations can take to help improve their exit processes and ensure the safety and comfort of all involved. For more information about how you can address turnover and use exit interview information more strategically, please get in touch with your local OPRA office.

 Further reading:

Allen, D. G., Bryant, P. C., & Vardaman, J. M. (2010). Retaining talent: Replacing misconceptions with evidence-based strategies. Academy of Management Perspectives, 24(2), 48-64.

Carvin, B. N. (2011). New strategies for making exit interviews count. Employment Relations Today, 38(2), 1-6.

Frase-Blunt, M. (2004). Making exit interviews work: Properly collected and analyzed data can provide valuable insight into employees’ attitudes. HR Magazine, 49(8), 109-113.


Ethics in our profession – Finding more questions than answers

In my profession, I have often advocated practicing within the limits of my competencies and urged others to do the same. I preached, with near religious fervour, that “competent professionals will admit what they don’t know; incompetent ones will be eager to impress you with how much they do.”

I often reflect on this position, priding myself on operating within the boundaries set by my professional code of ethics. At the same time I also question whether this is merely a clever ruse to hide my own incompetence and inadequacies. I have not found that answer to date. I suspect that many professionals, in psychology or otherwise, struggle with the same question over the course of their careers.

Whether as a guideline for competent practice or a benchmark to define the true professional, ethics has been the foundation of many professions that offer services to their fellow man. It dates as far back as 400 B.C., when the Hippocratic Oath was written for the medical profession. Since then, many other professions have built on those philosophical roots to define guidelines that distinguish right from wrong, good from bad.

Today, guidelines defined by the established professional bodies such as the Australian Psychological Society (APS), British Psychological Society (BPS), and the American Psychological Association (APA) are referred to as the benchmark for ethical practice in psychology. Proper post-graduate education is not complete without a module in ethics, filled with heated discussions of case dilemmas and references to the different societies’ guidelines. My education in ethics left me with more questions than answers but it did make me a discerning and cautious professional.

In my relatively mundane career, ethics guided my decision-making process – refer to the legislation, the code of ethics, the guidelines in your organisation if any, and then make your best professional decision. There was always some form of structure or doctrine to fall back on.

Recently that decision-making framework fell into a bit of disarray when I read that the APA changed its ethical guidelines to support interrogation techniques that have since been labelled as torture. The report also pointed out that the APA did this in collusion with the Pentagon and the Central Intelligence Agency, as a means to secure the foothold of the psychological profession in post-9/11 America. As the debates and counterarguments over this report reached a flurry in cyberspace, I found more disturbing (professionally, not graphically) commentary regarding certain big names within the industry. (I leave the audience to read further into these issues and form their own opinions.)

If the morality of the rule-makers is brought into question, where does that leave us? In the moral struggle of our professional careers, if the beacon of ethical practice on which we depend crumbles, where does that leave us?

Perhaps to expect an infallible ethical code is overly idealistic. In an idealistic world one might imagine a definitive guide that can resolve all moral problems that confront us, and every professional’s primary emphasis would be protecting the public interests. Such guidelines should ideally reflect our profession’s moral integrity.

In reality, however, our codes seem to gravitate towards a state of protectionism, of risk management, and often of political compromise. Just as I reflected on the risk of my ethical preaching becoming more of a self-defence mechanism, I reflect now on a similar paradox in our ethical codes. Is the code there to minimise the risk we may place the public in, or to protect the individuals under its care from precarious ethical or legal circumstances?

The eminent psychologist Calvin Hall (1952) argued that an ethics code is bound to be filled with ambiguities and omissions, and that it plays into the hands of those who skirt the line. Professionals operate in the grey area of what is not explicitly expressed or what can be loosely interpreted. Looking at the allegations against the APA in this instance, it worries me that not only are players trying to work the loopholes in the rule book, but the rule-makers may be intentionally planting loopholes for the players.

Still, in consideration of a fair counter-argument: even if the allegations were undisputedly true, do they not comply with the ethical hierarchy? The interests of the profession and the interests of society do come before the interests of the client. In the case of ethical dilemmas, issues of terrorism are deemed special circumstances that override our usual guidelines in order to protect public interests and maintain social order. Does this draw us into the ethical trap of being consequentialist? Does the greater issue of national security justify the twisting of the guidelines to its purposes, or in this case the re-writing of the guidelines?

I find myself, yet again, with more questions than answers.

Perhaps ethical perfection is a pipe dream, but I trust we should at least strive to be ethically proper. Legislation or professional guidelines are after all crafted by selected groups for selected purposes. The right professional decision relies on the judgement of the professional at the end of the day.

In closing, while I am unable to share any answers, I will share the core principles that have guided my professional behaviour since I first learnt them in graduate school:

  1. Doing no harm – eliminate or minimise the potential for damage.
  2. Respecting autonomy – respect the rights of individuals to decide how to live their lives.
  3. Benefiting others – decisions should have the potential of a positive effect on others.
  4. Being just – actions should be fair.
  5. Being faithful – act with fidelity, loyalty, truthfulness, and trustworthiness.
  6. According dignity – view others as worthy of respect.
  7. Treating others with care and compassion – be considerate and kind.
  8. Pursuing excellence – maintain competence, do one’s best.
  9. Accepting accountability – act with consideration of possible consequences; accept responsibility for action and inaction.

If, like me, you question whether your ethical principles are driven by genuine public interest or self-serving deceit, consider this: you can smoke your way through points 1 through 8, but point 9 will still catch up with you eventually.


For additional reading on professional ethics, consider the following chapters taken from some great books on the topic:

Koocher, G. P., & Keith-Spiegel, P. (1998). On being an ethical psychologist. In Ethics in psychology: Professional standards and cases (pp. 3-26). New York: Oxford University Press.

Hall, C. S. (1952). Crooks, codes, and cant. American Psychologist, 7, 430-431.

Bersoff, D. N. (1999). Ethics codes and how they are enforced. In Ethical conflict in psychology (4th ed.). Washington, DC: APA.

Steinman, S. O., Richardson, N. F., & McEnroe, T. (1998). The ethical decision-making process. In The ethical decision-making manual for helping professionals (pp. 17-32). Belmont, CA: Brooks-Cole.


Data is an ingredient not the meal: 5 key things to think about to begin turning data into information

Unless you have been shut off from the outside world in recent times, you are probably aware that big data is one of the current flavours of the month in business. As an I/O psychologist, I’m particularly interested in how this concept of big data is impacting thinking about people problems in companies. Indeed, a common request made to OPRA, whether in Australia, New Zealand or Singapore, is for help with supposedly big data projects. The irony is that many of these requests are neither primarily about data nor involve big data sets. Rather, the proliferation of talk on big data has made companies realise that they need to start incorporating data into their people decisions.

Big data itself is nothing new. OPRA were involved in what could be described, in a New Zealand context, as a big data project in the 1990s, attempting to predict future unemployment from, among other variables, psychological data to help in formulating policy on government assistance. What is new is the technology that has made this type of study far more accessible, the requirement for evidence-based HR decisions, and the natural evolution of people analytics into a core part of HR.

While data has value in solving organisational problems, it is not a solution in and of itself. This point is lost on many clients who think that just by having data, the solution to problems such as retention, selection success and training evaluation will somehow reveal itself. As captured in the title of this blog post, data is not the solution. Rather, data is simply one ingredient in better understanding the problem being faced. However, to solve problems with data we must first know how to use the data ingredient correctly to form decisions.

  1. Define what you want to cook and whether the meal is worth eating: Start with a research paradigm and define the problem that you want to assess. Before we can begin to chuck data at a problem, we need to have identified the problem first. A classic research paradigm will help in this regard. At a basic level, start by thinking about the problems you wish to solve, map the antecedents and consequences of each problem, and generate some hypotheses to be tested.
  2. Make sure you have all your ingredients: Having mapped the problem you then need to check what variables you have data for and what data is missing. Where are the gaps in your understanding to the problem? Where are you currently missing data to solve the problem that you want to solve? These are the types of questions you need to be able to answer prior to starting any analysis. Where there is a gap you need to look at how to collect data such that there is no glaring omission to your analysis.
  3. Make sure your ingredients are ready for processing: Not all data is equal. Before you can even begin to work with your data, you need to make sure it is fit to be analysed. For example, many statistical operations assume that your data will be normally distributed and that you will be able to differentiate people against this model. This is often not the case. A common example is performance data, which is often positively skewed. Making your data fit for purpose is vital before you begin chucking statistical operations at the data willy-nilly in the hope of finding something conclusive.
  4. A simple dish is often most easily consumed: In previous posts we have discussed the idea that the best solutions to problems are often the simplest. There are levels of sophistication that can be applied to data analysis, but this does not mean that we must always adopt the most complicated analysis. On the contrary, the purpose remains to solve a problem, and this can often be achieved using qualitative and quantitative techniques to ultimately tell a story. As reiterated throughout this blog post, data is but an ingredient. Look first at simple techniques to tell a story, such as graphing, simple descriptive and inferential statistics, and simple multivariate models. Never forget that the purpose is not to be blinded by statistics but to use statistics to see more clearly.
  5. The proof is in the pudding: Doing the analysis is one thing, but the findings need to be evaluated. Evaluation is far more than a measure of statistical significance; it is looking at the practical significance of the findings. Will the findings have an impact on the organisation? Is the difference between two divisions large enough to really make a difference? Even if an intervention worked, is there a cheaper way of getting to the same outcome? These questions will not be solved by statistics alone but require an evaluative framework, such as the Key Evaluation Checklist, to turn data into decisions.
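As a small sketch of the ingredient-preparation step above (the ratings below are simulated for illustration, not a client dataset), checking skew before analysis can be as simple as computing the third standardised moment; a log transform often tames positively skewed performance data:

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical performance ratings: log-normally distributed,
# hence positively skewed, as is common for performance data.
ratings = [random.lognormvariate(0, 0.5) for _ in range(5_000)]

def skewness(xs):
    # Third standardised moment: 0 for symmetric data, > 0 for a right tail
    m = statistics.fmean(xs)
    m2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m3 = statistics.fmean([(x - m) ** 3 for x in xs])
    return m3 / m2 ** 1.5

raw_skew = skewness(ratings)
log_skew = skewness([math.log(x) for x in ratings])
print(f"skew of raw ratings: {raw_skew:.2f}")
print(f"skew after log transform: {log_skew:.2f}")
```

If the raw skew is material, either transform the data or choose methods that do not assume normality before comparing people against the model.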

The skill of working with data is now a requirement of the strategic HR professional’s tool kit. Not surprisingly, this model of working from understanding the problem through to evaluation is central to the OPRA Consulting model. While starting to work with data, and big data in particular, can be daunting, once some basic fundamentals are understood this fear can be alleviated. As with cooking, you may still need a qualified chef to make sure everything is on track. However, once you have grasped the fundamentals, there is a whole raft of dishes you can cook for yourself. At worst, you will develop enough of a palate to know what to order and appreciate the end product.

OPRA offers both HR data analytics services and training for HR professionals working with data. If you would like more information about any of these offerings, please don't hesitate to contact your local OPRA office.


In Defence of the Scientific Method

I recently listened to a podcast interview with Dr Adam Gazzaley, a neuroscientist and Director of the Gazzaley Lab at UC San Francisco. While Dr Gazzaley's work is both interesting and practical, the real takeaway for me from the podcast was that it reconfirmed my commitment to the scientific method. This is not to be mistaken for a belief in science, with which I have become more and more disillusioned in recent years. Rather, it is to avoid any notion of throwing the baby out with the bathwater, and to make clear the distinction between the flawed practice of science and the body of techniques that comprise the scientific method.

The scientific method dates back to the 17th century and involves systematic observation, measurement and experimentation, together with the formulation, testing and modification of hypotheses (cf. https://en.wikipedia.org/wiki/Scientific_method). Without wishing to go into the history of the method's development, the application of these principles has since been the basis for societal development. The refinement of this thinking by the likes of Karl Popper, together with a multi-disciplinary approach and the appropriate use of logic and mathematics, is central to our search for truth (using the term loosely).

The problem is that the discipline of science continually lets itself down. It allows itself to be compromised, tarnishing the scientific method in the process. This issue has been discussed extensively in this blog, and a recent article in the Times Higher Education added to the growing body of work questioning the usefulness of published scientific research.

The failure of science to live up to its own ideals allows detractors to question the scientific approach and to replace it with opinion, faith and n = 1 case studies as the new standard of knowledge. This is a terrible shame, as it detracts from one of the true breakthroughs in the human story.

The message of this blog is simple. The practice of science in modern society deserves to be questioned, and the failure is systemic, from universities (refer to previous post) through to publications (refer to previous post). This failure is, in the main, a by-product of the commercialisation of all things science. However, rather than being the fault of the scientific method, it is the scientific method that provides the lens through which these failures can be seen most clearly.

Having been involved in building a business in I/O psychology, I understand the role and requirements of marketing; understanding this is a key part of partner and staff induction. If I/O psychology is to progress, however, our discipline must still adhere as closely as possible to the scientist-practitioner model (refer to previous post), guided by adherence to the scientific method.

Humanity owes a lot to the scientific method, and we must never allow this to be forgotten when critiquing the current practice of science. The method must be given the respect it deserves. To paraphrase Dr Gazzaley, simply being "based in science" is not our benchmark. Adherence to the scientific method, with rigour and objectivity and noting all limitations, is the litmus test by which we need to measure ourselves as I/O practitioners.
