Wednesday, September 2, 2015

How HBR got assessments wrong

Was it the 90° heat?  Had everyone left for vacation?  Did a disgruntled sub-editor feel like sticking it to the readers?  I have racked my brains but still cannot come up with a good reason why HBR published Peter Bregman’s error-filled piece about assessments on August 19.

Bregman claims that assessments are basically a bag of inconsistent tricks that add nothing to talent management. Here are five big errors in his argument:


Error 1: All assessments are essentially the same

Bregman begins with a cute anecdote about taking the Myers-Briggs Type Indicator (MBTI) test with his college girlfriend.  He didn’t enjoy the experience and points out – quite rightly – that MBTI is neither a valid nor a reliable predictor of work performance.  But then he says all other personality assessments are just as bad.

Sorry, that’s just not true.

Just as there are good tools and bad tools, there are good assessments and bad assessments.  Good assessments predict future performance accurately and consistently.  Bad assessments don’t.  

How do you tell the good from the bad?  Look at three things: Does the assessment focus on a factor that has been shown to be relatively stable?  Has that factor been proven by extensive research to predict work performance?  And is the assessment itself a valid and reliable measure of it?

Let’s be clear: a bad assessment may have other uses.  You wouldn’t use an axe to carve a diamond, any more than you’d use a jeweller’s saw to cut down a tree.  Many people find MBTI a helpful start in increasing their self-knowledge, or a fun part of social interaction (it sure beats “What’s your decorating style?” as a way to think and talk about your co-workers).  But as a tool to help predict how people will perform at work, it’s a non-starter.

Want to see what good predictive assessment looks like?  Click here.

Error 2: Self-assessments just reinforce self-image

Bregman’s second mistake is his claim that any assessment completed by an individual about him/herself is inherently biased. 

First of all, it depends on the type of assessment.  With cognitive ability tests, for instance – and cognitive ability is a very strong predictor of work performance – it’s obvious that an individual’s responses are not determined by self-image, and that self-completion is the only sensible way to get a valid result.  And there are some assessments – work culture assessments, for example, or profiles of work interests – where the assessment is looking at an individual’s internal preferences and comparing them to the job.  The best and most direct way to understand someone’s preferences is to ask specific, work-targeted questions and compare the answers to what actually goes on in the job or organization.

There’s a difference, though, between what you like and what you are like.  This is where Bregman’s arguments fall down.  Good personality assessments – assessments of internal characteristics rather than preferences or mental processing power – are designed to get behind self-image and reveal the truth about how individuals think, behave and go about their work. 

These good personality assessments get at that truth in various ways.  The most important is the construct validity of the assessment – how well its results correlate with external measures of the same thing.  Questions are carefully designed and tested to ensure their results are consistent with other questions already proven to measure the specific personality factor accurately.  And good assessments almost all contain cheat-catchers, to unmask users who are trying to create a specific set of results rather than answering the questions honestly – for obvious reasons, I’m not going to go into how those work.
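For the statistically curious: whether you’re checking construct validity (against another measure of the same trait) or predictive validity (against actual job performance), the basic arithmetic is a correlation coefficient.  Here’s a minimal sketch in Python – the score arrays, and the choice of a simple Pearson correlation via scipy, are my own illustrative assumptions, not how any particular assessment publisher actually runs its validation studies:

```python
# A minimal sketch of estimating a validity coefficient: correlate
# assessment scores with an external criterion (here, performance ratings).
# All numbers below are hypothetical illustration data, not real results.
from scipy.stats import pearsonr

assessment_scores = [62, 75, 81, 58, 90, 70, 66, 85]            # hypothetical test scores
performance_ratings = [3.1, 3.8, 4.2, 2.9, 4.6, 3.5, 3.2, 4.4]  # hypothetical ratings

r, p_value = pearsonr(assessment_scores, performance_ratings)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```

Real validation work does this across large samples and many independent studies, but the point stands: validity is a measured quantity, not a marketing claim.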

Constructing a good personality assessment is therefore complex, and relies on years – sometimes decades – of research to make sure it is a reliable test and not simply a reflection of an individual’s self-image.  If further proof were needed, just take one of these good personality assessments yourself.  Most of us find some uncomfortable truths as well as confirmation of what we already knew.  And, even more importantly, the research shows that these assessments predict performance much more accurately than any other method.

Error 3: People can’t be reduced to a test result

Bregman rejects assessments on the grounds that people are too interesting, too complex and too constantly in flux to be summed up by a single assessment.  The John Doe I worked next to this morning is not the same as the John Doe I talk to over lunch, and his infinite variety makes any attempt to measure his contribution at work an impossible task.

It’s hard to know where to start with this one, since Bregman quite clearly does not mean what he says – later on in the article he gives a vivid account of a group exercise he led where team members simplify their human complexity down to just five factors.  It seems his problem is not with reducing complexity per se, but with reducing complexity via an assessment. 

It’s worth pointing out that no assessment – no good assessment – would ever claim to reveal the whole truth about an individual.  But certain assessments and, even more powerfully, certain combinations of assessments, have been shown to predict performance at work with very, very high degrees of reliability, as Harvard Business Review above all knows.  This may not be the whole truth about a person, but it is an important, relevant and very reliable truth.

In practice, too, no assessment is ever used as a stand-alone verdict in the way Bregman suggests.  Assessments are used in combination with other evidence, such as interviews, college transcripts or recommendations from previous employers.  Which leads us to Bregman’s next error: that assessments close down interpersonal curiosity.

Error 4: Assessments choke curiosity

“As soon as we label something,” Bregman writes, “our curiosity about that thing diminishes.  Personality assessments are a shortcut to getting to: I know. And once we know something, we’re no longer curious.”

I’m not sure I’d agree.  And that’s not how I’ve seen assessments work in practice.  If anything, the result of good assessments is not dead-end labeling but more questions, both from the individual who is being assessed and from the potential employer looking at the assessment. 

The great thing about these follow-up questions is that they are targeted at the areas most likely to impact someone’s performance in the job.  An employer might think, Hmm, this person has really high potential competency for Leadership.  Let’s talk to them more about that at interview.  An individual might wonder, I thought I’d come out higher in terms of Networking, but when I look back on my experiences, maybe I haven’t been doing all I can to develop broad business connections – I guess I need to think about working on it more.

Of course, any assessment can be misused, but assessment makers take great pains to stop this from happening.  Good assessments (like good science papers) talk about probabilities, about potential, about the real-life implications of what the assessment reveals.  A typical output of the assessments my company provides to employers is an interview guide personalized to each candidate, based on the assessment results.  We developed this because we found that employers wanted help in asking the right questions to build on the information they received in the assessments.  Not only did they find that the assessments gave them rich and deep information about each candidate, they found that this information made them want to know more.

Truly, it doesn’t make sense to believe that knowledge diminishes curiosity.  Just think about the most knowledgeable people you know:  are they more or less curious than the average bear?

Error 5: People know better than assessments

Bregman would probably approve of my last remark, as the final point he makes in his piece is that people-to-people interactions and reflections are far more accurate, more comprehensive and more useful than any assessment.

Sadly, the evidence often proves the opposite. 

Few of us consider ourselves biased, but too often we see what we expect to see when we look at a coworker, or at a potential hire.  We favor tall people, blondes, men without facial hair and women who wear makeup.  None of these factors has anything to do with how well people perform at work, yet each has an uncomfortably significant impact on how likely someone is to get hired, or be well paid.

Malcolm Gladwell’s example of the impact blind auditioning had on orchestras is too well known to need repeating here, but it is worth checking out an update on this issue he posted after Blink came out.

I don’t want to sound like Gil Grissom from CSI, but we need to look for the evidence.  Hiring and assessment are too important to trust to our proven-unreliable, unstructured impressions.

HBR, how could you?

I guess I’m still as confused as when I started out about why a serious publication like HBR published this article.  At base, I think Bregman’s heart, if not his head, is in the right place.  He sounds like an excellent facilitator and a powerful motivational speaker.

But he confuses the how of people development with the what.  It’s great to get coworkers to come together in a safe and bounded environment to reflect on how they work.  But the focus has to be the real drivers of work output – the personality factors, ways of working and thinking, cognitive abilities, and work-culture and motivational fit that have been shown to predict high performance far more reliably and powerfully than anything else.  If, instead, people work on developing the characteristics they merely believe are important, they are at best wasting their time and at worst unwittingly sabotaging the business.

HBR has made its name by highlighting serious research and going deep into some of the most important issues impacting the world of work.  Why on earth did it publish this piece?