Tuesday, September 22, 2015

Don't hire the perfect candidate


Everyone dreams of hiring the perfect candidate – someone who’s already a star performer, who has the ideal background, qualities and qualifications for the job.  Job descriptions tend to be written explicitly around descriptions of this perfect candidate, and applicant tracking systems filter candidates based on how well they match this keyword summary of perfection.

There’s only one problem: hiring the perfect candidate never works out. 

Here’s why…

Not every kind of perfect matters

One of the biggest mistakes in hiring is looking at too many factors.  Not only is it hugely complex and time-consuming to pull together different sorts of data, but much of the information commonly gathered in the search for a perfect candidate is actually destructive to your chances of ending up with a high-performing employee.

It’s all about relevance.  If you base your hiring decisions on irrelevant data, you may as well write each candidate’s name on a piece of paper, drop them all into a glass bowl and pick one at random, Hunger Games style.  Letting in any irrelevant data will weaken the reliability of your candidate selection, skewing your results further towards the chancy end of the predictive scale.

Let’s look at an example.  Most recruiters salivate when the résumé of a candidate who has already done the job in question lands on their desk.  But when you look at the research into high performance, you’ll find that prior experience of the job is a pretty weak performance predictor, with a correlation of only 0.13 with future success.  It’s better than using graphology (the correlation coefficient there is 0.02), but it’s nothing like as high as the 0.71 or better correlation you get if you focus on the most highly predictive factors.

No wonder nearly half of all new hires are gone within eighteen months, when their recruitment was based on such weakly predictive selection methods.
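To get a feel for what those numbers mean, square the correlation: r² is the share of the variation in performance a predictor actually accounts for.  Here’s a back-of-envelope sketch in Python, using the coefficients cited above:

```python
# Back-of-envelope: how much of the variation in job performance
# each selection method explains (r squared), using the correlation
# coefficients cited above.
predictors = {
    "graphology": 0.02,
    "prior job experience": 0.13,
    "highly predictive factors": 0.71,
}

for name, r in predictors.items():
    print(f"{name}: r = {r:.2f} -> explains {r ** 2:.1%} of performance variance")
```

Prior experience explains less than 2% of the variation in performance; the most predictive factors explain around half.  That gap is what the rest of this post is about.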

But some other common selection tools are even worse than the résumé.  How relevant to someone’s job performance do you honestly think their Facebook photos are going to be?  How relevant is their height or hair color?  Their face shape?  Their answer to a question asking them to describe their closet?  

There’s worrying evidence that these sorts of obviously irrelevant things are often taken into account in candidate selection, but there’s not a single research study out there that shows any link between these factors and genuine high performance.

We’re all biased

Why, then, do we continue using these random factors when we’re choosing future employees?

Because we’re human, and humans aren’t always logical.

We all have treasured values and beliefs, some of them so deep we hardly know how to put them into words.  But we have no problem putting them into action.  Malcolm Gladwell tells a lovely story about a screened orchestra audition where the head of the Munich Philharmonic was so enraptured by the performance of a trombonist that he leapt up and yelled out “We’re hiring that man!”, only to nearly faint from shock when the screen was removed to reveal a woman.

The whole reason the Munich Philharmonic was conducting screened auditions was that the head of the orchestra wanted to make sure there was no gender bias in hiring.  I’m sure if you had asked him in advance whether a woman could be as strong a trombonist as a man, he would have agreed vigorously.  Yet when it came down to it, he heard an excellent trombonist and immediately associated a bunch of other qualities with that individual, including the possession of a Y chromosome.

It’s easy to laugh at such anecdotes, but we are all biased.  If you think you are not, just take one or two of these short tests which reveal the implicit prejudices many of us have on a whole range of issues.  Choose the tests that focus on areas where you are sure you are not prejudiced, for maximum impact.

How come we’re all so prejudiced?  It’s not – usually – because we’re terrible people.  Many scientists suspect that common prejudices simply reflect outdated thinking.  Back in the Stone Age, for instance, it might have made sense to choose the biggest person in the tribe to be the leader.  But as the nature of leadership challenges morphed from “Kill the saber-toothed cat before it eats the baby” to “Improve shareholder value”, height simply became less relevant.  Maybe in another few thousand years our instinctive reactions to leaders will have changed – by which time, perhaps, leadership itself will demand very different qualities.

We also develop biases based on the information we receive.  Most of the stories we hear about salespeople feature charming, back-slapping characters, the type of fun and energizing person whom everyone likes to be around, the life and soul of the party.

The thing is, evidence from high-performing salespeople in real jobs shows exactly the opposite.

It turns out that great salespeople, particularly for high-value sales, tend to be introverts.  They listen more than they talk.  They get their energy not from being in the thick of social interactions but from reflecting and planning alone.  Glad-handing does not result in higher sales.

I could cite multiple examples from other fields of work that prove the same point: when it comes to looking for perfect, most of us see only what we want to see.  Our gut feel in hiring is often (even usually) wrong.

Perfect candidate ≠ perfect employee

There’s another reason why you should never set out to hire the perfect candidate, even if you focus only on factors with proven relevance to performance and rigorously take steps to eliminate your own biases. 

It’s because the perfect candidate is very, very rarely the perfect employee.

Of course every business wants great employees.  But the way to get them is not to look for perfection in a candidate, even if you consider only the factors which are proven to be relevant to performance on the job.

The reason you don’t need perfection is that two different types of factor predict job performance.  The first group are Baselines, the second Differentiators.

Baselines: pass/fail courses

Baselines are the technical skills, knowledge or qualifications that a candidate has to have to be credible in a specific job – a clean driver’s license for a chauffeur, knowledge of HTML for a website programmer, Series 7 and Series 63 qualifications for a stock broker.  Every job has its own specific baselines, and in many jobs if you don’t have the baselines, you can’t even get your foot in the door.

But baselines only take you so far.  When researchers looked at the differences between top performers and the rest, they found very little evidence that superior baseline mastery predicts superior work performance.  Some top performers have high-level baseline skills, it’s true, but others scraped through at the third or fourth attempt.  It seems that baselines work like pass/fail courses in college – what matters is covering the ground, not whether or not you excel.

Differentiators: what it says on the can

Differentiators are, well, different.  They tend to be more complex constructs than Baselines – behavioral and thinking competencies such as Strategic Thinking, rather than a Baseline like passing the GMAT.  They encompass not just an individual’s capabilities but also his or her preferences and motivations.  They dig beneath the surface of technical skills to profound truths about how people solve problems, how they work with others, how they get things done.

Differentiators genuinely do differentiate performance.  Research over decades has shown strong correlations between the level of mastery of a particular Differentiator and success at work.  Each job has its own set of predictive Differentiators, corresponding to the consistent differences research has found between the best and the rest.  The more proactive and decisive a salesperson is at work, for example, the better his or her results.

While different things matter for different jobs, there are three broad Differentiators that are highly predictive of performance:
  • Cognitive ability – how you process information and solve problems
  • Competencies – the ways of working that lead to high performance
  • Culture fit – how well the working environment engages and motivates individuals

Every job will draw on a different mix of these three Differentiators.  Some have high requirements in terms of cognitive ability, for instance, while in others success is driven much more by competencies.  There is no reliable way to guess these requirements; you have to look at real data from high performers, or use assessment methods which have already incorporated such data.

The real trick is to know what is a baseline and what is a differentiator, and measure them differently.  If a factor is a baseline, just make sure your candidate checks the box.  If it’s a differentiator, look for depth of mastery and make sure it is relevant for your particular job. 
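For the programmatically minded, here’s a minimal sketch of that two-stage logic – baselines as a pass/fail gate, differentiators as a weighted depth-of-mastery score.  Every factor name, weight and score below is invented for illustration; a real job profile would come from high-performer data:

```python
# Illustrative two-stage screen: baselines gate entry, differentiators
# rank whoever passes. All names, weights and scores are made up.

def meets_baselines(candidate: dict, required: set) -> bool:
    """Pass/fail: every required baseline must simply be present."""
    return required <= candidate["baselines"]

def differentiator_score(candidate: dict, weights: dict) -> float:
    """Depth of mastery: weighted sum across job-relevant differentiators."""
    return sum(w * candidate["differentiators"].get(d, 0.0)
               for d, w in weights.items())

required = {"series_7", "series_63"}            # e.g. baselines for a stockbroker
weights = {"cognitive_ability": 0.40,           # hypothetical mix for one job
           "proactivity": 0.35,
           "culture_fit": 0.25}

candidates = [
    {"name": "A", "baselines": {"series_7", "series_63"},
     "differentiators": {"cognitive_ability": 8, "proactivity": 9, "culture_fit": 6}},
    {"name": "B", "baselines": {"series_7"},    # missing a baseline: out, however strong
     "differentiators": {"cognitive_ability": 10, "proactivity": 10, "culture_fit": 10}},
]

passed = [c for c in candidates if meets_baselines(c, required)]
for c in sorted(passed, key=lambda c: differentiator_score(c, weights), reverse=True):
    print(c["name"], round(differentiator_score(c, weights), 2))
```

Note that baseline strength never enters the score – checking the box is all that matters – while the differentiators are where depth counts.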

Check your perfect

So, instead of hiring an all-round perfect candidate, focus on just two things: the baseline requirements and the specific differentiators for that particular job, and measure them differently as outlined above.  This doesn’t add up to many factors – maybe ten in total.  Most can be easily and accurately measured by assessments and a half-hour focused interview.  Using this approach will cut the time and effort that goes into hiring, and get you, if not the perfect candidate, then as-close-as-you-can-get-to-perfect employees.

The trick is to resist the temptation of perfection.  It’s tempting to think that we should consider all the factors, that we should trust our gut feel, that we should always look for more and more data.  This only leads to more work for worse results.  Don’t do it.

Also, remember that people can change.  You aren’t going to get a perfect future employee.  You’re going to get someone who has great potential strengths, but might need to develop more competence in a couple of areas.  Then it’s a matter of what the person wants to do – are they willing to develop the mismatched characteristics, do they genuinely want the job and are they ready to start work?  So long as you get someone with good potential in the most important drivers of performance in the job, and with the motivation to work hard to be really good, you can work on the details.  You will probably increase their engagement by doing so – high performers usually want to learn from each job, and what better learning than increasing their capability to do well?

Nobody’s perfect, and in the end that’s a very good thing for employers and employees.

Tuesday, September 8, 2015

Is hiring all in the brain?


Applying brain-imaging science to hiring sounds cool and cutting edge – exactly the sort of thing we ought to be doing in the 21st century.  High-profile startups argue that neuroscientific recruitment and selection is more strongly predictive, more reliable, less biased and easier to implement than traditional methods.  

But can brain imaging research really be applied practically in the workplace?  And can it solve America’s multi-billion-dollar mis-hire problem?  

Let’s take a look at the evidence…

Hiring today: a $$$ no-brainer

One thing everyone can agree on: hiring today needs some serious fixing.  Almost half of new hires don’t last even eighteen months in the job.  The costs of these thousands of mis-hires – and they happen even at highly regarded companies – are frightening.  

No other core business process would be allowed to get away with these kinds of inefficiencies.  It’s time to get hiring to work.

First, though, we have to understand why hiring isn’t delivering great results.  It is unlikely to be because of under-investment – hiring costs continue to rise year-on-year, and were up by 7% in 2014.  Much more likely is that too much time and money is being spent on activities that do not accurately predict performance in the job.

Hiring, after all, is making a prediction – choosing in advance the best performer from a pool of potential candidates.  There has been a lot of research into which factors predict success at work, and many common selection inputs – the résumé, the traditional interview, a candidate’s years of experience – have been shown to be at best weakly reliable pointers to future success.

Of course, it is not only recruiters who are getting things wrong.  Not every job candidate has great self-knowledge.  It’s easy to get seduced by the idea of being a leader, for example, even if you rarely demonstrate the characteristics and abilities that typify great leadership in action.  And if we know ourselves only partially, we hardly know jobs at all.  Most of us have limited and distorted ideas about what different jobs and organizations are really like, about which factors reliably drive success and which are less relevant.

Neuroscientists are convinced they can do better.  They point to the scientific research behind their approaches and to the ease and accuracy of assessment.  Let’s take a look at what they do:

The case for neuroscientific hiring

We may not yet have the technology to create the World’s First Bionic Man, but using fMRI (functional magnetic resonance imaging) and other new approaches we can see with increasing clarity what’s popping in our brains.  

Researchers have peered inside people’s heads while they play games, make decisions, solve problems and experience emotions.  They have also looked at the interaction of brain processes with physical actions – how people’s faces or pulse rates change when experiencing specific emotions, for example.  From these experiments they have found evidence for four broad categories of neurological activity: mental processing speed and accuracy; memory; executive control; and perception and social cognition.

Companies have taken these research findings, and the experiments which reveal them, and used them to assess job and career fit.  Candidates take anything from two to twelve assessment exercises that feel like 1980s videogames or lab experiments.  They get feedback, which is sometimes quantitative (“You solved the problem faster than 60% of your peers”) and more often qualitative (“You’re risk-averse”; “You’re a quick thinker”).  Results are matched against jobs and careers, based on profiles determined by an employer and/or by data from similar jobs.  Typically, an individual is given career recommendations and perhaps development suggestions, while an employer gets presented with the profiles and contact details of candidates who match well to the job.

Neuroscience companies claim that this approach solves key problems with the hiring process: assessments that rely on candidates’ limited and often inaccurate self-knowledge, assessments that can be gamed by savvy candidates, and assessments that are subject to bias on the part of recruiters and hiring managers.  By using brain-games, they say, they can get at the real truth about a candidate to help individuals go beyond their prejudices to find the right career and help employers identify the right, high-performing new hire. 

Does it work?

Before we look at the specific claims made for neuroscientific hiring, I have a more fundamental question: does what is revealed by the neuroscience tests genuinely predict performance at work?

The evidence is mixed.  When we compare neuroscience data to other research linking ways of thinking to work outputs, we find some overlap, but also some differences.  Cognitive ability – mental processing speed and accuracy – is a big area of neuro-investigation and has been shown to be highly predictive of future work success.  But the evidence is much less clear when it comes to other factors.  We can measure someone’s short-term memory capability quite accurately, for example, but it’s much less clear how important short-term memory is for success at work, or success in particular kinds of work, or how it operates outside the calm, one-on-one conditions of a laboratory experiment.  It may well be that short-term memory ability really does distinguish the best from the rest, but so far nobody has proved it.

There is also the question of whether the tests measure what the researchers think they measure.  It’s a fascinating idea that we can peer into someone’s brain and see what they are thinking – including which celebrities they obsess over – but the reality of brain imaging is a little more complex.

Take emotional states.  Recent meta-analyses have found no reliable evidence that the brains of people experiencing an emotion all react in the same parts, or in the same way.  Brain scans of people experiencing fear, for instance, show different patterns and intensities of electrical activity.  Back in 1996 Daniel Goleman coined the catchy term Amygdala Hijack to describe an overwhelming fight-or-flight response, but more recent research has suggested that he got it wrong.

Only a quarter of studies since 2009 showed an increase in amygdala activity during fear, and many studies showed amygdala activity increasing during non-emotional thoughts and experiences.  Even more significantly, individuals whose amygdalae have been destroyed can often still experience full emotional lives.  The seductive idea that we can measure electrical activity in the amygdala and thereby discover the intensity of someone’s terror is just not true.  People may experience the fight-or-flight response, but that experience does not happen only in their amygdalae – and perhaps does not happen there at all.

These variations in the experimental data exist because the brain is extraordinarily adaptive.  Different parts of our cerebral cortex can take on different functions well into adulthood.  The reality – so far as we currently understand it – is that the brain is a bunch of multi-purpose networks that come together in a variety of ways to make our minds and bodies work.  Mapping those networks and pathways is work still to be done, and perhaps needs more advanced imaging technology to be feasible.

It’s not just brain imagery that has often been misinterpreted, but all kinds of other neuroscience data derived from physical responses.  We may kid ourselves that we can recognize lying and emotions by micro-analyzing facial expressions, but the evidence just isn’t there.  Big-data comparisons of facial analysis research studies find no consistent emotional facial expressions – different people experiencing the same emotion will show a range of expressions (and a range of other physical symptoms, such as heart rate).

Given the uncertainties in the research on which these games and tests are based, and the lack of solid evidence linking their results to successful performance in specific jobs, their makers cannot demonstrate that they get around candidate self-ignorance or lying, or that they are less subject to bias than other recruiting methods.  You may believe they are better, but there is simply no evidence either way to justify that belief.

What really works

When it comes to something as important as hiring, we really should trust the evidence and not our prejudices.  There have been decades of research into predicting work performance, and the right combination of assessments can get you correlations with future performance of over 0.7 – an outstandingly high predictive value which some selection experts believe can be increased to over 0.8 for certain roles when supplemented with the right mix of focused interviewing and other techniques.
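To see why a combination can beat any single assessment, here’s a stylized simulation in Python.  The data is synthetic and the weights are invented – treat it as an illustration of the statistics, not as research evidence:

```python
# Synthetic illustration: three moderately predictive factors,
# combined by least squares, correlate with performance more
# strongly than any one of them alone. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cognitive = rng.normal(size=n)
competencies = rng.normal(size=n)
culture_fit = rng.normal(size=n)
noise = rng.normal(size=n)

# "True" performance: a weighted mix of the three factors plus noise.
performance = 0.5 * cognitive + 0.4 * competencies + 0.3 * culture_fit + 0.7 * noise

for name, x in [("cognitive", cognitive),
                ("competencies", competencies),
                ("culture fit", culture_fit)]:
    r = np.corrcoef(x, performance)[0, 1]
    print(f"{name} alone: r = {r:.2f}")

# Least-squares blend of all three predictors (intercept included).
X = np.column_stack([np.ones(n), cognitive, competencies, culture_fit])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
R = np.corrcoef(X @ coef, performance)[0, 1]
print(f"all three combined: R = {R:.2f}")  # roughly 0.7 with these made-up weights
```

Each factor on its own has only modest predictive power; blended together, they capture far more of what drives performance – the same logic that lets a well-chosen battery of assessments reach the validity levels cited above.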

Of course, assessments don’t sound quite as cool as neuroscience games (which really do compete in the market with the likes of Candy Crush – never mind that earlier claims about developing brainpower or protecting against Alzheimer’s have been proved false).  But would you rather have a fun recruiting approach, or get genuinely business-transforming results?


I know which I’d want used when it came to my career.

Wednesday, September 2, 2015

How HBR got assessments wrong

Was it the 90° heat?  Had everyone left for vacation?  Did a disgruntled sub-editor feel like sticking it to the readers?  I have racked my brains but still cannot come up with a good reason why HBR published Peter Bregman’s error-filled piece about assessments on August 19.

Bregman claims that assessments are basically a bag of inconsistent tricks that add nothing to talent management. Here are five big errors in his argument:


Error 1: All assessments are essentially the same

Bregman begins with a cute anecdote about taking the Myers-Briggs Type Indicator (MBTI) test with his college girlfriend.  He didn’t enjoy the experience and points out – quite rightly – that MBTI is neither a valid nor a reliable predictor of work performance.  But then he says all other personality assessments are just as bad.

Sorry, that’s just not true.

Just as there are good tools and bad tools, there are good assessments and bad assessments.  Good assessments predict future performance accurately and consistently.  Bad assessments don’t.  

How do you tell the good from the bad?  Look at three things: does the assessment focus on something that has been shown to be relatively stable?  Has this factor been proven by extensive research to predict work performance?  And is the assessment itself a valid and reliable measure of that factor?

Let’s be clear, a bad assessment may have other uses.  You wouldn’t use an axe to carve a diamond, any more than you’d use a jeweler’s saw to cut down a tree.  Many people find MBTI a helpful start in increasing their self-knowledge, or a fun part of social interaction (it sure beats “What’s your decorating style?” as a way to think and talk about your co-workers).  But as an assessment tool to help people decide how they could perform at work, it’s a non-starter.

Want to see what good predictive assessment looks like?  Click here.

Error 2: Self-assessments just reinforce self-image

Bregman’s second mistake is his claim that any assessment completed by an individual about him/herself is inherently biased. 

First of all, it depends on the type of assessment.  With cognitive ability tests, for instance – cognitive ability is a very strong predictor of work performance – it’s obvious that an individual’s responses are not determined by self-image, and that self-completion of an assessment is the only sensible way to get a valid result.   And there are some assessments – work culture assessments, for example, or profiles of work interests – where the assessment is looking at an individual’s internal preferences and comparing them to the job.  The best and most direct way to understand someone’s preferences is to ask specific, work-targeted questions and compare those preferences to what actually goes on in the job/organization. 

There’s a difference, though, between what you like and what you are like.  This is where Bregman’s arguments fall down.  Good personality assessments – assessments of internal characteristics rather than preferences or mental processing power – are designed to get behind self-image and reveal the truth about how individuals think, behave and go about their work. 

These good personality assessments get at that truth in various ways.  The most important is the construct validity of the assessment – how well its results correlate with external measures of the same thing.  Questions are carefully designed and tested to ensure their results are consistent with other questions already proved to accurately measure the specific personality factor.  And good assessments almost all contain cheat-catchers, to unmask users who are trying to create a specific set of results rather than answering the questions honestly – for obvious reasons, I’m not going to go into how those work.
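At its simplest, that construct-validity check is just a correlation: scores on a candidate question are compared with an established measure of the same factor, and the question survives only if the two track each other closely.  A bare-bones sketch – all the scores below are invented:

```python
# Minimal construct-validity check: correlate responses to a new
# question with scores on an established scale for the same factor.
# The numbers are invented for illustration.
import statistics

new_question = [3, 4, 2, 5, 4, 1, 3, 5, 2, 4]                     # candidate item
established = [3.2, 4.1, 2.5, 4.8, 3.9, 1.4, 2.8, 4.9, 2.2, 4.3]  # validated scale

r = statistics.correlation(new_question, established)  # Pearson's r (Python 3.10+)
print(f"r = {r:.2f}")  # a high r supports keeping the question; a low r rejects it
```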

Constructing a good personality assessment is therefore complex, and relies on years – sometimes decades – of research to make sure it is a reliable test and not simply a reflection of an individual’s self-image.  If further proof were needed, just take one of these good personality assessments yourself.  Most of us find some uncomfortable truths as well as confirmation of what we already knew.  And, even more importantly, the research shows that these assessments predict performance much more accurately than any other method.

Error 3: People can’t be reduced to a test result

Bregman rejects assessments on the grounds that people are too interesting, too complex and too constantly in flux to be summed up by a single assessment.  The John Doe I worked next to this morning is not the same as the John Doe I talk to over lunch, and his infinite variety makes any attempt to measure his contribution at work an impossible task.

It’s hard to know where to start with this one, since Bregman quite clearly does not mean what he says – later on in the article he gives a vivid account of a group exercise he led where team members simplify their human complexity down to just five factors.  It seems his problem is not with reducing complexity per se, but with reducing complexity via an assessment. 

It’s worth pointing out that no assessment – no good assessment – would ever claim to reveal the whole truth about an individual.  But certain assessments and, even more powerfully, certain combinations of assessments, have been shown to predict performance at work with very, very high degrees of reliability, as Harvard Business Review above all knows.  This may not be the whole truth about a person, but it is an important, relevant and very reliable truth.

In practice, too, no assessment is ever used as a stand-alone verdict in the way Bregman suggests.  Assessments are used in combination with other evidence, such as interviews or college transcripts or recommendations from previous employers.  Which leads us on to Bregman’s next error, that assessments close down inter-personal curiosity.

Error 4: Assessments choke curiosity

“As soon as we label something,” Bregman writes, “our curiosity about that thing diminishes.  Personality assessments are a shortcut to getting to: I know. And once we know something, we’re no longer curious.”

I’m not sure I’d agree.  And that’s not how I’ve seen assessments work in practice.  If anything, the result of good assessments is not dead-end labeling but more questions, both from the individual who is being assessed and from the potential employer looking at the assessment. 

The great thing about these follow-up questions is that they are targeted on the areas most likely to impact someone’s performance in the job.  An employer might think, Hmm, this person has really high potential competency for Leadership.  Let’s talk to them more about that at interview.  An individual might wonder, I thought I’d come out higher in terms of Networking, but when I look back on my experiences, maybe I haven’t been doing all I can to develop broad business connections – I guess I need to think about working on it more.

Of course, any assessment can be misused, but assessment makers take great pains to stop this from happening.  Good assessments (like good science papers) talk about probabilities, about potential, about the real-life implications of what the assessment reveals.  A typical output of the assessments my company provides to employers is an interview guide personalized to each candidate, based on the assessment results.  We developed this because we found that employers wanted help in asking the right questions to build on the information they received in the assessments.  Not only did they find the assessments give them rich and deep information about each candidate, they found that this information made them want to know more. 

Truly, it doesn’t make sense to believe that knowledge diminishes curiosity.  Just think about the most knowledgeable people you know:  are they more or less curious than the average bear?

Error 5: People know better than assessments

Bregman would probably approve of my last remark, as the final point he makes in his piece is that people-to-people interactions and reflections are far more accurate, more comprehensive and more useful than any assessment.

Sadly, the evidence often proves the opposite. 

Few of us consider ourselves biased, but too often we see what we expect to see when we look at a coworker, or at a potential hire.  We favor tall people, blondes, men without facial hair and women who wear makeup.  None of these factors has anything to do with how well people perform at work, yet each has an uncomfortably significant impact on how likely someone is to get hired, or to be well paid.

Malcolm Gladwell’s example of the impact blind auditioning had on orchestras is too well known to need repeating here, but it is worth checking out an update on this issue he posted after Blink came out.

I don’t want to sound like Gil Grissom from CSI, but we need to look for the evidence.  Hiring and assessment are too important to trust to our proven-unreliable unstructured impressions.

HBR, how could you?

I guess I’m still as confused as when I started out about why a serious publication like HBR published this article.  At base, I think Bregman’s heart, if not his head, is in the right place.  He sounds like an excellent facilitator and a powerful motivational speaker.

But he confuses the how of people development with the what.  It’s great to get coworkers to come together in a safe and bounded environment to reflect on how they work.  But the focus has to be on the real drivers of work output – the personality factors, ways of working and thinking, cognitive abilities, and the culture and motivational match that have been shown to predict high results far more reliably and powerfully than anything else.  If, instead, people work on developing the characteristics they believe are important, they are likely at best to be wasting their time and at worst to be unwittingly sabotaging the business.

HBR has made its name by highlighting serious research and going deep into some of the most important issues impacting the world of work.  Why on earth did it publish this piece?