Thomas Otter: HR systems must lose flight risk indicators
Dr Thomas Otter is a leading advisor for emerging HR tech vendors and their investors, guiding them to build better products and be more successful. Read his column every month at UNLEASH.
Why You Should Care
Dr. Thomas Otter claims individual flight risk indicators are at best nonsense, at worst discriminatory.
He argues most of the algorithms that vendors use to do flight risk projection haven’t got any basis in science.
Flight risk indicators should be clamped down on by ethics committees and regulators.
If there were one data item I could expunge from HR systems, it would be the individual flight risk indicator.
You have all seen the demo, from almost every vendor: the nifty icon next to the employee highlighting whether they are a flight risk or not. Sometimes it is a traffic light, sometimes a score measured to two decimal places. The presales consultant will probably wax lyrical that this is powered by AI.
At best it is nonsense, at worst it is discriminatory. Why we use the same term that courts use when granting or denying bail is beyond me.
Why flight risk indicators must go:
- The idea that by looking at the data lurking in the HR system you can accurately predict the likelihood that someone will leave the company is roughly as sound as using their star sign to do so. Scorpios are more likely to switch companies, whereas Leos tend to stay for the long haul.
- Much of the data in the HR system isn’t actually up-to-date anyway.
- Most of the algorithms that vendors use to do flight risk projection haven’t got any basis in science. See point 1.
- Risk compared with what control group? All employees, employees in a similar role, similar grade, similar age, same city, country, etc?
- Flight from what? The job, the manager, the department, or the company?
- Most of the algorithms aren’t really doing any ML/AI either. Not that ML/AI really helps here.
- Survey data is likely to be more helpful in predicting flight risk trends, but only if employees actually respond, and respond honestly. In some organizations the culture is such that employees don’t feel the psychological safety to answer surveys truthfully.
- Deep learning sounds promising until it hits on age, gender, religion, health, and race as potentially useful predictors. You are likely to fall foul of all sorts of labor discrimination legislation if you make promotion or comp decisions based on AI/ML that profiles people using this sensitive data, and the rules vary by country. If the vendor can’t actually explain the algorithm, you have other risks than flight risk to worry about.
- The data outside the HR system about the person is far more useful to predict flight risk. Is the employee’s family rich? Have their kids left university? Did they sell bitcoin at the right time? Has their spouse won the lottery? Have they moved cities in the past? Do they have aging parents nearby? Do they have season tickets to the local football team? Are they in a long-distance relationship? Do they have a chronically ill child? Do they dislike the body odor of the colleague they share the hot desk with? Do you really want to be collecting and analyzing that sort of data?
- I thought I was happy in my job, but once my line manager told me I was a flight risk I realized that I had been underpaid for the last 5 years, so I pinged the recruiter that I had been ignoring.
- Manager A saw in the system that John was a high flight risk, so he increased his bonus, at the expense of Mary’s, who has two kids at college and really needs the job for healthcare.
- Manager B saw in the system that John was a high flight risk, assumed that he was going to leave anyway, so he decided to save the RSU grant for the replacement hire.
- When a vendor says the tool can predict 61% of employee departures, it means the tool is wrong on roughly two out of every five calls. That is only a little better than a random guess at 50% (see the sketch after this list).
- Most solutions won’t have considered GDPR in their design.
- Most solutions won’t have considered labor law in their design, especially non-US labor law.
- Just curious, did the individual flight risk scores change when you announced your post-Covid-19 back-to-the-office policy?
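To put that 61% figure in perspective, here is a minimal back-of-envelope sketch in Python. The 61% hit rate, the 50% coin-flip baseline, and the population of 1,000 employees are illustrative assumptions, not real data.

```python
import random

# Back-of-envelope comparison of a claimed 61% hit rate against a coin flip,
# on a hypothetical population of 1,000 employees. Illustrative numbers only.
POPULATION = 1_000
VENDOR_HIT_RATE = 0.61   # the vendor's claimed prediction rate
COIN_FLIP_RATE = 0.50    # guessing at random

vendor_wrong = round(POPULATION * (1 - VENDOR_HIT_RATE))
coin_wrong = round(POPULATION * (1 - COIN_FLIP_RATE))

print(f"Vendor tool: wrong on ~{vendor_wrong} of {POPULATION} calls")
print(f"Coin flip:   wrong on ~{coin_wrong} of {POPULATION} calls")

# Sanity check: simulate the coin flip to confirm the baseline.
random.seed(42)
coin_right = sum(random.random() < COIN_FLIP_RATE for _ in range(POPULATION))
print(f"Simulated coin flip got {coin_right} of {POPULATION} right")
```

Roughly 390 wrong calls instead of roughly 500: not the sort of gap you would want to base pay or promotion decisions on.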
Vendors:
If you have an ethics committee, how on earth did it agree to let you build this?
If you don’t have an ethics committee, get one.
Instead of trying to develop carbolic smoke balls that attempt to predict and prophesy individual behavior, the ML and analytics teams at software vendors should spend their time looking at performance in the aggregate.
Which departments are likely to have a higher flight risk, based on survey data, aggregate demographic data, external job demand, unplanned absence, and so on?
Use analytics to assess and improve manager and team performance, rather than trying to second-guess individual choices. There is so much opportunity to apply analytics and ML technologies in HR.
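As a hypothetical illustration of what that aggregate view could look like, here is a short pandas sketch that rolls individual records up to department level. The column names (department, engagement_score, unplanned_absence_days, left_last_12m) and the values are invented for the example; a real analysis would start from whatever survey and absence data the organization actually holds.

```python
import pandas as pd

# Hypothetical department-level view of attrition-related signals.
# Column names and values are invented for illustration only.
records = pd.DataFrame({
    "department": ["Sales", "Sales", "Engineering", "Engineering", "Support", "Support"],
    "engagement_score": [3.1, 2.8, 4.2, 4.0, 3.5, 2.9],  # e.g. from an anonymous survey
    "unplanned_absence_days": [6, 9, 2, 3, 5, 8],
    "left_last_12m": [1, 1, 0, 0, 0, 1],                 # historical departures, not predictions
})

# Aggregate to department level: average signals and the observed attrition rate.
by_department = (
    records.groupby("department")
    .agg(
        avg_engagement=("engagement_score", "mean"),
        avg_unplanned_absence=("unplanned_absence_days", "mean"),
        attrition_rate=("left_last_12m", "mean"),
    )
    .sort_values("attrition_rate", ascending=False)
)

print(by_department)
```

Nothing in that view names or scores an individual; it simply points to where the organization should go and ask better questions.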
Remember the law of large numbers. Remember that correlation isn’t causality. Remember the law of unintended consequences. Be a bit more modest in what you reckon ML can really do.
End users:
Beware of any tool that promises to predict individual behaviour.
Calculate your own personal flight risk score using the vendor’s algorithm: is it accurate? Ask the vendor to explain the science behind their flight risk predictions.
Regulators:
Clamp down on this nonsense.