Is there an algorithm to predict the likelihood of an individual sustaining an injury? More than one. Software on the market today can predict specifically who is at greatest risk. (Note: This is a different form of predictive analytics from software that analyzes accident trends, observations and near misses to identify where the next accident is likely to occur.) “Targeting analytics” produces risk scores assigned to any number of workers. Computations are based on analyzing thousands upon thousands of incidents, workers’ comp claims, exposure data, near misses, safety observations and audit findings, among other data subsets, and then correlating them to individual histories. And the calculations are longitudinal, with data studied over a period of years to find risk factors.
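To make the idea concrete, here is a minimal sketch of how such a longitudinal risk score might be computed. It is illustrative only; the column names, toy data and simple logistic regression are assumptions, not any vendor’s actual method.

```python
# A minimal, hypothetical sketch of longitudinal risk scoring.
# Column names, features and model choice are illustrative
# assumptions; no vendor's actual method is shown here.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per worker per year: history of absences, write-ups,
# overtime and so on, correlated to an injury outcome.
history = pd.DataFrame({
    "absences":       [2, 11, 0, 7, 5, 14],
    "late_clock_ins": [1,  9, 0, 4, 2, 12],
    "ppe_writeups":   [0,  2, 0, 1, 0,  3],
    "overtime_hours": [40, 310, 10, 150, 90, 400],
    "had_injury":     [0,  1, 0, 0, 0,  1],   # outcome to learn from
})

features = ["absences", "late_clock_ins", "ppe_writeups", "overtime_hours"]
model = LogisticRegression().fit(history[features], history["had_injury"])

# The predicted probability of injury becomes the "risk score."
history["risk_score"] = model.predict_proba(history[features])[:, 1]
print(history[["risk_score"]])
```

A real system would ingest vastly more data and features, but the shape is the same: historical records in, a probability-style risk score out.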
Risk clues
What clues of risk are these targeting algorithms identifying? The variables are many, depending on the software employed: How often is the worker absent? Or clocking in late? Has he or she been written up for not wearing fall protection equipment? How many traffic violations has the worker accumulated? Has the worker witnessed trauma? Worked excessive overtime? Worked with chemicals? How many complaints about the worker have been filed? How many times has the worker been disciplined? Is the worker regularly exposed to heat stress conditions? Is the worker about to take a vacation?
Algorithms can scan far and wide. Is the worker compliant with rules and regulations? How about organizational skills – good or bad? Spatial understanding – good or bad? Do the worker’s personality and behaviors mesh with the company culture?
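What does the algorithm actually “see”? Roughly, each worker reduced to a row of numbers. A hypothetical encoding of the kinds of variables just described:

```python
# Hypothetical encoding of the variables described above into a
# numeric feature vector a model can consume. Field names are
# invented for illustration.
from dataclasses import dataclass, asdict

@dataclass
class WorkerFeatures:
    absences_per_year: int
    late_clock_ins: int
    fall_protection_writeups: int
    traffic_violations: int
    witnessed_trauma: bool
    overtime_hours: float
    chemical_exposure: bool
    complaints_filed: int
    disciplinary_actions: int
    heat_stress_exposure: bool
    vacation_within_two_weeks: bool

worker = WorkerFeatures(
    absences_per_year=7, late_clock_ins=4, fall_protection_writeups=1,
    traffic_violations=2, witnessed_trauma=False, overtime_hours=150.0,
    chemical_exposure=True, complaints_filed=0, disciplinary_actions=1,
    heat_stress_exposure=True, vacation_within_two_weeks=False,
)

# Booleans become 0/1; the result is one row of the model's input matrix.
vector = [float(v) for v in asdict(worker).values()]
print(vector)
```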
Also today, artificial intelligence and machine learning can analyze thousands of verbal or video interviews to identify who is risk-tolerant and who is risk-averse. Who is lazy and who is a self-starter. Who uses active, positive verbs like “can do” and who uses negative words like “can’t” and “have to.” Who disses an ex-boss or co-workers with betraying microexpressions.
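The language analysis can be as blunt as counting phrases. A toy sketch, with a made-up word list rather than any validated lexicon:

```python
# A toy sketch of the language analysis described above: tallying
# "can do" versus "can't"/"have to" phrasing in a transcript.
# The word lists are illustrative assumptions, not a real lexicon.
import re

POSITIVE = {"can do", "will", "let's"}
NEGATIVE = {"can't", "have to", "impossible"}

def tone_score(transcript: str) -> int:
    text = transcript.lower()
    pos = sum(len(re.findall(re.escape(p), text)) for p in POSITIVE)
    neg = sum(len(re.findall(re.escape(n), text)) for n in NEGATIVE)
    return pos - neg  # higher = more "self-starter" language, per the claim

print(tone_score("We can do this. I just have to finish, but I can't today."))
# -> -1: more negative than positive phrasing
```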
Public digital footprints
A person’s public digital footprint can be read like an open book by analyzing social media data. What does a LinkedIn profile say about a person’s relationships, their word choices, how they describe themselves and their past experiences? What about their Facebook and Twitter posts? Forget the photos of spring break debauchery back in college; algorithms have ruled them out as being predictive of anything. But have posts talked about drugs, race, medications, suicidal thoughts or threats of self-harm?
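If screening like this is done at all, the defensible version flags content for human review rather than rendering verdicts. A hedged sketch, with an invented watch list:

```python
# A hedged sketch of the social-media screening described above:
# flag posts containing watch-list terms for *human* review.
# The term list is an assumption for illustration only.
WATCH_TERMS = ("self-harm", "suicidal")  # hypothetical watch list

def posts_needing_review(posts: list[str]) -> list[str]:
    """Return posts a human reviewer should look at; no automated verdicts."""
    return [p for p in posts if any(t in p.lower() for t in WATCH_TERMS)]

sample = ["Great shift today", "Feeling suicidal lately"]
print(posts_needing_review(sample))  # ['Feeling suicidal lately']
```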
This is all a bit unnerving. A machine determines that 45 employees at a worksite have risk scores high enough to place them in the at-risk category. Interventions follow (maybe). A high scorer might be transferred to a less dangerous job for his or her own good. Maybe sent back for more training. Or assigned a mentor or coach. Or followed more closely day-to-day by a supervisor. Maybe brought in for counseling, or advised to seek out an employee assistance program. One way or another, those machine-identified at-risk workers are under a microscope.
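Under the hood, the “at-risk category” is typically just a cutoff applied to the scores. A sketch of how scores might be triaged into the interventions above; the threshold and tiers are invented for illustration:

```python
# Illustrative only: a cutoff turns risk scores into an "at-risk"
# list and suggested interventions. The threshold and tiers are
# assumptions; real programs set these with human judgment.
def triage(scores: dict[str, float], cutoff: float = 0.7) -> dict[str, str]:
    actions = {}
    for worker, score in scores.items():
        if score < cutoff:
            continue  # below cutoff: no flag
        if score >= 0.9:
            actions[worker] = "transfer / counseling referral"
        elif score >= 0.8:
            actions[worker] = "retraining + mentor"
        else:
            actions[worker] = "closer supervision"
    return actions

print(triage({"W-101": 0.95, "W-102": 0.72, "W-103": 0.40}))
# {'W-101': 'transfer / counseling referral', 'W-102': 'closer supervision'}
```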
The upside
To be sure, these calculations can and will save lives. They have proven effective at identifying those at risk of suicide, and at helping HR identify the best candidate matches for a new hire. But there is also the rest of the story: too much focus and emphasis placed on behaviors. Behavior-based safety learned to balance behaviors with culture and environment years ago. OK, certain behaviors on the surface are leading causes of accidents – overexerting oneself; taking shortcuts; being distracted; rushing; not looking where you’re going; not wearing PPE; breaking rules; keeping a messy, untidy work area that can lead to slips, trips and falls.
But what factors often underlie these behaviors? Ask the “Five Whys.” It’s a long list: organizational failings, system and design traps, cultural pressures and norms, poor job planning, poor job design, sub-standard equipment, manpower levels, forced overtime, poor leadership, lack of EHS investment, process instability, unexpected changes, non-routine work, peer pressure, and pressure from the boss to produce, to get product out the door, to meet quotas.
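The “Five Whys” can even be written down as data. A made-up chain tracing one shortcut back to a systemic cause:

```python
# The "Five Whys" as data: a toy chain from a surface behavior down
# to a systemic root cause. The chain itself is an invented example.
why_chain = [
    ("Why did the worker take a shortcut?", "The job was behind schedule."),
    ("Why was the job behind schedule?", "Planning underestimated the work."),
    ("Why did planning underestimate it?", "No one consulted the crew."),
    ("Why was the crew not consulted?", "Job-design reviews skip field input."),
    ("Why do reviews skip field input?", "No system requires it."),  # root cause
]
for question, answer in why_chain:
    print(f"{question} -> {answer}")
```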
Algorithms that can predict work injuries leave interventions up to humans. Often the actions taken will not be costly upstream controls and system defenses. It’s easier and cheaper to blame the behaviors, label an individual as at-risk, and find fault with the worker who has the high risk score. Algorithms are not coded to take into account Dr. James Reason’s assertion that the best way to prevent accidents is to take a system approach: look at the error traps in the workplace that lead to accidents, rather than take a personal approach and focus on human fallibility, which is inevitable.
Algorithms also don’t take into account Dr. Dan Petersen’s belief that human error occurs not because humans are stupid or clumsy (though of course some of us are), but because management systems, designs and decisions have trapped workers into error. That reasoning is left for humans to digest.
Deeper causes
If a worker overexerts because of a heavier workload, rushes and slips and falls because the boss wants the job done ASAP, or takes a shortcut because the culture condones shortcuts as “the way things are done around here,” that’s not a reflection of a risk score based on days absent, traffic violations or a commute longer than an hour. The root causes go deeper than that.
Algorithms, like humans, are not foolproof. As they become more and more a part of everyday life, their conclusions should be questioned. Why is that person taking shortcuts? Peer pressure? Why are they absent or tardy more often? Is there a sick spouse at home? Why did their microexpression betray a flash of contempt for a former boss? Maybe the boss was incompetent. Maybe those positive keywords in the LinkedIn profile were snatched from a “LinkedIn for Dummies” guide.
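Asking why can start with the model itself. For a simple linear risk model, you can rank which features pushed a score up, then interrogate each one; the names and numbers here are invented:

```python
# A sketch of the "ask why" step: for a linear risk model, rank which
# features pushed one worker's score up. Names and numbers are invented.
coefficients = {"absences": 0.08, "overtime_hours": 0.002, "ppe_writeups": 0.5}
worker = {"absences": 11, "overtime_hours": 310, "ppe_writeups": 2}

contributions = {f: coefficients[f] * worker[f] for f in coefficients}
for feature, weight in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {weight:+.2f}")
# The ranked output is a prompt for human questions (is the overtime
# forced? is there a sick spouse behind the absences?), not a verdict.
```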
Algorithms aren’t everything. There is still a need for human judgment and action. Consider context and keep the bigger picture in mind. If an individual’s risk score is high, ask why. Don’t assume predictions are infallible. A useful tool, sure; the “magic bullet,” no.