How do we know that work is being performed to set standards? If there is a difference, how do we measure it?

This begins an important conversation about drift, or, more scientifically, normalized deviation. At a high level, drift is nothing more than performing a task either below or above a set standard. In practical terms, drift is performing below standards. But there are times when performing above a set standard can be a problem as well.

Oftentimes, drift is written off as complacency. However, there is generally more going on. (As a quick aside, complacency is not a legitimate root cause – more on that later.) When we learn a new task, our view of risk is in line with the actual level of risk involved. Whether we know it or not, by using a procedure (or rule base) to complete a task, we are accounting for the actual risk in the task – assuming the procedure is written correctly. After completing the task many times, we tend to begin our drift.

When the same task or step is completed successfully many times, we tend to start cutting corners – not out of malicious intent, but because we can be more efficient. By cutting corners, we have reintroduced into the task the risk that the procedure was designed to remove. We now have a flawed barrier – a piece of Swiss cheese with a hole in it.

Most people do this with a task performed every day – driving a car. During the learning process, rules are followed to a T. Turn signals are used, speed is kept within limits, and complete stops are made at red lights and stop signs. Within a short amount of time, all three of these rules are bent or broken. Turn signals are only used “when required,” speed limits are viewed as minimums, and rolling stops become the norm. We have reintroduced into this task the risk the rules are meant to manage.

But remember, this reintroduction of risk is unintentional. If we saw the risk, we wouldn’t cut corners. If we knew that a car was in our blind spot, we would use the tools at our disposal to prevent a collision. But since our perception of risk has become lower than it actually is, we inadvertently create an error-likely situation. Eventually, unless measures are in place to rein drift back in, our mental model and perception of risk will result in an event.

So how do we rein in the expansion of drift?

First, we need robust processes for our critical tasks. This isn’t to say that we need a checklist for everything, but we do need processes in place to ensure our employees are made aware of when they need to either focus more or take a step back. Tools to assist here will be introduced later.

Secondly, we need oversight from front-line supervisors and managers. You can’t coach from your office. Be visible where your employees are located. When deviations from standards are seen, use them as coaching moments. Stay positive and suppress the urge to penalize, especially for small things. Obviously, safety or major reliability concerns need to be evaluated differently.

And remember, Human Performance isn’t a big stick; rather, it’s a tool used to make our organizations safer and more reliable.

For more information on barriers or to see how we can help, please contact us.

Mental Models

Recently, I went on a double date to an Escape Room-type establishment.  We had a good time, but ended up not escaping.  The last thing we had to do was to disarm a bomb by entering a code.  There were no constraints other than that.  We had the numbers, we tried every combination of those numbers, but none of them worked.

After our debrief, I conducted an after action review of what happened (because that’s just the kind of thing I like to do), and it got me thinking about my mental models of the task and my own situational awareness.

I had two sets of four numbers – I was arranging them, and someone else was entering them into the number pad.  Each set of four numbers was arranged based on a method I probably shouldn’t discuss, but let’s just say that it makes sense, and both sets were arranged via the same method.  Two other people were watching the person entering the codes (a peer check), but no one was watching me arrange the code.

These sets of numbers fit nicely into my mental model that codes are sets of three or four numbers.  Every combination that we entered up to that point fit exactly this mental model.  Not to mention that every real-life combination lock or keypad I have ever used fit this model as well.  They led us down this path, and we developed a confirmation bias.  No one told us that the codes were all like this.  But it was the demonstrated model, and we fell right into the trap.

Needless to say, the final code was eight digits.
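The scale of that wrong assumption is easy to quantify. As a rough illustration (a hypothetical Python sketch with made-up digits, not the actual puzzle), exhaustively trying every ordering of four known digits is trivial, but the eight-digit space we never considered is orders of magnitude larger:

```python
from itertools import permutations

def orderings(digits: str) -> set[str]:
    """Return every distinct code formed by reordering the given digits."""
    return {"".join(p) for p in permutations(digits)}

# Four distinct digits: a small, exhaustively searchable space (4! = 24).
print(len(orderings("2581")))      # 24 candidate codes

# Eight distinct digits (both sets combined): 8! = 40,320 candidates --
# far more than anyone could try in the final minute or two.
print(len(orderings("25814937")))  # 40320 candidate codes
```

The point isn’t the arithmetic itself; it’s that the “codes are four digits” mental model quietly shrank our perceived search space by a factor of more than a thousand.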

Luckily for me, this was just a game.  However, for employees in the field or workers back in the office, what are the possible outcomes of falling into the trap of a false mental model – latent errors, an inadvertent operation, or an injury?  The consequences of a wrong mental model were minimal for us.  For others, they may be substantially higher.  Bad mental models have a way of luring people into underestimating their actual level of risk.

What Human Performance tools might have helped us?  For one, having someone peer check me could have been more beneficial than watching someone enter numbers into a keypad.  Since we had four people, we could have divided up any way other than the way we did and possibly seen improvement.  I could have taken a step back and tried to re-evaluate my model, but we only had a minute or two, so I succumbed to time pressure (remember error precursors).  If I cannot put myself in a place to correctly use my Human Performance toolbox in a situation like this, how can we expect our employees to use theirs – especially when their mental model is wrong?

The trick is to build checks and balances into our work so we can routinely evaluate the situation.  Stay tuned for more on this topic.

For more information on mental models or to see how we can help, please contact us.