In the past two issues (March and April), I covered the first five steps in the PuMP process: 1) Understanding Measurement Results; 2) Mapping Measurable Results; 3) Designing Meaningful Measures; 4) Building Buy-in to Measures; and 5) Implementing Measures. This month, I present the final three steps in the PuMP process.1
Step 6: Reporting Performance Measures
It’s fascinating how safety professionals will use every graphical presentation Microsoft® Excel has to offer. Yet these charts usually tell you absolutely nothing and, worse yet, not an executive in the room can make a decision based on the safety data presented. Meanwhile, management asks why its safety professionals are not in the field doing safety work. Sound familiar?
Stacey Barr notes that almost all performance reports and dashboards “suck” and here are the reasons why.2
• Most are thrown together in an ad hoc way, with little to no thought toward the structure, resulting in difficulty in navigating the information.
• Most are cluttered and cumbersome with entirely too much detail.
• Most display information in indigestible tables, silly graphs, and dials and gauges that aim to entertain but result in dangerous misinterpretations.
• Most lay out information in a messy and unprofessional fashion, forcing readers to abandon the message of the data.
• Most are devoted to sourcing, cleaning and summarizing data — rather than analyzing and presenting insights.
To overcome this pain and agony, Barr offers elements for “designing reports and dashboards that are compelling, easy to navigate, and profoundly useful in decision-making to improve organizational performance and strategy execution.”3 Barr endorses (and I wholeheartedly agree with) the expert advice of Stephen Few, whose website (www.perceptualedge.com) and collection of books provide excellent insights into presenting data.4
Barr’s design elements5 include:
1. Structure to Strategy – Design the performance report and dashboard to align with the organization’s strategy, using headings for each key result area.
2. Answer what, why, and now what? – In other words, what is our performance actually doing compared with what we expect it to do; why is it doing that; and now what are we going to do about it?
3. Use graphs that signal – Each graph should focus on only one performance measure. Let the graph tell a story about that measure.
4. Design to engage – Use layouts that facilitate fast and easy navigation and use formatting to make interpretation fast and easy.
5. Automate, Automate, Automate – Every effort should be made to automate gathering, analyzing and collating the data. The more you can automate the system, the more time you can spend on analyzing the data for trends and signals that will drive action.
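To make the automation point concrete, here is a minimal sketch in Python (my own illustration, not part of PuMP) that collates monthly measure values from a hypothetical CSV file named measures.csv; the file name and column names are assumptions made for the example.

```python
import csv

# A minimal sketch of report automation. It assumes a hypothetical
# file "measures.csv" with columns: month, measure_name, value.
# Hand-collating these numbers each month is where most reporting
# time goes; a script frees that time for interpretation.
by_measure = {}
with open("measures.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_measure.setdefault(row["measure_name"], []).append(float(row["value"]))

# One summary line per measure, ready to drop into the report.
for name, values in by_measure.items():
    latest, average = values[-1], sum(values) / len(values)
    print(f"{name}: latest = {latest:.2f}, average of all months = {average:.2f}")
```

Even a small script like this eliminates the monthly copy-and-paste routine, leaving more time for interpreting what the numbers mean.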
Step 7: Interpreting Signals from Measures
Barr says we need to stop reacting to data noise. Here are some of the most common flawed methods of looking for signals in performance measures:6
• Comparing this month’s measure value to the same month last year, or to a budget, a target or a year-to-date value.
• Comparing the 12-month pattern of measure values for the current year to the 12-month patterns of previous years, generally depicted in stacked line or bar charts.
• Comparing the slope of a linear trend line through measure values with the horizontal (equivalent to no change) to draw a conclusion about the overall direction of change.
• Comparing a moving or rolling average line (that evens out the seasonal variation in measure values) with the horizontal to reveal the underlying direction of change.
Each of these performance analyses produces mixed signals because these methods are based on the following assumptions: 1) In “same month” comparisons, last year was normal; 2) If Excel can calculate a trend line for a set of data, then there must be a trend; 3) The world starts anew on the 1st of January every year; 4) The only probable cause for a difference is that something changed; and 5) All changes happen smoothly – at least for the data we choose to put moving averages through.7
Barr notes the common thread among these five assumptions is a failure to sort out variation. Routine variation, or random noise, is the ordinary up-and-down movement from month to month. Abnormal variation is change that occurs outside of that routine variation. To separate the two, we need to employ statistical techniques grounded in statistical thinking.
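A quick worked example shows why that statistical thinking matters. The short Python sketch below (my own illustration, using only the standard library) fits an Excel-style least-squares trend line to data that is, by construction, pure routine variation; the fitted slope almost never comes out at zero, which is exactly how assumption 2 above misleads us.

```python
import random

# Simulate 24 months of a stable process: pure routine variation
# around a constant level of 10, with no real trend by construction.
random.seed(42)
values = [10 + random.gauss(0, 1) for _ in range(24)]

# Fit an ordinary least-squares trend line, as Excel's SLOPE()
# function or an added chart trend line would.
n = len(values)
x_mean = (n - 1) / 2
y_mean = sum(values) / n
slope = (sum((i - x_mean) * (v - y_mean) for i, v in enumerate(values))
         / sum((i - x_mean) ** 2 for i in range(n)))

print(f"fitted slope: {slope:+.4f} per month")
# The slope is almost never exactly zero, so the trend line "finds"
# a direction in data that has none.
```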
In filtering out the noise, Step 7 of the PuMP process utilizes the following steps: 1) Measure frequently enough; 2) Gather enough context; 3) Use a time-series graph; 4) Filter out noise; and 5) Look for signals.
To find the signals, Barr advocates the use of XmR charts, a specific type of process control chart designed to find the true signals in our safety performance measures. The “X” represents our performance measure values, and the “mR” represents the moving range: the absolute differences between successive values in the performance measure time series. The average moving range is how the amount of routine variation in our performance measures is calculated.8
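As a concrete illustration of those calculations, here is a minimal Python sketch of a standard XmR computation: the central line, the average moving range, and the natural process limits. The monthly values are hypothetical, and the scaling constants 2.66 and 3.268 are the conventional ones from Donald Wheeler’s process behavior chart formulas rather than anything specific to PuMP.

```python
# Minimal XmR calculations for a performance measure time series.
# The monthly values below are hypothetical.
values = [3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.5, 3.2, 2.7, 3.0, 5.2]

# X chart: the central line is the mean of the measure values.
x_bar = sum(values) / len(values)

# mR: absolute differences between successive values; their average
# quantifies the routine month-to-month variation.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits, using the conventional scaling constants.
unpl = x_bar + 2.66 * mr_bar   # upper natural process limit
lnpl = x_bar - 2.66 * mr_bar   # lower natural process limit
url = 3.268 * mr_bar           # upper range limit for the mR chart

print(f"central line = {x_bar:.2f}")
print(f"natural process limits = [{lnpl:.2f}, {unpl:.2f}]")
print(f"upper range limit (mR chart) = {url:.2f}")
```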
Barr presents three basic signals to look for in XmR charts. First is the Outlier — when a single point falls outside the limits of natural variation (Note: this is the only time a single point constitutes a signal). Second is the Long Run — a small upward or downward shift, in which seven points in a row are on the same side of the central line. Third is the Short Run — a big upward or downward shift, where three out of four points in a row are all on the same side of the central line and are closer to one of the natural process limits than the central line.9
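These three rules translate directly into code. The sketch below is my own illustration (not Barr’s software); it continues the previous sketch, reusing the values, x_bar, lnpl and unpl computed there.

```python
def find_signals(values, x_bar, lnpl, unpl):
    """Flag the three basic XmR signals: outlier, long run, short run."""
    signals = []

    # Outlier: a single point outside the natural process limits
    # (the only case where one point alone is a signal).
    for i, v in enumerate(values):
        if v > unpl or v < lnpl:
            signals.append((i, "outlier"))

    # Long run: seven consecutive points on the same side of the
    # central line (a small, sustained shift).
    for i in range(len(values) - 6):
        window = values[i:i + 7]
        if all(v > x_bar for v in window) or all(v < x_bar for v in window):
            signals.append((i, "long run"))

    # Short run: three of four consecutive points all closer to one
    # natural process limit than to the central line (a big shift).
    for i in range(len(values) - 3):
        window = values[i:i + 4]
        if (sum(v > (x_bar + unpl) / 2 for v in window) >= 3
                or sum(v < (x_bar + lnpl) / 2 for v in window) >= 3):
            signals.append((i, "short run"))

    return signals

print(find_signals(values, x_bar, lnpl, unpl))
```

Applied to the hypothetical series above, only the final month’s jump to 5.2 is flagged: a single point above the upper natural process limit.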
Step 8: Reaching Performance Targets
The eighth step involves closing in on the use of performance measure targets to actually improve performance. To accomplish this step, Barr provides the following actions: 1) Set sensible targets; 2) Prioritize the performance gaps; 3) Find the causes; 4) Choose high-leverage solutions; and 5) Look for signals and check for impacts.10
In setting sensible targets (you will find this controversial), you need to set targets that stretch the organization yet that workers believe are reasonably achievable. I know you want to set a zero-injuries target; in fact, you may feel ethically compelled to do so. Set your targets to encourage continuous improvement rather than setting an unrealistic target that leaves employees feeling powerless.
Closing the gaps between actual performance and the levels we are aiming for must be prioritized, rather than chasing every gap discovered.
Seeing a signal in our performance measures should lead to determining what the causes of that signal might be, so appropriate actions can be taken to correct the discrepancy.
When choosing high-leverage solutions, avoid the temptation to conclude that the reason your organization’s performance is not meeting your expectations is something outside of your control. As Barr states, “deal with performance gaps by finding out what is holding those gaps open… focus on the ‘why’ rather than the ‘why not’.”
Last, but not least, pay close attention to the signals during and after the implementation of your solutions to address poor performance. You obviously want to know if the solution implemented has had a positive effect on performance. Be open to feedback from those implementing the solution and incorporate it into your performance review.
1 Barr, S. 2014. Practical Performance Measurement: Using the PuMP Blueprint for Fast, Easy, and Engaging KPIs. Samford, Qld., Australia: The PuMP Press.
2 Ibid. pp. 242-249.
3 Ibid. p. 250.
4 Few, S. 2013. Information Dashboard Design: Displaying Data for At-a-Glance Monitoring, 2nd ed. Analytics Press.
5 Barr, op. cit., pp. 251-283.
6 Ibid. pp. 274-279.
7 Ibid. pp. 280-283.
8 Ibid. pp. 284-289.
9 Ibid. p. 299.
10 Ibid. pp. 317-334.