In her book “Practical Performance Measurement – Using the PuMP Blueprint for Fast, Easy, and Engaging KPIs,” Stacey Barr provides a powerful, systematic method for developing measures that lead to improvement.1
More often than not, Barr notes, what people use as performance measures are not measures at all: initiatives, events or milestones in implementing a strategy or project, statistics that monitor actions being taken, surveys, and vague verbiage.2
Initiatives are not measures
Initiatives are not safety performance measures. Safety initiatives include regulatory adherence, establishing a safety task force, conducting safety contacts with employees, and performing safety observations. None conveys performance or, as Barr points out, describes “results.”
Barr writes, “Monitoring milestones is project management; monitoring results is performance management.”3
Monitoring progress toward completing activities is not performance measurement.4 Safety activity measures that are not performance measures include “100% of employees have been briefed on the corporate safety policy” and “on-time delivery of the department’s safety management system plan.” These measures don’t provide insight into “results,” only activity.
Sources of data — injury and illness statistics, safety incident reports, near-miss records — are not safety performance measures.5 When we collect such data, we fail to ask ourselves the business question, “How am I going to use this?”
“Weasel” words are not measures
A few vague words — efficiency, effectiveness, key, emergent, core, impact — do not constitute a safety performance measure. Barr considers these types of words to be “weasel words, words that suck the meaning out of whatever is being stated…”6 Safety examples include statements such as “Strive to Improve the Effectiveness of Safety,” “Safety is Our Key Value,” and “Safety First.” All are impossible to measure.
Barr defines a performance measure in terms of comparisons that lead to performance improvement. Isn’t this what we are seeking from our metrics? Specifically, her definition is:
A performance measure is a comparison that provides objective evidence of the degree to which a performance result is occurring over time.7
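To make this definition concrete, here is a minimal Python sketch of a measure expressed as a comparison of a result over time. The measure, data values, and baseline are hypothetical, for illustration only; they are not drawn from Barr’s book.

```python
# A sketch of Barr's definition: a performance measure is a comparison
# that shows the degree to which a result is occurring over time.
# The measure and data below are hypothetical.

# Monthly lost-time injury frequency rate (injuries per 200,000 hours)
monthly_rate = {
    "2023-01": 1.8, "2023-02": 2.1, "2023-03": 1.6,
    "2023-04": 1.4, "2023-05": 1.2, "2023-06": 1.1,
}

# Baseline: average of the first three months, before an improvement effort
baseline = sum(list(monthly_rate.values())[:3]) / 3

# The measure is the comparison: degree of change relative to the baseline
for month, rate in monthly_rate.items():
    change = (rate - baseline) / baseline * 100
    print(f"{month}: {rate:.1f} ({change:+.0f}% vs. baseline)")
```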
Barr presents her eight-step team-based PuMP Performance Measure Blueprint process. The steps are:
Step 1: Understanding Measurement Purpose
Step 2: Mapping Measurable Results
Step 3: Designing Meaningful Measures
Step 4: Building Buy-in to Measures
Step 5: Implementing Measures
Step 6: Reporting Performance Measures
Step 7: Interpreting Signals from Measures
Step 8: Reaching Performance Targets
Barr offers these suggestions for undertaking the PuMP journey:
• Start small; run a pilot, and use it to learn the PuMP process and adjust as needed.
• Involve a small group of volunteers, between four and seven people.
• Set a timeframe of eight weeks and stick to it.
• Focus on a simple application of the PuMP process, and carry it through to the end.
• Look for value gained from the performance measures developed.
• At the end, reflect on the process: what worked and what did not.
• Write a case study on the performance improvements achieved, not just the measures themselves.8
In the pilot, each of the eight PuMP steps is carried out one week at a time.
Week 1: Understanding measurement purpose
Get your Measures Team curious about why PuMP is the chosen approach. Assert that this time performance measurement is going to be different.
Barr notes most of us go into new performance measurement processes holding onto old beliefs and bad habits: using measures to judge people; using weasel words to write goals; asking people to simply sign off rather than be involved in development; using voluminous performance reports to explain away missed targets; comparing this month’s measures only to last month’s; and defaulting to educating, resourcing or funding as improvement initiatives. Barr points out that illuminating these bad habits and old beliefs starts the process of believing they can be changed and that more meaningful performance measures can be developed.9
The first PuMP meeting sets the foundation for the entire PuMP process. Barr developed the PuMP Diagnostic, a questionnaire with a scale from Undesirable to Desirable. The questions include:
• How measurable is our strategy?
• How meaningful are our measures?
• How well are our performance measures implemented?
• How useful and usable are our performance reports?
• How well are performance measures improving performance?
The questionnaire and subsequent conversations with team members introduce the new approach to developing performance measures.10
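As an illustration of how a team might tabulate diagnostic responses, here is a minimal Python sketch. The five-point numeric scale (1 = Undesirable, 5 = Desirable) and the simple averaging are assumptions for illustration, not Barr’s actual scoring method.

```python
# A sketch of capturing PuMP Diagnostic responses. The 1-5 scale and
# averaging are assumptions; Barr's instrument may score differently.

questions = [
    "How measurable is our strategy?",
    "How meaningful are our measures?",
    "How well are our performance measures implemented?",
    "How useful and usable are our performance reports?",
    "How well are performance measures improving performance?",
]

# One team member's ratings, keyed by question (hypothetical values)
ratings = dict(zip(questions, [2, 3, 1, 2, 2]))

average = sum(ratings.values()) / len(ratings)
print(f"Average rating: {average:.1f} of 5")
for question, rating in ratings.items():
    print(f"  {rating}/5  {question}")
```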
Week 2: Mapping measurable results
Here the team determines if the organization’s safety strategy is measurable. As Barr notes, most companies’ strategies are not measurable. Why? Weasel words. Action-oriented initiatives and projects. Multi-focused goals. Confusion with business as usual. A lack of rigor and logic.11 Safety strategies include: “Deliver state-of-the-art safety services;” “Distribute new fall protection procedures;” “Provide high-quality safety support;” “Sustain the current level of safety;” and “Each department reduces its lost-time injury rate to 0.5.”
Barr provides five steps in the Week 2 session to convert an existing strategy into something easier to measure meaningfully — and consequently easier to communicate to the organization and execute.
The steps include: 1) Test the strategy’s measurability; 2) Tease out the implied performance results; 3) Map the relationships among the results; 4) Test the logic of the Results Map; and 5) Highlight priority results worth measuring.12
Ask these questions to test a strategy’s measurability: Are our safety goals results-oriented or are they actions? Are there any weasel words in the goals? Are the goals multi-focused? Are the goals something we should improve, can improve, and actually will improve? How does each goal align with the overall strategy?
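As a toy illustration of the first test, here is a Python sketch that flags weasel words in a goal statement. The word list is drawn from the examples in this article and is illustrative only, not an exhaustive or official list.

```python
# A toy sketch of testing a goal statement for weasel words.
# The word list is illustrative, taken from examples in this article.

WEASEL_WORDS = {"efficiency", "effectiveness", "key", "emergent",
                "core", "impact", "strive", "quality"}

def weasel_check(goal: str) -> list[str]:
    """Return any weasel words found in the goal statement."""
    words = {w.strip(".,;").lower() for w in goal.split()}
    return sorted(words & WEASEL_WORDS)

print(weasel_check("Strive to improve the effectiveness of safety"))
# ['effectiveness', 'strive']
```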
Clearly articulate the observable or detectable results implied by each goal. Ask yourself, “What is the actual result I want to achieve with this goal?”
Next, map out the relationships between a safety result and the corporate strategy and its goals. Barr uses her Results Map© to illustrate these relationships.13 The map consists of a circular target with four levels. Results are positioned within each level in a circle, or as Barr writes, a “bubble.”
The inner circle contains the corporation’s results describing its mission and vision (i.e., purpose), typically with a 10- to 20-year time horizon. The next circle contains current strategic goals, describing needed improvements to meet the corporation’s purpose within a two- to five-year window. The third circle from the center contains the most important goals for business processes (manufacturing, maintenance, engineering, etc.), with a one- to two-year timeframe. Finally, the outer circle contains results specific to parts of the business processes or units, with a timeframe of a few months to one year.
Each level’s results are critical to achieving the results at the next inner level. The next step: identify the relationships among the results and connect them according to their Cause-Effect, Companion, or Conflict relationships.
Once relationships are identified, the team removes duplication among results, clarifies ambiguity, corrects relationship links, fills important gaps, and locates results at the strategic, tactical, and operational levels. Results worth measuring are highlighted for further work.14
The PuMP Results Map© allows employees to understand all of the organization’s results and to see why outcomes at different levels matter to its overall success.
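To illustrate the structure, here is a minimal Python sketch of a Results Map represented as a graph: results sit at one of four levels and are connected by typed relationships. The example results and links are hypothetical, not from Barr’s book.

```python
# A sketch of a Results Map as a graph: results at four levels
# (purpose, strategic, process, operational) linked by typed
# relationships. Example results and links are hypothetical.

from dataclasses import dataclass

@dataclass
class Result:
    name: str
    level: str    # "purpose", "strategic", "process", or "operational"
    horizon: str  # time horizon associated with the level

results = {
    "zero_harm": Result("All employees go home uninjured", "purpose", "10-20 yr"),
    "fewer_ltis": Result("Lost-time injuries decline", "strategic", "2-5 yr"),
    "hazards_closed": Result("Identified hazards are closed promptly", "process", "1-2 yr"),
    "near_miss_reporting": Result("Near misses are reported", "operational", "months-1 yr"),
}

# Outer-level results support results at the next inner level; each
# link is typed as cause-effect, companion, or conflict.
links = [
    ("near_miss_reporting", "hazards_closed", "cause-effect"),
    ("hazards_closed", "fewer_ltis", "cause-effect"),
    ("fewer_ltis", "zero_harm", "cause-effect"),
]

for src, dst, kind in links:
    print(f"{results[src].name} --{kind}--> {results[dst].name}")
```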
Next month I cover Steps 3 through 8.
1 Barr, S. 2014. Practical Performance Measurement – Using the PuMP Blueprint for Fast, Easy, and Engaging KPIs. The PuMP Press, Samford, Qld., Australia.
2 Ibid., p. 5.
3 Ibid., p. 7.
4 Ibid., p. 8.
5 Ibid., pp. 9-10.
6 Ibid., p. 10.
7 Ibid., p. 15.
8 Ibid., pp. 57-58.
9 Ibid., pp. 79-87.
10 Ibid., pp. 88-94.
11 Ibid., pp. 96-115.
12 Ibid., p. 116.
13 Ibid., pp. 123-130.
14 Ibid., p. 134.