Measuring the Success of an EHS Audit Program
Aug 23rd, 2010 | By Lawrence B. Cahill | Category: Auditing

Environmental, health and safety (EHS) audit program managers often ask, “How do I know if my program is working?” This is certainly a legitimate question, and it is often a “pass-down” of the same question asked of the program managers by senior management, including the board of directors. This article explores the possible metrics that could be used to determine success or failure. As with most things in life, the answer is not obvious and can be quite complex. Let’s take a look at six common metrics that are often touted as valid measuring sticks:
- A reduction in environmental releases and workplace injuries
- Improved compliance as defined by a reduction in fines and enforcement actions
- A reduction in the total number of audit findings
- A reduction in the number of high-risk audit findings
- A reduction in the number of repeat audit findings
- A high rate of on-time closure of audit action items
Analysis of these six metrics shows that none of them by itself provides a logically compelling performance measure. Of the group, the strongest seems to be the last: a high rate of on-time closure of audit action items.
External Measures
The first two measures relate to how improved performance might be evaluated using external metrics: a reduction in incidents and a reduction in notable regulatory noncompliances.
1. A Reduction in Environmental Releases and Workplace Injuries
If only there were a direct correlation between the rigor of an audit program and a reduction in environmental releases and workplace injuries. Alas, although there may be a relationship, little evidence suggests a direct causality.
Certainly an audit program can contribute to improved performance, but history shows that audit programs, no matter how rigorous, are not an adequate substitute for establishing sound management systems and controls at sites. Audit programs are meant to be verification programs; that is, their objective is to periodically verify compliance with applicable rules and regulations and with self-imposed corporate standards and controls. Once an audit program is used as a surrogate for sound on-site EHS management systems, the site is doomed to fail. Accountability must reside with site management and the systems and controls implemented on site; it cannot rest on a checkup visit once every two to three years to “get back on track.”
For any number of reasons (e.g., independent audits occur only once every two to three years and are typically just a snapshot evaluation of compliance), expecting periodic audits to produce a reduction in releases and workplace injuries misplaces the emphasis on who is in fact responsible. Sadly, there are too many cases where the audit program manager has become the “fall guy” when an incident takes place (“Why didn’t the audit program identify the situation that resulted in the incident?”).
Bottom line evaluation of this metric: Poor
2. Improved Compliance as Defined by a Reduction in Fines and Enforcement Actions
This measure is often considered by management to be a good way to determine the value of an audit program. However, the measure is typically relevant only in the United States, where regulatory agency fines and enforcement actions are common. For example, the U.S. Environmental Protection Agency (USEPA) issued over $186 million in civil and criminal enforcement penalties in its last full fiscal year, and the U.S. Occupational Safety and Health Administration (OSHA) issued its largest fine ever, $87 million, in 2009. Most other countries’ regulatory agencies take a more cooperative approach with the regulated community; fines and enforcement actions are rare. So for multinational companies, this is not a very good measure.
Second, even in the United States, the enforcement posture of the federal government can change from one presidential administration to the next. For example, under the Obama administration, USEPA’s budget has increased by 35%. In fiscal year 2011, the budget for USEPA’s Office of Enforcement and Compliance Assurance (OECA) alone is $618 million, the largest in OECA history and larger than OSHA’s total budget. OECA states one of its main enforcement goals going forward as follows:
“Aggressively go after pollution problems that make a difference in communities. Vigorous civil and criminal enforcement that targets the most serious water, air and chemical hazards…”
Similarly, on June 18, 2010, OSHA published and made effective its Severe Violator Enforcement Program (SVEP) Directive. OSHA announced that it was “implementing the program to focus on employers who continuously disregard their legal obligations to protect their workers.” As a result, the playing field has changed substantially over the past year and a half in the United States, and the number of fines and enforcement actions likely will rise, independent of the rigor of any company’s audit program.
Bottom line evaluation of this metric: Poor
Internal Measures
The following four measures relate to how improved performance might be evaluated using internal metrics: a reduction over time in the number of total, high priority, or repeat findings and a high rate of closure of audit action items.
1. A Reduction in the Total Number of Audit Findings
This metric is rather easy to calculate but even easier to dismiss as not meaningful, principally because “all findings are not created equal.” For example, say an audit team finds that a regulatory program does not exist at a site because the site erroneously believes the program does not apply to it. One finding. A second team visits the site two years later and finds that much work has been undertaken and the program has been largely implemented; however, four administrative requirements are still not being met completely. Four findings. It is pretty clear that the one finding on the first audit far outweighs the four findings on the second. Hence, using the total number of findings as a measure would produce quite misleading results.
Bottom line evaluation of this metric: Poor
2. A Reduction in the Number of High-Risk Audit Findings
Many companies rank individual audit findings by the level of risk posed to the organization. They might even use a scheme similar to the one provided below:
- SIGNIFICANT: HIGHEST PRIORITY ACTION REQUIRED: Situations that could result in substantial risk to the environment, the public, employees, stockholders, customers, the Company or its reputation, or in criminal or civil liability for knowing violations.
- MAJOR: PRIORITY ACTION REQUIRED: A situation that does not meet the criteria for a Significant finding but is more than an isolated or occasional situation. The situation should not continue beyond the short term.
- MINOR: ACTION REQUIRED: Findings may be administrative in nature or involve an isolated or occasional situation.
Thus, one potential metric would be the trend in the number of high-risk findings. Over time one would expect the number of high-risk findings to decrease as sites are audited a second and third time. Unfortunately, even with the definitions provided above there is often a lack of consistency in applying the ratings scheme, leaving some to question the merits of using this metric. Some of the reasons for inconsistency include
- No matter how well vetted within the organization, the definitions leave room for interpretation.
- Some but not all auditors believe that no regulatory finding could possibly be minor.
- Some but not all auditors (and at times legal counsel) believe that all regulatory findings should be classified as significant.
In addition, many companies do not classify findings based on risk, believing that all findings are equally important. Obviously, for these companies, this metric is not appropriate.
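For companies that do rank findings, the trend itself is simple to compute once ratings are applied consistently. The following is a minimal sketch in Python; the audit data and cycle names are hypothetical illustrations, and the labels follow the Significant/Major/Minor scheme above:

```python
from collections import Counter

# Hypothetical findings data: severity labels assigned on each audit cycle,
# following the Significant/Major/Minor scheme described above.
audit_cycles = {
    "First audit": ["SIGNIFICANT", "MAJOR", "MAJOR", "MINOR", "MINOR"],
    "Second audit": ["MAJOR", "MINOR", "MINOR", "MINOR"],
    "Third audit": ["MINOR", "MINOR"],
}

for cycle, findings in audit_cycles.items():
    counts = Counter(findings)
    # Treat Significant and Major findings as "high-risk" for trending purposes.
    high_risk = counts["SIGNIFICANT"] + counts["MAJOR"]
    print(f"{cycle}: {len(findings)} total findings, {high_risk} high-risk")
```

A declining high-risk count across cycles is the signal this metric looks for, but, as noted above, the tally is only as good as the consistency of the ratings feeding it.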
Bottom line evaluation of this metric: Fair
3. A Reduction in the Number of Repeat Audit Findings
Many corporate audit programs are designed to capture and report on repeat findings on individual facility audits. A repeat finding can be defined as
- A finding that had been identified in the previous independent audit of the same topic (e.g., environmental, employee safety) for which a corrective action has not been completed, or
- A finding of a substantially similar nature to one that had been identified, and reportedly corrected, in the previous independent audit of the same topic.
These repeat findings are typically considered serious findings and justifiably receive significant management attention.
The problem with using repeat findings as a valid metric for measuring performance is that most companies have not gone to the trouble of defining what is and what is not a repeat finding. The result is varying interpretations by auditors. One auditor might say that any exceedance of wastewater discharge limits is a repeat finding if an exceedance was identified on the previous audit. Another auditor might look a bit deeper and determine that, because of product changes, the treatment plant might have had to be operated differently and that the current pH exceedances have a different root cause.
As a consequence, auditors really need to focus on the intent of the repeat-findings classification before labeling something a “repeat.” The real question is whether a breakdown in a management system caused the repeat failure, or whether it was simply an isolated case of a similar nature.
For example, on any given fire safety audit of a large manufacturing site, auditors, if they look long and hard enough, can almost always find an issue with inspections and maintenance of portable fire extinguishers. Should a missing inspection tag on one fire extinguisher out of a universe of several hundred constitute a repeat finding if another extinguisher was without a tag on the previous audit? Probably not, if the fire safety management system is found to be fundamentally sound. These situations should be thought of as recurring findings, not necessarily repeats. Similar situations can be found in other compliance areas where the universe of items to be audited is also quite large (e.g., Material Safety Data Sheets, hazardous waste manifests, wastewater discharges).
In sum, repeat findings should not be used as a performance metric unless everyone is working off the same playbook. Sites should not be punished by the repeat classification when the system is otherwise fully implemented and effective.
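Once the definition is in place, the decision logic can be made explicit so that every auditor works from the same playbook. The sketch below shows one possible decision rule in Python; the function, its flags, and the rule itself are illustrative assumptions based on the two-part definition above, not an established standard:

```python
def classify_finding(same_topic_as_prior: bool,
                     prior_correction_completed: bool,
                     same_root_cause: bool) -> str:
    """Hypothetical decision rule based on the two-part definition above."""
    if not same_topic_as_prior:
        return "new"
    if not prior_correction_completed:
        return "repeat"    # prior finding was never closed out
    if same_root_cause:
        return "repeat"    # corrected on paper, but the system breakdown remains
    return "recurring"     # similar symptom, different cause; system is sound

# The fire-extinguisher example: one untagged unit out of several hundred,
# with a fundamentally sound inspection system behind it.
print(classify_finding(same_topic_as_prior=True,
                       prior_correction_completed=True,
                       same_root_cause=False))  # prints "recurring"
```

Run against the fire-extinguisher example, the rule returns “recurring,” reserving the repeat label for genuine management-system breakdowns.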
Bottom line evaluation of this metric: Good if the term “repeat finding” has been defined and a uniform understanding of the term is applied to the audit program.
4. A High Rate of On-Time Closure of Audit Action Items
Any audit results in a corrective action plan, usually developed by the management of the site that was audited. The plan includes a description of the finding, the proposed corrective action, the person responsible for the action, and the target date for completion. Many companies formally track the closure of these action items and calculate the percentage completed on time. It’s all about “Say what you do, and do what you say.” This metric can be very useful in determining the value of, and commitment to, an audit program. Its benefits are
- It’s a simple measurement.
- The responsible individuals are the ones defining the actions and setting the dates.
- It’s a true measure of management’s commitment to compliance.
- Using a percent closure metric normalizes performance among differing operations.
Even with this metric, there are challenges. Some of those observed in companies that use this as a metric include
- The numerator and denominator of the ratio (i.e., action items closed on time to total action items) should be clearly defined and reported consistently.
- A perfect 100% on-time closure rate is unrealistic and should be challenged.
- Original timelines need a sanity check; once items are being tracked, there is a tendency to revise or extend due dates.
- Original target dates need to be fixed, unless an extension is reviewed and approved by a senior executive.
Let’s take a brief look at how this metric might be put into play. The following chart is an example of how action item closure might be presented for a company’s five strategic business units (SBUs) for the latest six-month period.
The goal is to have each SBU achieve a greater than 90% on-time closure rate, recognizing that the ultimate objective in a perfect world is 100%. What conclusions can we draw from the results? First, SBUs 1 and 3 are in good shape and are meeting the goal. Second, SBUs 2 and 4 are not meeting the goal, with SBU 2 achieving only a 60% on-time closure rate. This is clearly a red flag. Third, SBU 5 should be praised for a perfect 100% or scrutinized for its perfection, depending upon the relative cynicism of the reviewer.
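As a sketch of how the underlying numbers might be produced, the short Python example below computes on-time closure rates against the 90% goal. The per-SBU action-item counts are hypothetical, chosen only to mirror the results described above; actual figures would come from the company’s tracking system:

```python
# Hypothetical action-item counts per strategic business unit (SBU):
# (closed on time, total due) over the latest six-month period.
sbu_action_items = {
    "SBU 1": (46, 50),
    "SBU 2": (12, 20),
    "SBU 3": (57, 60),
    "SBU 4": (30, 40),
    "SBU 5": (25, 25),
}

GOAL = 0.90  # each SBU should close more than 90% of action items on time

for sbu, (on_time, total) in sbu_action_items.items():
    rate = on_time / total
    status = "meets goal" if rate > GOAL else "below goal -- red flag"
    print(f"{sbu}: {rate:.0%} on-time closure ({status})")
```

Normalizing to a percentage, as the last bullet above notes, lets SBUs of very different sizes be compared on the same chart.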
Bottom line evaluation of this metric: Good to Excellent
About the Author
Lawrence B. Cahill, CPEA, is a Technical Director at Environmental Resources Management in Exton, Pennsylvania, U.S.A. He has over 30 years of professional EHS experience in industry and consulting. He is the editor and principal author of the widely used text Environmental, Health and Safety Audits, published by Government Institutes, Inc., and now in its eighth edition. (The ninth edition will be released in the fall of 2010.) He contributed four chapters to the 1995 book Auditing for Environmental Quality Leadership, published by John Wiley & Sons, Inc. Mr. Cahill has published over 50 articles and has been quoted in numerous publications, including the New York Times and the Wall Street Journal.
Image: Provided by Lawrence Cahill.