Nov 30, 2007

A measure of compliance success

Compliance programs must be effective but also achieve business goals

In the current era of enforcement, shareholder suits and explosive class-action activity, failing to manage operational risk and compliance poses huge risks. But the challenge also presents some game-changing opportunities for organizations that choose to embrace it. Over the past few years, organizations have focused much of their time, energy and resources on designing, implementing and improving governance, risk and compliance programs to address these risks. Now, executives and board members are appropriately asking the essential self-reflective questions that will prove whether companies have achieved their aims: ‘Is all of this work really working? Are we delivering outcomes that really matter?’

While the art, science and practice of program evaluation are still in their infancy, there are several sound practices that organizations of all sizes can use to get answers to these questions. As we approach program evaluation, remember that managing governance, risk and compliance is fundamentally similar to – not fundamentally different from – other enterprise processes. Therefore, we can use tried-and-true techniques to evaluate our approach.

That said, what should we evaluate? What are the goals of the evaluation? How should we do it? What measures must we keep to provide the information for a meaningful evaluation? Are organizations doing an effective job of evaluation today?

What to evaluate?

Generally speaking, there are two types of evaluations to undertake: ‘effectiveness evaluation’ and ‘performance evaluation’. The former helps an organization meet minimum requirements and receive credit for putting in place a program that is logically designed by using sound practices. The latter helps an organization understand if the program is truly delivering business benefits and also identifies where investments can be optimized.

In the world of compliance and internal control, ‘effectiveness’ is a term of art with a specific meaning. Although legal compliance (including issues associated with preventing and detecting fraud) represents a subset of the issues typically included in operational risk, it is important that organizations use this common denominator when evaluating a program, for it is this definition that enforcement entities will apply when (not if) things go awry.

It is important that we accept this definition, and not attempt to expand it. Doing so only invites regulatory uncertainty and confusion. Most importantly, redefining ‘program effectiveness’ is unnecessary: most organizations will find more value in the broader concept of ‘program performance’.

Performance brings into view the totality of the program and determines if it is delivering real business value. This concept certainly includes ‘effectiveness’, as a solid program must meet the minimum legal requirements. However, as most executives know, performance helps an organization dig into the issues that matter most and answer, ‘Is our program delivering business value? Where should we focus our time and resources to make it better?’
 
Taking a step back

Take a step back and consider the goals of organizational performance. At the highest level, all organizations are in business to achieve objectives while staying within a set of specific conduct boundaries.

The governance, risk and compliance approach fits into this picture by providing a capability to identify boundaries and obstacles and establish a system to let management and the board know when the organization is getting close to (or crossing) a boundary. As issues are encountered and addressed, management can improve the program to reduce the likelihood that prior issues resurface or new issues arise unexpectedly.

‘Effectiveness’ looks at whether the program is logically designed to address all mandated and voluntary requirements (design effectiveness), and whether the program is actually operating as designed (operating effectiveness). In this sense, the evaluation helps to determine if the program is delivering required legal and regulatory outcomes and appropriately reflecting the organization’s voluntary promises regarding its approach to governance, risk and compliance. This is the evaluation contemplated by the US Sentencing Guidelines and is a critical process to undertake.

Today, though, shareholders and other stakeholders are demanding more. At a practical level, neither design nor operating effectiveness will help management and the board judge performance or optimally allocate scarce capital. Beyond design and operating effectiveness, there is a need and demand for ‘total program performance.’

Yet, it is clear from preliminary research conducted by Open Compliance & Ethics Group (OCEG) that most entities have not yet mastered the effectiveness evaluation phase, and virtually none are undertaking the steps necessary to ensure high performance levels.

Total program performance

Total program performance looks not only at the effectiveness of the program, but also at its efficiency, responsiveness and the degree to which it delivers business outcomes that go beyond legal and regulatory requirements, as these are the outcomes that really matter to stakeholders. The above dimensions are similar to the classic performance triangle of quality, cost and speed.

There are numerous benefits and challenges to measuring the performance of a program. A well-known maxim is, ‘what gets measured gets done ... what gets rewarded gets repeated.’ The governance, risk and compliance capability is no different.
 
Measuring program performance

The measurement planning process defines the overall measurement strategy, approach, required resources and information. These activities are conducted on a periodic basis to ensure that what you are measuring remains salient to both the program and to its role in the organization.

Management and the board must define enterprise objectives and align appropriate program objectives. As with enterprise objectives, every program is unique and, thus, will pursue unique objectives. That said, there are a few ‘universal program objectives’ that most organizations strive to attain.

These universal program outcomes and the indicators used to measure progress toward them will be discussed in a future article.

Once you understand the fundamentals of what the program is trying to accomplish and how it relates to enterprise performance, define indicators that help evaluate program performance and that can be linked or correlated to the indicators and targets used to measure the business objectives.

Having defined the indicators that speak to the particular needs of your company, management should identify targets that the program intends to deliver. Prioritize the targets based upon their degree of alignment to the business objectives.
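To make this concrete, the short sketch below (in Python, using invented indicator names, targets and alignment scores purely for illustration; none of them come from this article) shows one way to record indicators alongside their targets and their alignment to business objectives, and to prioritize them accordingly.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A program performance indicator tied to a business objective."""
    name: str                # what the indicator measures
    business_objective: str  # the enterprise objective it supports
    target: float            # the level the program intends to deliver
    alignment: float         # 0.0-1.0 judgment of how closely the indicator
                             # tracks the business objective

# Hypothetical indicators, for illustration only.
indicators = [
    Indicator("repeat_audit_findings_rate", "audit outcomes", 0.05, 0.9),
    Indicator("training_completion_rate", "regulatory readiness", 0.95, 0.8),
    Indicator("hotline_resolution_days", "operational efficiency", 30.0, 0.6),
]

# Prioritize targets by their degree of alignment to business objectives.
for ind in sorted(indicators, key=lambda i: i.alignment, reverse=True):
    print(f"{ind.name}: target {ind.target}, alignment {ind.alignment}")
```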

With indicators in place, management should establish mechanisms to collect the appropriate data and monitor performance. Be on guard for those who make numbers mean just about anything. Ensure that management uses reliable data sources, repeatable approaches and consistent aggregation/calculation methods that will afford year-over-year analysis.

The significance of an indicator lies in the ability to report period-over-period to show directional performance. This cannot be achieved unless the approach for gathering the information for the indicator is repeatable. Repeatability is a function of how, and how often, the data will be gathered. If you intend to report an indicator monthly, then the approach must be geared to collecting the same data at that same frequency. In a dynamic business environment, identifying aggregation and calculation methods that can be applied across an enterprise presents a significant challenge, so calculation methods must be normalized consistently, ideally in the same manner as business performance measures are normalized.
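A minimal sketch of this idea, assuming monthly collection and a simple headcount-based normalization (the figures and the normalization basis are hypothetical), shows how one consistent calculation method supports period-over-period reporting:

```python
# Hypothetical monthly raw counts of policy exceptions and headcount,
# illustrating one consistent, repeatable calculation method.
raw_exceptions = {"2007-01": 42, "2007-02": 38, "2007-03": 51}
headcount      = {"2007-01": 1200, "2007-02": 1225, "2007-03": 1250}

def exceptions_per_thousand(month: str) -> float:
    """Normalize the raw count by headcount so periods are comparable."""
    return raw_exceptions[month] / headcount[month] * 1000

# Period-over-period reporting: the same calculation, applied the same
# way at the same frequency, yields directional performance.
months = sorted(raw_exceptions)
for prev, curr in zip(months, months[1:]):
    change = exceptions_per_thousand(curr) - exceptions_per_thousand(prev)
    print(f"{curr}: {exceptions_per_thousand(curr):.2f} per 1,000 "
          f"({change:+.2f} vs {prev})")
```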

Measurement presents challenges

To effectively measure the program, you will have to overcome a number of challenges associated with performance measurement, including:

Unintended consequences: These can occur when inappropriate or ‘perverse’ incentives or measures are put in place. In one professional services firm, contract compliance was historically measured in the first quarter of each year. When the firm switched to continuous monitoring of contract compliance, it found that contracts closed in the first quarter were five times as likely to comply with standard terms and conditions as contracts in the other three quarters. Knowing that the first quarter was all that really mattered had led many to focus on contract compliance only during that quarter.

Perception versus fact: Several program outcomes require measuring the perceptions of stakeholders, typically via surveys and similar instruments. These tools do not necessarily indicate fact. For instance, a survey may ask an employee if he or she has observed misconduct, and the employee may not have the knowledge to recognize whether something actually is misconduct. Nonetheless, surveys do provide an adequate proxy. In some cases, the perception is the ‘fact’ that management is looking to measure. For example, if employees perceive there is some type of misconduct going on in the organization, the perception exists and must be addressed in some manner, even if the underlying assumption is incorrect.

Long-term results: In some cases, the outcome of a program may not be realized for many years, which can make it difficult to obtain measurement data. For example, it may take several years to actually see that the implementation of a certain initiative (for example, a training program on fraud prevention) has helped to prevent, reduce or detect incidents of fraud. In some cases, this can be addressed by identifying meaningful output-oriented milestones that lead to achieving the long-term outcome goal (in this case, keeping track of training data that will help with the long-term goal of reducing fraud in the workplace).

To address this issue, a program should define the specific short and medium-term steps or milestones to accomplish the long-term goal. A road map can identify these interim goals, suggest how they will be measured and establish a schedule to assess their impact on the long-term goal. These steps must be meaningful to the program, measurable and linked to the desired outcome.
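A road map of this kind can be kept measurable with something as simple as the following sketch (the milestones, measures and dates are invented for illustration):

```python
from datetime import date

# Hypothetical road map toward a long-term fraud-reduction goal.
# Each interim milestone is measurable and has a schedule for assessment.
roadmap = [
    {"milestone": "fraud-prevention training rolled out",
     "measure": "completion rate >= 95%", "assess_by": date(2008, 6, 30)},
    {"milestone": "fraud awareness measured via survey",
     "measure": "awareness score >= 4.0/5", "assess_by": date(2008, 12, 31)},
    {"milestone": "reduction in detected incidents",
     "measure": "incidents down 20% vs baseline", "assess_by": date(2009, 12, 31)},
]

for step in roadmap:
    print(f"{step['assess_by']}: {step['milestone']} ({step['measure']})")
```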

Prevention and deterrence: By definition, a key outcome of the program is the deterrence or prevention of negative events. It is very difficult to prove a negative. Deterrence measurement requires consideration of what would happen in the absence of the program. It is often difficult to isolate the impact of the individual program element on any behavior that may be affected by multiple other factors.

Where non-compliance does not threaten physical, environmental or other significant harm, a legitimate long-term target may fall short of 100 percent compliance. In these cases, short-term targets that demonstrate forward progress toward the acceptable long-range goal may make sense.

For areas where failure to prevent a negative outcome would be catastrophic (including programs to prevent life-threatening incidents), traditional outcome measurement might lead to an ‘all-or-nothing’ goal. As long as the negative outcome is prevented, the program might be considered successful, regardless of the costs incurred in prevention or any close calls experienced that could have led to a catastrophic failure. This can be a dangerous and costly practice.

More appropriately, proxy measures can be used to determine how well the deterrence process is functioning. These proxy measures should be closely tied to the outcome, and the program should be able to demonstrate, such as through the use of modeling and/or factor and correlation analysis, how the proxies tie to the eventual outcome. Because failure to prevent a negative outcome is catastrophic, it may be necessary to have a number of proxy measures to help ensure that sufficient safeguards are in place. Failure in one of the proxy measures would not lead, in itself, to catastrophic failure of the program as a whole; however, failure in any one of the safeguards would be indicative of the risk of an overall failure.
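As a simple illustration of tying proxies to outcomes, the sketch below runs a plain Pearson correlation over invented quarterly figures; a real program would use richer modeling and factor analysis:

```python
import numpy as np

# Hypothetical quarterly data: a proxy measure (near-miss reports) and
# the outcome it is meant to anticipate (actual incidents).
near_misses = np.array([14, 11, 9, 12, 7, 5, 6, 4])
incidents   = np.array([3, 3, 2, 3, 1, 1, 1, 0])

# Pearson correlation: a high positive value suggests the proxy moves
# with the outcome and is a reasonable leading indicator to monitor.
r = np.corrcoef(near_misses, incidents)[0, 1]
print(f"proxy/outcome correlation: r = {r:.2f}")
```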

Multiple contributors: Often, several business processes and capabilities contribute to achieving the same goal. The contribution of any one program may be difficult to measure. One approach to this situation is to develop broad, yet measurable, outcome goals for the collection of programs, while also having program-specific performance goals.

One example of this is culture. Ideally, the program will help to develop an environment of trust, accountability and integrity. This, in turn, will contribute to talent attraction, retention and satisfaction.

That said, it is difficult to prove that the program is the only contributor to those outcomes. Nevertheless, management should collaborate to better understand how the full array of processes and programs (human resource processes, evaluation processes, compliance and ethics processes, etc.) work together to achieve desired outcomes and, if appropriate, assign some of the value to the contribution of the program.

Inconsistent or incompatible information: Data may be inconsistent or incompatible across enterprises, and apples are not always compared to apples. For instance, the methodology used to evaluate information privacy risks may be completely different from the methodology used for employment compliance. This is especially true when analyzing information from multiple organizations. Finally, a crucial component of good measurement: care should be taken to normalize data so that accurate analysis can be conducted.
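As a final illustration (with invented scores and scales), ratings produced under different methodologies can be rescaled to a common range before they are compared:

```python
def rescale(score: float, lo: float, hi: float) -> float:
    """Min-max normalize a score from its native scale to the 0-1 range."""
    return (score - lo) / (hi - lo)

# Hypothetical ratings: privacy risk scored 1-5, employment compliance 0-100.
privacy_risk = rescale(4.0, lo=1.0, hi=5.0)        # -> 0.75
employment_risk = rescale(62.0, lo=0.0, hi=100.0)  # -> 0.62

# Once on a common scale, apples can be compared to apples.
print(f"privacy={privacy_risk:.2f}, employment={employment_risk:.2f}")
```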

Scott Mitchell

Scott Mitchell is chairman and CEO of Open Compliance and Ethics Group