Sep 30, 2008

Much ado about nothing

Governance ratings have become a ubiquitous part of modern business

There was a time when corporate governance conferences could barely fill a room. I know because I often sat in the back of those rooms covering the events. How things have changed. To run through the analogies of how prevalent and acknowledged the pursuit of better corporate governance has become would be just too patronizing.

Just as an industry of service providers has grown up around governance, so too have corporate governance ratings proliferated. In fact, these ratings or scores have become so ubiquitous that anyone can peruse Corporate Governance Quotient (CGQ) scores for various companies on Yahoo Finance. Not quite a moon walk, but still pretty remarkable.

So it should surprise no one that a study questioning the relevance of governance ratings would arouse passions. It is aptly titled ‘Rating the ratings’ and carries the more provocative sub-title ‘How good are commercial governance ratings?’ It was published in June by Stanford University Law School’s Rock Center for Corporate Governance.

The study analyzes the commercial corporate governance ratings issued in 2005 by Audit Integrity, RiskMetrics (previously Institutional Shareholder Services), GovernanceMetrics International (GMI) and the Corporate Library along with the ensuing fate of the rated companies through 2007. Authors David Larcker, Ian Gow and Robert Daines quickly and bluntly answered the question of how good the ratings are.

To quote directly from the study’s published abstract, ‘Corporate governance rating firms provide indices to evaluate the effectiveness of a firm’s governance and claim to be able to predict future performance, risk and undesirable outcomes such as accounting restatements and shareholder litigation. ... Our results indicate that the level of predictive validity for these ratings is well below the threshold necessary to support the bold claims made for them by these commercial firms.’

In addition to finding virtually no statistical evidence linking the ratings and stock performance, the results showed almost no correlation between the scores and the likelihood that a company would issue restatements or become the target of a shareholder lawsuit. In fact, the study found companies rated highly by the governance rating firms did no better than poorly rated firms in avoiding problems like restatements or shareholder lawsuits.

‘There is nothing wrong with people shedding light on poorly governed firms,’ says Daines, co-director of the Rock Center. ‘Hopefully you’d be able to consistently spot such firms. You would think if these rating firms were able to do so, it would show up in the results of our study. What we found is that it is easy to spot the problems in hindsight, but it’s very difficult to predict them and find a potential Tyco.’

No invitation to participate


Needless to say, the rating firms we spoke to had plenty to say in response. All three of the firms we reached – GMI, RiskMetrics and the Corporate Library – questioned the data on which the study was based and pointed out they were never consulted.

‘It’s not clear where they got the data. They didn’t contact us,’ says Richard Bennett, CEO of the Corporate Library. Bennett suggests the study authors approached governance ratings as though they were credit ratings, which are paid for by issuers. ‘It’s a very simplistic view, and moreover, we’re not paid by the issuers. The people who pay for this information – investors – are the people who benefit from the ratings. It’s not the same on the credit side.’

Howard Sherman, CEO of GMI, describes the study as ‘weak’. His main concern, he says, is that the analysis is based on too short a period of time.

‘[The study] looked at end of 2005 to 2007. They did not use any data we supplied. They didn’t ask us for any data. It appears to me that they only looked at about one third of our total universe, and they only saw ratings we make available at the public level, not what we make available to our clients. It was a phenomenally limited snapshot,’ Sherman says.

‘It’s good data,’ responds Daines. ‘We looked at whether the ratings were able to predict important outcomes like firm performance or restatements and used a variety of measures and methods. Every now and then, one of the ratings would predict something small, but the correlation wasn’t robust or large enough to lead us to conclude that they were measuring a real governance effect.’

What’s the point?


So the study finds little or no correlation between governance ratings and performance. But was there supposed to be one? As much as the rating firms take issue with aspects of the study’s methodology and its conclusions, they also offer a response that might come as a surprise: Does it matter?

RiskMetrics’ governance business head, Rich Leggett, notes that there have been other studies with mixed results when measuring the correlation between governance on one side and performance, valuation and risk on the other: ‘The point is no one can say there is a correlation or not. There are many factors that come into play, including market cycles. Up until the tech bubble, for example, no one cared. After 2000, everyone cared.’

That might sound heretical coming from someone who’s behind the ratings. But Leggett says the study is looking for a correlation when no one in the business of scoring governance actually claims there is one.

‘The original intent [of CGQ] was to add a layer of insight into what was a lot of information we were providing, based on a client request to help them understand their corporate governance practices and compare corporate governance practices between companies,’ Leggett says. ‘In order to do that, you must create a scoring methodology or rating. It was not designed to be a predictive tool. A governance rating is an input into an investment process.’

Sherman does believe a correlation exists, but not the one the Stanford study analyzes. ‘It is the linkage between corporate governance and cost of capital. The basic idea is that better governed companies generate more trust in the market and have lower costs of capital. If you look at that linkage and think about the capital asset pricing model used by investors, by definition, if corporate governance impacts cost of capital, it will impact the discount rate run through these models. Therefore there will be an impact on valuation. That is not what the Stanford paper tried to measure.’

Like Leggett, Sherman describes the ratings as an input to be considered, not a data point that predicts performance or litigation. ‘Most of our clients are investment managers. They use GMI to either support their mainstream research process and use corporate governance as one of many inputs, or they’re using GMI to support very specific products aimed at the ESG [environmental, social and governance] market. They wouldn’t use us if they weren’t making money. They’re not charitable organizations.’

Whatever the finer points of using ratings, the conclusion of the study is that ratings don’t work. ‘If the scores are not predictive, then I don’t know what use they are. The websites of these firms tout their ability to isolate companies well before they run into trouble. They should be able to predict for whatever they are set up to do,’ says Larcker, the James Irvin Miller Professor of Accounting at Stanford’s Graduate School of Business.

Despite the protestations of agency leaders about the application of specific ratings, there is little doubt that these firms loudly and publicly advocate the predictive ability of their governance ratings. On its own website GMI states: ‘Companies that emphasize corporate governance and transparency will, over time, generate superior returns and economic performance and lower cost of capital. The opposite is also true.’ And this is what Stanford and many public companies take issue with.

Corporate backlash


One question raised by the study that may have particular resonance, at least with the corporate audience, is whether or not directors and management should be concerned about their company’s score. It’s worth noting that the authors found ‘no relation between the governance ratings provided by RiskMetrics with either their voting recommendations or the actual votes by shareholders on proxy proposals.’

‘The fact is, one size does not fit all when it comes to these scores and corporate governance as a whole. But the premise behind these ratings is that one size does fit all. That’s the fundamental flaw,’ says Walt Gangl, former corporate secretary and deputy general counsel at Armstrong World Industries and a Society of Corporate Secretaries and Governance Professionals (SCSGP) board member. ‘I think the biggest problem with these ratings is the opportunity cost. I and so many of my colleagues spend way too much time trying to improve the scores. It’s not about how we do a better job on governance, it is about how we improve our ratings.’

Gangl acknowledges the benefits of better governance, but says efforts can be clouded by a preoccupation with ratings. He thinks boards spend too much time on minor governance issues at the expense of strategic oversight. ‘You get sidetracked with “What’s our rating?” rather than specific, relevant issues, such as looking at risk management within a company’s own enterprise risk framework. Why look at separating CEO and chairman? Why look at term limit policy? This study should get boards to use the time they spend on self-evaluation more productively.’

Leggett is sympathetic to that argument: ‘It means we have to constantly evolve in what it is we’re assessing. The markets change, practices change. You can’t have a static model that doesn’t evolve with the market and we recognize that.’

As for the power of ratings in relation to RiskMetrics’ proxy advisory service – and the study’s conclusion that the scores don’t relate to those voting recommendations – Leggett stresses the fact that they aren’t meant to. ‘There is no factoring in of the scores at all. It’s a policy-based recommendation. Companies should use these ratings as a way to understand their governance practices and benchmark themselves. They should think of them as one indicator out of a series of other things. And they should talk to their shareholders, too. Our institutional clients have very strong opinions and they don’t always agree with us, and that’s a good thing. You need a good dialogue.’

Someone appreciates them


Perhaps the most powerful argument supporting the relevance of ratings is that so many investors use the scores. Even the study’s authors ask: Why would investors continue to pay for the ratings?

They offer four possible explanations: investors believe – apparently mistakenly – that ratings can help them produce higher returns or at least ‘avoid the next Enron’; institutional investors purchase the ratings to protect against potential future claims that they invested or voted unwisely; they use the ratings to obtain underlying data on takeover defenses, compensation and the like, which can be more costly to collect separately; or, finally, perhaps the ratings do help investors predict performance but the study didn’t employ the ‘right’ model.

Responding to raters who challenge the study’s model, the authors suggest that raters disclose their models and periodically disclose how well their ratings really do predict future outcomes.

‘These guys should cooperate with the Stanfords of the world who are trying to get behind this stuff,’ Gangl agrees. ‘Corporate governance is important. These firms sell this information, and if a study says it doesn’t help, they should participate to prove it wrong. This is an important issue, and they need to cooperate.’

Larcker says rating firms were in fact contacted for the study but refused to participate. The exception was GMI: he says the firm had turned him down on a previous study, and he did not believe its response would have changed. While he acknowledges the firms do make some information available as to how they measure their ratings, it is not nearly enough.

The burden, he says, is on the rating firms to demonstrate in a ‘neutral and clinical way’ if their ratings are meaningful. Of course, what ‘meaningful’ actually means is perhaps a matter of argument. Are the scores simply an input to consider, or are they genuinely predictive?

One possibility, says Larcker, is future collaboration: ‘I’m on board with working with the firms to refine this. We’d be happy to work with them. Our view is not to punish anyone; it’s just to get the story straight. We don’t think our study is the be all and end all. I think it is one attempt. I view this as dialing up the debate, and I think collectively we can do something. To the extent you can do it constructively, the [SCSGP], academics, the raters. Maybe we all get together and get it right.’

Says Leggett in response to the notion of participating in future studies, ‘We try to embrace the academic community. We believe if smart people are trying to understand and promote corporate governance, that’s a good thing. Not only do we try to work collaboratively with academics, but we’re looking for ways to improve what we’re doing here, recognizing that the world changes very quickly and a model that doesn’t change as well is not very good.’

Stay tuned...

Ian Sax

In addition to living and breathing corporate governance, Ian Sax freelances for a number of publications and writes fiction and stage plays