The Methodology Behind the "100 Best Corporate Citizens List" 

How the process for recognizing transparency has become more transparent itself.

 

By Richard J. Crespin and Elizabeth Boudrie

 

The two of us are old hands at research and analysis, but it has only been during the past year that we have taken complete control over administering CR Magazine's "100 Best Corporate Citizens List." Modern corporate responsibility is a young field, and few, if any, organizations have been rating CR longer than CR Magazine. So when we got the call to take over the List, we sat down to get "smart."

 

We read the CR Magazine methodology. We met repeatedly with experts—CR practitioners, academics, NGOs, investors, regulators, and more—including those on our very own Methodology Committee. We also made an effort to learn about other research methods and other ratings systems, not just in CR, but in other fields as well: Fortune’s “500 List” and “Best Places to Work List,” even as far afield as college football's Bowl Championship Series. And finally, we listened to our critics. We read all the public comments folks submitted, looked at the bloggers, and even held a "town hall" at the 2010 CRO Summit in Chicago.

 

From all that listening and reading, we distilled the following list of complaints—and remedies.

 


Why is that SOB on the list? 
Some industries or companies elicit strong feelings in people.  Some hate alcohol, tobacco, or firearms. Some can't stand genetically modified foods.  Others consider mining and oil companies inherently evil.  So why do some of those companies make the list?

Source: Public comment.

Because, objectively, they belong on the list.
If you think an industry just shouldn't exist, that's an issue to take up with your Congressman. Insofar as industries do exist, though, we don't make subjective moral judgments. We report objective data.

We take a "hate the sin, love the sinner" attitude. We want those businesses that do exist to operate as transparently and responsibly as possible. The list rewards transparency and accountability. It punishes irresponsible behavior with its Red Card system, flagging even the most transparent companies if they did something irresponsible. In this way, we put data in the hands of the people best positioned to use it: those who do business with, work at, live next to, and invest in these companies.

You hypocrites! You advocate for transparency while keeping your methodology secret. People feel we should reveal the "secret sauce."

Source: Public comments, Stan Litow of IBM's blog, SustainAbility's Rate the Raters Report, and questions from Russell 1000 companies.

You're right. The methodology wasn't transparent enough. But it's becoming ever more so.
We took our own medicine and have now made every aspect of the methodology we could think of publicly available. We're applying the same medicine to continuous improvement: if there's something you want to know, just let us know. (See the public comment section at the bottom of this article.)

 

We didn't score well—now what? 
Companies want clear guidance to improve their ranking. What can they do?

Source: SustainAbility’s Rate the Raters Report, CRO Summit town hall meeting, and feedback from Russell 1000 companies.

We've made it easier than you think.
Now that the methodology and data elements are all online, any company can download them and review its own disclosures. Need help? We'll give you your data file for free. For a small fee, we'll benchmark you against any set of companies you want in the "100 Best."

Footnote: Notwithstanding the above, the one thing that makes improvement hard to predict (not hard to do, but hard to predict how much a given action will improve your score) is that this is a relative ranking. How a company's rank rises or falls from year to year depends on how everyone else does.

Transparency? That's it?  Simply asking companies to disclose their information sets too low a bar, according to some. They want the list to go beyond that and judge actual performance in CR. Is transparency really all there is to this ranking?

Source: Public comment, CRO Summit town hall meeting.
 

Maybe one day transparency will be too low a bar, but that day is not today.
Accountability is the first condition of citizenship, and right now companies still have a lot of work to do in making themselves more accountable. At least 30 companies last year, as published in CR's "Black List," disclosed absolutely nothing, failing to report even one of the voluntary data elements. Even the best companies have a ways to go. None come close to a perfect score.

That said, we do use performance data. The financial, philanthropy, employee relations, governance, climate change, and environment categories (every category except human rights) include performance data. To be sure, it's a small share, just 18 of the 324 total data elements, or about 5.5 percent, but it's there.

I can be evil and still make your list if I just disclose it?  Because the list rewards disclosure, a company could do bad things transparently and score well. Right?

Source: Public comment.
 

Technically, true. Practically, no.
Yes, a company could disclose bad actions and score well. But in reality, people don't talk about things they're bad at. In practical terms, better-performing companies disclose more because they have more to tout. Furthermore, we can, and do, use Yellow and Red Cards to address potentially and demonstrably bad behavior. More on that below.
 

Yellow and Red Cards?  Who are you to judge? 
Giving a company a Red or Yellow Card relies on subjective judgment, which undermines any claim the list has to total objectivity.

Source: Public comment.
 

Sometimes you have to make the tough calls.
Yes, issuing a Red or Yellow Card does require a subjective judgment. We instituted this process to account for information not picked up through other means, and we administer it with due care, fully recognizing its subjective nature while trying to make even these subjective calls as rules-based as possible. Here's how it works…

We begin by having our research team at IW Financial gather news stories, court cases, and other information in the public domain related to the seven categories for every company on the preliminary "100 Best" list. Our editorial committee then weighs each piece of evidence. First we determine gravity: we consider an item grave if it a) impacted life, limb, or significant property, and/or b) involved behavior far outside accepted norms in that industry. After that, we distinguish final judgments from pending actions. Final judgments in grave matters (guilty verdicts, admissions of guilt, consent decrees, and the like) earn a Red Card. Pending actions in grave matters (scheduled court cases, regulatory actions, administrative filings, and so on) can earn a Yellow Card.
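For readers who think in code, here is a minimal sketch of that decision rule. The data model and field names are our own illustration, not IW Financial's actual screening system; note that a pending grave matter can, but does not automatically, earn a Yellow Card.

```python
# Illustrative only: a toy version of the Red/Yellow Card rule described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    impacts_life_limb_or_property: bool  # gravity test (a)
    far_outside_industry_norms: bool     # gravity test (b)
    final_judgment: bool   # guilty verdict, admission of guilt, consent decree
    pending_action: bool   # scheduled court case, regulatory or administrative filing

def assess(item: Evidence) -> Optional[str]:
    """Return 'RED', 'YELLOW', or None for a single piece of evidence."""
    grave = (item.impacts_life_limb_or_property
             or item.far_outside_industry_norms)
    if not grave:
        return None      # non-grave items never earn a card
    if item.final_judgment:
        return "RED"     # final judgment in a grave matter
    if item.pending_action:
        return "YELLOW"  # pending grave matter, at the committee's discretion
    return None
```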

We fully acknowledge this involves a degree of subjective judgment. We also admit that sometimes we make bad calls. For example, we Yellow-Carded ExxonMobil last year for an action that should have earned a Red Card; we reversed that call this year. To counterbalance that subjectivity, we not only use the rules-based process outlined above, we also disclose all the data we considered, even when we didn't end up issuing a Red or Yellow Card. That way you can make your own calls.

I don't care about that issue. Why do you? 
Different issues excite different people. Some say we should get rid of certain criteria. Others think we should weight the categories differently.

Source: Public comment.
 

We’re not the only ones who care about our categories.
What defines corporate responsibility remains an unsettled issue. Literally dozens of different definitions exist. This is ours, and we're convinced it's among the most balanced and comprehensive definitions out there. It's also a consensus definition, drawing on well-established sources including the Global Reporting Initiative, the Carbon Disclosure Project, and the UN Global Compact, to name a few.

We also rely on a Methodology Committee that includes academics, practitioners, NGOs, and ESG analysts. We’ve aggregated public comments from socially responsible investors, non-profits, regulators, and the general public. From that expert and crowd-sourced wisdom, we’ve come up with state-of-the-art criteria.

What do you mean my score dropped? I did more than I did last year!
When a company does as much as, or more than, it did last year and still drops in the rankings, it gets very frustrated.

Source: Feedback from Russell 1000 companies.
 

It’s a relative ranking.
By nature, a ranking is a zero-sum game. If someone rises, someone else drops. And each year the competition gets tougher. Everyone in the "100 Best" has been improving every year, which means you can improve your performance... and still drop in the ranking.

The other thing that confuses people is ties. In the Olympics, if two people tie for first, they both get gold, and the next finisher gets the bronze, not the silver. In a field as large as the Russell 1000, it's not unusual for ties to include dozens of companies: 26 companies can tie for first, making the next rank 27th.
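To make the arithmetic concrete, here is a small illustration of that "Olympic" (standard competition) ranking. The company names and scores are invented; only the tie-handling matches what we do.

```python
# Illustrative only: standard competition ranking, where ties share a rank
# and the next rank skips ahead by the size of the tie.

def competition_rank(scores):
    """Map each name to its rank, highest score first."""
    ranks = {}
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    rank = 0
    prev_score = None
    for position, (name, score) in enumerate(ordered, start=1):
        if score != prev_score:  # a new, lower score starts a new rank...
            rank = position      # ...that skips past everyone tied above it
            prev_score = score
        ranks[name] = rank
    return ranks

# Three companies tie for first; the next one ranks 4th, not 2nd.
print(competition_rank({"Acme": 98, "Globex": 98, "Initech": 98, "Hooli": 91}))
# {'Acme': 1, 'Globex': 1, 'Initech': 1, 'Hooli': 4}
```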
 

My industry is so unusual.  How can you compare us with other industries? 
Some people feel the issues facing [industry x or y] are unusual enough that it’s impossible, or at least inappropriate, to compare across industries. 

Source: Stan Litow of IBM's blog, SustainAbility's Rate the Raters Report, and public comment.
 

The world needs some level of comparability.
We take this issue very seriously. So much so that we have begun releasing industry-specific lists. We started with a pilot program last year and will expand it this year. And we’re going in-depth in our benchmarking reports and other analysis to see what we can learn and offer to specific industries.

At the same time, just as you can compare the financial performance of any company with any other, the world needs to be able to compare corporate responsibility across industries. This list serves that purpose. And, as we said earlier, the list relies purely on objective data, without industry bias, making it one of the few sources people can use to compare companies across industries.
 

It's too complicated.
There are all these different criteria, categories, and weightings. There’s a lot of data and some algebra.  It’s impossible to decipher!

Source: Public comment, SustainAbility's Rate the Raters Report, and Russell 1000 company feedback.
 

Actually, no, you're over-thinking it.
True, it's a lot of data (324 data elements in the 12th annual listing, to be exact), and there are different categories and weightings, and there is some algebra. That part might seem complicated. But what to do about it isn't.  It’s startlingly simple to turn this list into action: 
•    Download the data elements and scoring.
•    Compare them against your own disclosure.
•    Disclose those things you don’t.
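As a sketch of those three steps, assume you have exported the published data elements and your own disclosures as one-column CSV files; the file names and layout here are hypothetical.

```python
# Illustrative only: a gap analysis of your disclosures against the
# published data elements, assuming simple one-column CSV exports.
import csv

def load_elements(path):
    """Read a one-column CSV of data-element names into a set."""
    with open(path, newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row}

list_elements = load_elements("100best_data_elements.csv")  # step 1: download
our_disclosures = load_elements("our_disclosures.csv")      # step 2: compare

gaps = sorted(list_elements - our_disclosures)              # step 3: what to disclose
print(f"{len(gaps)} elements we have not yet disclosed:")
for element in gaps:
    print(" -", element)
```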
 

So, that's the list of things we've come up with so far. We've already started thinking about next year's list and the list after that. We want your ideas, complaints, and criticisms so we can keep improving. If you haven't already, please submit a comment online, and register for the Commit!Forum, September 26-27, in New York City. We'll be holding another town hall meeting there to gather yet more feedback.

Finally, on a personal note, we want to take a moment to thank Joseph Wolfsberger of Eaton and Brian Ballou and Dan Heitger of Miami University of Ohio who chaired the Methodology Committee, along with Stan Litow and Reg Foster of IBM, Marcela Manubens of PVH, and Bob Pojasek of Harvard University—all of whom did yeoman’s work in advising us on this year’s methodology. And we could not do this without the valiant efforts of Mark Bateman and his team at IW Financial who gather the data for us.