A case study on the impact of index methodology on corporate reputation management

Presentation to the 6th International Conference on Corporate Reputation, Identity and Competitiveness

May 2002

Lelde McCoy, Deputy Chairman, Porter Novelli Australia, and Noel Turnbull, Chairman, Porter Novelli Australia and Adjunct Professor at RMIT University.

The background to this paper is the publication in 2000 of The Good Reputation Index in Australia, the first reputation rankings system in Australia to be sponsored by a media group.

The methodology for The Good Reputation Index was quite distinctive so we thought that an analysis of it and its reception might provide insights into reputation measurement methodology and rankings systems in general.

It is relevant to mention that while the Reputation Quotient was also first published in Australia in 2000, it was launched in a lower-profile way, so we were able to look at The Good Reputation Index's reception as a brand-new phenomenon.

Today, we will talk briefly about:

  • The role and uses of corporate reputation ranking systems;
  • The structure of The Good Reputation Index itself;
  • The response to the Index;
  • The results of a survey we undertook with companies who were ranked in the Index; and
  • Conclusions about the Index and rankings generally.

The aim of the case study was to examine the impact of a reputation ranking system on reputation measurement and management in Australia’s leading companies – to look at whether The Good Reputation Index was meaningful to the companies that were ranked in it.

While this case study is concerned with media-sponsored research, it is relevant to point out that there are also other reputation rankings that highlight special interests, such as the treatment of ethnic minorities or workplace quality, as well as rankings disseminated by social monitors, which rate companies on narrower issues.

[Table 1: Selected corporate reputation ranking systems and their sponsors]

The ratings in this table represent only a portion of those in existence; for instance, the Finnish and Danish rating systems are not included.

It’s interesting to note that in most cases different constituents judge companies in these rankings, and they are often the people most closely tied to the sponsoring media outlet. As an example, in The Financial Times’ most admired companies survey, companies are judged by business leaders, because the Financial Times’ readership is business leaders who value the opinion of other leaders. In Australia’s Good Reputation Index, companies are judged by a wide range of business and community groups, because the sponsoring newspapers position themselves as the voice of the community.

Given this growth in rankings, and the discussion that rankings have precipitated among both researchers and practitioners, we thought it was worthwhile to ask the question “what role do corporate reputation rankings play, and can they assist an organisation to improve its corporate reputation?”

In particular, we wanted our case study to test two hypotheses:

  • First, that methodology has a significant impact on the reception and credibility of a corporate reputation ranking system; and
  • Second, that corporate reputation rankings have a limited use as part of reputation management and triple bottom line accountability.

As an introduction to our case study we reviewed the literature on the weaknesses and benefits of ranking systems. The list of weaknesses was long and has been the topic of wide discussion since 1994, when Fryxell and Wang published their research on Fortune magazine’s rankings.

First, there are limitations to a numerical value approach.  The question has been raised as to whether it is meaningful to evaluate an entire organisation with one score and distill complicated matters to a single sound bite that can be easily digested.

Are these surveys simply another manifestation of the cult of celebrity which now seems to embrace CEOs and companies as well as pop stars and sports stars? Is reputation really a “listable” thing like cars or schools?

There’s also the issue of whether the differences between ranked positions are statistically significant, with placement on a scale tending to exaggerate those differences.
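As a purely hypothetical illustration – the mean scores, spreads and panel sizes below are invented, not drawn from any published index – the following sketch shows how two companies several rank positions apart can have scores that are statistically indistinguishable.

```python
# Hypothetical illustration only: the mean scores, standard deviations
# and panel sizes below are invented, not drawn from any published index.
import math

company_a = {"mean": 7.2, "sd": 1.1, "n": 25}  # ranked, say, 40th
company_b = {"mean": 7.0, "sd": 1.2, "n": 25}  # ranked, say, 46th

# Standard error of the difference between the two mean scores,
# and the corresponding z statistic for a two-sample comparison.
se_diff = math.sqrt(company_a["sd"] ** 2 / company_a["n"]
                    + company_b["sd"] ** 2 / company_b["n"])
z = (company_a["mean"] - company_b["mean"]) / se_diff

print(f"score gap = {company_a['mean'] - company_b['mean']:.2f}, z = {z:.2f}")
# z is roughly 0.6, far short of the 1.96 required for significance at
# the 95% level, yet the published table would show a six-place gap.
```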

The proliferation of methodologies for rankings has raised the issue of whether there should be a global standard, and it is encouraging to see, in the recent edition of the Corporate Reputation Review, the progress being made across continents with the Reputation Quotient. With so many rankings, the criteria and methods used differ and are unstable, with formulas often changing.

Another criticism has been that there is bias in the companies and respondents surveyed. The surveys focus primarily on large public companies, and surveyors often have no first-hand experience of the company, basing their judgements on limited information rather than extensive research. The diffuse criteria used are also not systematically articulated.

Often opinions are not differentiated between different stakeholder groups and there is no distinction made between corporate image and corporate reputation.

All these weaknesses have led researchers to conclude that rankings are popularity contests rather than meaningful assessments of quality. They primarily indicate financial appreciation and have limited value as a diagnostic tool for a business. They also have a tendency to become self-fulfilling through the “halo” effect and are motivated by self-interest.

Conversely, the strengths of ranking systems are:

  • They provide a visible way of building awareness of the importance of reputation;
  • They encourage corporate transparency;
  • They provide a point of reference for creating a sustainable corporate story;
  • They offer a platform for improving stakeholder expectations;
  • They give managers levers to pull to avoid developing ambiguous reputations; and
  • They provide a benchmark over time.

The Good Reputation Index in Australia, the subject of our case study, needs to be considered within this context.

The Good Reputation Index (GRI) is operated by a private company called Reputation Measurement and appears each year in two of our leading newspapers, The Age and The Sydney Morning Herald. It aims to rank Australia’s top 100 companies (by revenue) according to external stakeholder views.

However, it does not use traditional market research methods; instead, the Index organiser invited 18 expert groups with a direct interest in the reputation categories to be involved in ranking the companies. The groups do this for free and are allowed to keep the data they collect. For each area that is ranked, there are two to five organisations involved in the ranking. The groups range from the Australian Council of Trade Unions, Greenpeace and Amnesty International to the Institute of Chartered Accountants and the Australian Shareholders Association, the latter judging financial performance.

The groups are each allowed to identify four criteria on which to judge the performance of Australia’s top 100 companies in the category that they are responsible for judging.

All use different methods of data collection: some survey the companies, others rely only on publicly available information. The research groups assign a score for each company against their criteria. The data collected is then submitted to Reputation Measurement, which prepares a comparative scorecard for each individual company across each of the indicator areas. From this, results are aggregated to calculate the final index. The ranking process is now undertaken by an independent third-party statistician.
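The exact scoring and aggregation formula has not been published. Purely as an illustration of the process just described, the minimal sketch below assumes per-category normalisation and an unweighted mean across categories; the category names, scales and scores are our own inventions, not GRI data.

```python
# A minimal sketch of the aggregation process described above. The actual
# GRI formula is not public: the normalisation step, the equal weighting
# of categories and all names and numbers here are our own assumptions.
from statistics import mean

# Raw scores per expert group, keyed as {category: {company: score}}.
# Different groups use different scales, so scores are normalised
# within each category before aggregation.
raw_scores = {
    "environment": {"CompanyA": 62, "CompanyB": 80},    # e.g. a 0-100 scale
    "employees":   {"CompanyA": 3.8, "CompanyB": 3.1},  # e.g. a 1-5 scale
    "financial":   {"CompanyA": 7.5, "CompanyB": 6.9},  # e.g. a 0-10 scale
}

def normalise(scores):
    """Rescale one category's scores to the 0-1 range so that
    categories judged on different scales become comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    return {company: (s - lo) / (hi - lo) if hi > lo else 0.5
            for company, s in scores.items()}

normalised = {cat: normalise(s) for cat, s in raw_scores.items()}

# Final index: unweighted mean of a company's normalised category
# scores, then rank in descending order.
companies = {c for s in raw_scores.values() for c in s}
index = {c: mean(normalised[cat][c] for cat in normalised) for c in companies}
ranking = sorted(index, key=index.get, reverse=True)
print(ranking, index)
```

Even in this toy version, the choice of normalisation and the decision to weight categories equally can reorder the final ranking – precisely the kind of methodological sensitivity that, as we discuss below, critics of the Index seized upon.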

The categories are fairly typical:

  • Management of employees;
  • Environmental performance;
  • Social impact;
  • Management, ethics and governance;
  • Financial performance; and
  • Market position.

It is relevant to point out that some of the expert groups who participated in the first survey withdrew in the second round, or the data that they collected was not included. For instance, one of the authors of this paper, Lelde McCoy, chaired the Public Relations Institute of Australia’s working group, which collected data from companies on the quality of their communications in reputation management and on their attitudes and practices. In the end we did not submit our findings, owing to complaints from some of our members about the Index methodology and to differences of opinion with the survey founder over the scoring treatment of companies that did not wish to participate.

Given this background we decided to undertake a survey to ascertain how companies regarded the Index and its value for their reputation measurement and management strategies. We also reviewed its media and corporate reception.

Respondents to our survey were senior corporate affairs and corporate communications managers from companies which either participated, participated with reservations or did not participate. It was a non-random survey representing 10 percent of Australia’s top 100 companies.

Our questionnaire examined:

  • Their reasons for participation or non participation;
  • The place of the Index in their reputation management and measurement strategies;
  • The reactions of the companies’ stakeholders to the findings and any actions taken by the companies as a result of the findings;
  • Other reputation measures used by the companies;
  • Their suggestions for enhancing the methodology; and
  • Any triple bottom line measures used and how they were measured.

In looking at the Good Reputation Index’s reception we reviewed:

  • The media coverage it received;
  • The response of a group which is arguably most representative of Australian corporate and public affairs managers; and
  • Formal responses to the Good Reputation Index by senior practitioners.

Media Reception

In general the media response to The Good Reputation Index was:

  1. Extensive coverage of the results in the sponsoring media;
  2. Brief news stories, mainly in business media or the business sections of print and electronic media, which focused on the “winners” and “losers” – a vivid illustration of the point about the cult of celebrity; and
  3. Major feature articles in which the Index was the focus for broader discussion of either methodology or the legitimacy of interest groups making judgements on reputation.

Three articles could be included in this third category:

  • An opinion piece in The IPA Review, a right-wing think-tank publication;
  • A feature by a respected senior journalist in a news magazine; and
  • An opinion piece in a mass circulation tabloid.

The IPA Review article, by former MP Gary Johns, described the Index as “opinion dressed up as fact” and focussed its criticism on:

  • A classic Friedmanite assertion that the purpose of companies is to meet their fiduciary and legal obligations, and that the Index’s measurement of reputation was therefore not relevant to actual performance;
  • An accusation that some of the interest/advocacy groups involved in the rating allegedly had conflicts of interest because they undertook consulting work for companies they rated;
  • The assertion that some of the advocacy groups involved in rating were actually expressing their ideological viewpoint rather than assessing performance; and
  • The argument that more tangible and objective measurement criteria (eg consumer complaints, employee compensation, product claim fulfilment etc) would have been more appropriate.

The tabloid feature, by the populist journalist Andrew Bolt, saw the survey as a manifestation of the threat of political correctness from a “cultural elite”. The article also made reference to consulting arrangements and adversarial relationships between assessors and assessed, and summarised the exercise as a “new threat to our democracy”.

The third article, by the respected journalist Fred Brenchley, was based on extensive correspondence between him and Reputation Measurement, as well as details of Australian Centre for Corporate Public Affairs meetings which discussed The Good Reputation Index.

Brenchley covered similar ground to the other two pieces, albeit in a more considered format, raising questions about:

  • The non-standard methodology, varying from one survey panel to another, which permitted bias and differing scores;
  • Conflict of interest by assessing organisations;
  • The problem of assigning mean scores to non-participants;
  • The weighting of opinion versus objective measurement;
  • Changes in the assessing panel composition; and
  • The extent of changes and improvements from the 2000 survey to the 2001 survey.

Interestingly, Brenchley was the only one of the three to emphasise that one of the reasons for the controversy was that reputations were “bankable assets”.

Corporate Reception

One of the factors which has made our analysis somewhat easier is that participants in the debate have been very willing to circulate their comments widely by email to all the other participants.

One of the key sources of material for the debate was the notes from a meeting at the Australian Centre for Corporate Public Affairs.

The Centre held a teleconference of 37 senior corporate practitioners to discuss The Good Reputation Index.

The summary of this discussion, according to Allen, was “the major companies were not against reputation indices per se, but wanted to see them properly based, fair and accountable and not heavily weighted to the views of issues activists”.

This meeting’s concerns about The Good Reputation Index were:

First, the sheer volume and scope of each survey were onerous and overlapping, and the surveys requested information not reported elsewhere inside or outside companies.

Second, the methodology was non-standard leading to the potential for undue weighting of reviewers and industry specific bias.

Third, suggestions of conflicts of interest because assessors:

– provided consultancy services to the companies they assessed; and

– conducted advocacy campaigns against individual companies or whole industries.

Fourth, an alleged lack of confidentiality and privacy relating to responses and the use of the material for other purposes.

The Business Council of Australia, the peak organisation for the top 100 companies in Australia, also intervened in the debate with a letter from its President, John Schubert, to the newspaper group which published the GRI.

The letter made similar criticisms to the Centre’s, although it added that “there are already signs that some companies are seeking to ‘manage to the biases of reviewers’”.

We thought it would be fair to the Index founder if we also included in this case study a summary of her response to the initial criticism of the Index. She widely circulated a letter in which she said that:

  • The use of opinion is legitimate because the rankings process is grounded in contemporary theory that corporate reputation is the point of convergence for a number of different viewpoints about an organisation;
  • That the level of information sought was reasonable as it provided an opportunity to better inform stakeholders;
  • That adversarial relations and consultancies were dealt with through transparency and an independent process for the Index assessment;
  • That assessors could have opinions about companies regardless of whether or not the companies chose to participate in the Index surveys;
  • That the allocation of mean scores where groups responded with “not knowns” meant that companies were not adversely affected (a point we illustrate in the sketch after this list); and
  • That the Index was subject to review and enhancement to make it more effective.
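Whether mean-score allocation is in fact neutral depends on where a company would otherwise have scored. The toy example below uses entirely invented numbers – the GRI’s actual treatment is known to us only in the general terms of the founder’s letter – to illustrate why participants and critics disagreed on this point.

```python
# Hypothetical illustration of mean-score imputation for a non-participant.
# All numbers are invented; "CompanyA" to "CompanyD" are placeholders.
scores = {"CompanyA": 8.5, "CompanyB": 7.0, "CompanyC": 5.5}

# CompanyD declines to participate and is assigned the panel mean.
panel_mean = sum(scores.values()) / len(scores)  # (8.5 + 7.0 + 5.5) / 3 = 7.0
scores["CompanyD"] = panel_mean

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking, panel_mean)
# CompanyD lands mid-field at 7.0. That is neutral only if CompanyD would
# in fact have scored about 7.0: a company that would have scored 8+ is
# pulled down the table, one that would have scored below 7.0 is lifted.
```

On these assumptions, imputation is neutral only for a company that would have scored near the panel mean; a strong performer is pulled down and a weak one lifted, which is one possible reading of the experience of the survey respondent, reported below, whose rating fell in the year they declined to participate.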

The Survey

We believe there is some quite rich qualitative data in the responses received to the survey, and these will be explored – with suggestions for further research – when we publish our paper; today we simply highlight some points which emerged.

The sample included three companies which participated without reservation; six which participated with reservations (one of which participated in 2000 but not in 2001); and one which did not participate.

The respondent who participated in one year but not the next said that they had done so partly to see what impact non-participation had on their rating. The rating fell significantly although the respondent claimed not to have had any significant change in performance or activities.

This respondent also alleged that the Index organisers had contacted the company on the day of the survey publication offering consulting services to help improve the rating. This was another reason for non-participation in 2001.

Only one of the respondents had no reservations about the GRI. Among those with reservations, the most frequently cited was the methodology, followed by the workload involved in responding and the ideological position of the organisations doing the rating.

There was no clear overwhelming reason for participation with responses ranging from “felt obligated” to “useful for internal discipline.” As one respondent said: “damned if we do and damned if we don’t”.

In terms of the importance of the rating system to respondents’ reputation management activities, two saw it as important while eight said it was of no importance.

Asked how highly they rated the GRI itself, three said it was important, with most of the rest responding either that its importance was due mainly to its publication vehicle or that it was not important at all.

Another respondent said: “it has no status but we spend a bit of time on it. This is because it would have status if, for instance, we came last”.

Seven of the respondents said the GRI had no place in their reputation management and measurement strategy, and six said they had taken no action as a result of it. One said they had done a gap analysis, one said they had complained to the newspaper proprietor, and one said they had reviewed how they put their information together with a view to improving their rating.

The respondent who said they undertook a gap analysis said: “Where appropriate we subsequently examine means of closing the gap or of (our emphasis) improving our evidential material to better represent our performance”.

In terms of stakeholder reaction, six respondents said they had had none at all, two stated that the results had prompted Board questions, and one said they had had positive shareholder, stakeholder and customer feedback.

When asked what other reputation measurements were used the respondents indicated that their own internally-devised survey systems were the most common followed by commercial corporate confidence indices and media analysis.

The main reasons for choosing the measurement systems adopted were effectiveness and cost-effectiveness.

On the question of improving the GRI methodology, the most frequent response (six) was to suggest a change in the composition of the groups doing the assessment. One respondent thought the methodology was satisfactory, one thought it was improving, and three suggested that the GRI methodology designers should “start again”.

With triple bottom line performance the findings were:

  • About a third of respondents had no specific triple bottom line performance measurement;
  • A third used a sustainability based approach; and
  • The remainder related measurement to their business plan objectives or to specific environmental performance measurement.

Seven of the respondents indicated that they, and industry in general, needed more work and reflection to develop the most appropriate triple bottom line measurement.

In summary, the majority of respondents had reservations about The Good Reputation Index methodology, received no feedback from stakeholders and took no action on the results.

Discussion

At this stage our conclusions are still tentative and will be finalised when we publish our paper. However, there are a number of points which can be made now:

First, we are dealing with the eternal problem of whether the assignment of numerical values to non-numerical quantities or qualities is fruitful. This problem is particularly vexed when financial rewards are associated with the scale – whether they be bonuses or “bankable assets” such as reputation.

Second, it is clear that in the case of The Good Reputation Index the methodology did have a significant impact on the reception and credibility of the rating system.

Third, in modern economies stakeholder management must necessarily address the sheer complexity of relationships with stakeholders. These multi-dimensional relations will inevitably be reflected in stakeholder assessments of organisations. It is, therefore, the methodology rather than the concept which is important here, even though a number of Australian corporate affairs managers appear to doubt the legitimacy of any assessment by stakeholders who are in opposition to their industries or companies.

Fourth, it is axiomatic that the more we can make any reputation management methodology standard, bias-free and consistent in scoring, the more robust our results will be. However, an agreed theoretical framework is obviously essential to such standardisation.

Fifth, however, there is a fundamental question about rating systems. Our research indicates that, in the case of the GRI at least, the rating system was neither considered important to reputation management strategies nor used as the basis for subsequent reputation management activity.

This research has been replicated on a larger scale by Echo, the UK-based communication research group.

In a survey of 170 CEOs in Europe, South Africa and the US, Echo profiled attitudes to rating systems.

While the CEOs were obviously concerned about the impact of rating results, none of them considered the ratings important to reputation management strategies or activities.

In other words, both Echo’s research and ours found that companies care about their ranking but don’t believe the ranking system is a guide to action or strategy on reputation management.

These findings raise fundamental questions about the actual worth of rating systems.

Indeed, where respondents to our survey said action was taken, it tended to be around improving the presentation of responses to the GRI to enhance ratings, rather than more fundamental approaches to corporate reputation.

We believe this finding raises two broader issues.

First, is the effort going into rating systems actually enhancing corporate reputation management performance, or is it merely producing a form of popularity rating?

The second issue is illustrated by the experience of university administrators with so-called “good” university guides and indices.

In this case many administrators came to view the guides as popularity contests and are moving to provide more data to the public and stakeholders on which they can make their own judgements.

We believe that the next step in the area of corporate reputation could be to explore means by which this process could be further adopted by companies.

Thus, just as we are moving towards more consistent global presentation of financial information, so there could be more consistent – and perhaps higher – standards of disclosure (such as the mandatory disclosure in Europe and Australia) on matters on which the public and stakeholders can make judgements about an organisation’s social responsibility and performance.

Given that we have trouble reconciling US and rest-of-the-world accounting standards – and that there is currently considerable dispute about the robustness of even international credit rating systems – one may not be overly optimistic about rapid progress in this area.

Nevertheless, such an approach would empower much wider groups in the community to make informed judgements about corporate triple bottom line performance.

Finally, and most importantly, perhaps we need to ask ourselves some further questions about the relevance and value of rating systems.

If they become primarily a mediating agency, or simply another contributor to the culture of celebrity and sound bites, would it not be more “corporately responsible”, so to speak, to shift more of our energies into areas such as transparency, understanding the determinants of reputation, and the empowerment of stakeholders to help them make their own judgements?