The fashion for business metrics has led to many advances. But "balanced scorecards" have their faults. Can we improve on the system?
It seems incredible now, but it is only a few years since “metrics” was a new buzzword in business. In the career memories of managers in their 40s, measurement was restricted to largely operational matters, such as outstanding debt or work in progress. The application of metrics as a strategic management tool is relatively new, part of what Andy Neely called, in 1999, “The Performance Measurement Revolution” (1), in which measurement systems gradually shifted from being about compliance to being a means of challenging strategic assumptions and enabling organisational learning. Most revolutions play out their implications long after the first bullets are fired, however, and this one is no exception. As we study and work with firms trying to manage their performance, we observe performance measurement entering a new phase in which looking forward is at least as important as looking back. This article explores those observations and the lessons they might carry for practising managers.
A brief history of business metrics
Business metrics have their origins, as every MBA student is taught, in the work of Frederick Taylor (2). His 1911 book introduced the idea of “scientific” measurement that eventually led to the work of Deming, the total quality movement and, later, benchmarking. At the same time, accounting techniques grew to accommodate the control problems presented by larger, geographically spread conglomerates. These large organisations, a product mostly of the 1920s and 1930s, were too large and complex for owners and managers to understand with traditional “hands on” methods. The complementary operational and management accounting techniques flowed into each other to form what was, by the 1970s, the typical management information system. Covering relatively simple inputs and outputs such as costs, sales and profit, these systems were characterised by a pyramid of measures, resting on the positivistic assumption that each higher measure (for example, overall sales) was explainable in terms of its component parts (for instance, sales by product, sales by region, and so on). Wise managers recognised the limitations of such simple assumptions and used these tools as a guide. Less wise managers confused managing the numbers with managing the business, and many examples exist of great-looking numbers preceding a precipitous collapse. By the 1980s, however, this widespread approach to measuring the business was coming under increasing criticism for its three most obvious failings. Firstly, it was largely inward-looking and provided little information about customers and competitors. Secondly, it focused managers on reducing variances from plan, rather than actually improving absolute performance. Finally, it was becoming clear that selective concentration on short-term measurements could create good-looking numbers while driving the business into the ground.
These failings, combined with increasingly competitive markets and the enabling effect of IT, led to the revolution. Neely claims to have identified no fewer than 3,615 new articles on performance measurement between 1994 and 1996, evidence of a transformation in both academia and management practice. In truth, much of this work added little to the effectiveness of management systems, being either restatements of old ideas or echoes of the criticisms already made, but from this ferment of ideas emerged the one that has had most impact on managers’ day-to-day work: the balanced scorecard (3). Kaplan and Norton’s ideas sought to answer the criticisms of inward-looking, variance-obsessed short-termism with balance: the concept that multiple, compensating measures make it hard to damage long-term performance through short-term actions. In the words of Stephen Covey, the aim was to balance short-term production with long-term productive capacity.
Balanced scorecards were, and can be, an effective and positive response to the problems of traditional methods. Further, through ideas like Strategy Maps, Kaplan and Norton extended the idea into strategy implementation. They did this by shifting more of managers’ work from the unnoticed to the monitored, what Meyer, in his work on motivation and commitment (4), tellingly calls “discretionary” and “non-discretionary” work. It would be naïve to underestimate the continuing contribution made by Kaplan and Norton, and in our work we try to build on, rather than refute, their ideas and processes. But revolutions, to quote Will Rogers, are like cocktails; the first merely sets you up for the next one. In this respect, the performance management revolution is no different from any other and, in our work with firms trying to make strategy happen, we observe the beginnings of the next transformation in how firms measure what they do.
As academics, we are in the privileged position of being able to ask managers what isn’t working and, more importantly, to get straight answers. When we ask this of companies about their current measurement processes, several practical issues arise.
Firstly, many balanced scorecard systems demonstrate the failing of the management information systems they replaced in that they are mostly built around internally oriented lag indicators. Not only do such systems maintain the problems of their predecessors, they are in practice worse, because their new labels give a false authority to a still-inadequate system. In addition, they work less well than their predecessors because they must now cope with more turbulent markets and more intense competition. This backward- and inward-looking tendency of most scorecards is not, of course, what Kaplan and Norton intended. As described in our earlier work (5), it seems to be a “cultural hijacking” of their principles by older habits and processes that organisations, and especially finance directors, can trace back to legacy management information systems.
The second complaint about scorecards that emerges from our research is that the measures chosen do not fit the organisation. When we drill deeper, we find that many scorecards tend to be generic, in the sense that they use very similar measures for all businesses, irrespective of their goals, context or strategy. In such cases, there is nothing wrong with the measures per se; it is just that they don’t fit the situation. An example of this was a company in our work where measures were aligned around growth, despite the fact that the business was now a “cash cow”, intended to provide cash-flow for other, less mature businesses in the owners’ portfolio. Cases like this reminded us of E O Wilson, the famous sociobiologist, who, based on his work with ants, said of communism: “Great theory, wrong species.” Again, this isn’t what the originators of the balanced scorecard idea intended, but seems to be the unintended consequence of “mass production consultancy”, combined with firms’ instinct to seek the security blanket of “best practice”.
A third set of observations about the failings of the balanced scorecard relates to the use of “dashboards” that often have far too many dials for practical use. In such cases, the management time-cost of making and using the dashboard is added to the opportunity cost wasted when it doesn’t do its job. These costs together can far outweigh any benefits that arise from the dashboard. In our experience, these cumbersome dashboards can be blamed on “consensus” management cultures; they are often designed by committees who, valuing consensus above clarity, find it easy to add dials to the dashboard, but very hard to leave some off.
The history and failings of performance measurement systems are easy to criticise. What matters, of course, is that we use these criticisms to inform the design of a better approach, and that is what our research at Cranfield and the Open University Business Schools aims to do. In brief, the ideal system should meet four criteria:
- Incorporate the idea of balanced indicators.
- Combine lag indicators with lead indicators.
- Be adapted to the particular needs of the business, reflecting its context, goals and strategy.
- Identify the key measures at any one time, so as not to overload the management team with data.
Coincidentally, a good analogy for a system with these qualities has emerged in recent years, and many of us use it every day: the in-car satellite navigation system. It tells you where you are and what you need to do to get to your destination on time, allowing for all the major factors that affect your journey, in a succinct and clear manner. It is this idea of “satellite navigation for the business” that leading firms are developing and that we have been studying. Our work in this area suggests a three-stage journey towards building a metrics system that meets the ideal criteria.
Designing a sat nav system
The first step in building an effective sat nav system for your business is to make a paradigm shift from looking back to looking forward. While this sounds like a platitude, its practical implication is to focus on where future growth and competitive advantage will come from. The logic behind this is that the goal of a metrics system is not simply to measure how well your business is doing, nor even to extrapolate from current results. It is to accurately predict the probability that we will meet our goals and, if need be, tell us what we need to do to increase that probability. This may seem an academic point, but in fact it derives from observation of effective companies. By mapping out the antecedents and causes of what they want to achieve and understanding which factors most heavily influence future success, firms gain a better appreciation of what they need to measure. In the cases we have observed, this usually results in a shift of focus from current business to new markets or customers, and from “hygiene” factors such as quality to “differentiators” such as customer satisfaction and brand image. It is this understanding of what future success depends on that forms the basis of a truly balanced, forward-looking and manageable system.
The second step is to understand what this “forward looking” attitude implies for our choices of what we measure in practice. The almost universal goal of commercial companies and not-for-profits alike is to create value. As we have outlined in earlier work (6), value creation requires four interlocking leadership activities. To be useful, our sat nav system has to tell us how well we’re doing in each of these areas and, to meet our other criteria, do so in a way that is relevant to our firm’s situation. To do this, we have to be clear about what the fundamental questions for our business are. Table 1 gives an example, adapted from a real case. The important point, however, is that this list of fundamental questions will be unique to the situation faced by each business unit.
Table 1: Fundamental questions
| Value creating activity | Fundamental question | Example answer |
|---|---|---|
| Visioning | Where do we want to go? | To be global leader in our chosen market |
| Connecting | What do we need to do to get there? | To establish a strong presence in China |
| Aligning | How are we going to do this? | By creating a strong channel to market in China |
| Delivering | How well are we doing so far? | Key metrics indicating achievement of channel strength in China |
Having focused our thinking onto the most important factors of future success and elucidated the key value-driving activity required by those factors, the third step in creating a business sat nav system is to identify the questions that lie beneath each of those fundamental questions. At this point, we are truly tailoring the system to the very specific needs of the business and its current context, so examples become very specific. For instance, in our Table 1 example, the pyramid beneath the delivering question included measures of finding joint venture partners, agreeing marketing processes, establishing distribution density, and so on. Our key research finding from this part of our work was that effective firms used the four stages of value creation and what lay beneath them as a causal map to identify what things led, at a detailed level, to effective value creation. It was those indicators of future value creation that firms paid attention to. To the reader, this focus on what matters may seem like a statement of the obvious, but it contrasted strongly with what we observed in many other firms. In these less effective cases, the choice of what to measure was largely driven by what their IT system was able to tell them. One case study referred to this approach as the “drunk under the streetlight” syndrome, because a drunk looks for his dropped keys where he can see, rather than where he dropped them.
Steps 1-3 achieved some of the ideal requirements of an effective system of management measures: the system looked forward, not back, and it tailored the measures to the goals of the company, rather than producing a generic list of performance measures. Firms that reached this stage of development had all the information needed to manage strategy implementation. However, the full process of creating a pyramid of measures still leaves firms with a bewildering number of measures. The problem remains of how to focus managers’ attention onto the numbers that matter most. It was to this that we turned our attention next, uncovering three important lessons.
First, it was clear that, at any one time, a firm may be focused on one of the four stages shown in Figure 2, or perhaps on making the transition from one to another. It is very rare for a firm to be concerned with all four at any one time. This means that performance measures need not, and should not, be balanced across all four activities, but be deliberately focused onto the part of the process currently critical to your firm. In other words, we noticed that the four pyramids of measures were unevenly developed, appropriately reflecting the current focus of the business unit. In practice, this reduces the number of different metrics a firm needs by paying less attention to activity that is already completed, or that will not be critical until sometime in the future.
Secondly, effective leadership teams learned to demarcate their measures. They identified those few – usually fewer than ten – items in the pyramids of measures that were the current key drivers of success. Those metrics were put on the dashboard. The others were not ignored but delegated to the appropriate line manager. This contrasted sharply with less effective leadership teams, who insisted that every measure be brought to the attention of the top management team. As one of our exemplar firms said: “We learned that if you measure everything that moves, you end up measuring nothing that matters.”
Finally, effective firms regularly updated their dashboards to reflect the situation. For instance, a set of measures that was appropriate at the market entry stage (when initial trial levels are key) is less useful as the market matures (when brand loyalty is more important). We found that effective management systems regularly reviewed what the key drivers of value creation were and measured those, while other firms lost sight of the vital measures in a blizzard of other data.
We’ve summarised the steps to creating a sat nav system for your business above. In short, our research observed the emergence of a new generation of marketing metrics that are an evolution from simple figures, balanced scorecards and dashboards. The metaphor of a sat nav system is a good one to explain the concepts behind this. Your car may have a diagnostic computer that can tell the engineer thousands of things about the history of the vehicle, but that is not what you need to get to your destination. A few measures, chosen to reflect what drives value creation, appropriate to your business and updated over time, are all that most managers can make sense of and, in practice, all that they need.
Keith Ward is visiting professor of strategic finance at the Cranfield School of Management.
Brian D Smith is a visiting research fellow at the Open University Business School and runs a specialist strategy consultancy (www.pragmedic.com). He invites comments and questions to firstname.lastname@example.org. This article is based on the authors’ forthcoming book.
- Neely A (1999) “The Performance Measurement Revolution: Why Now and What Next”, International Journal of Operations and Production Management 19(2):205-28.
- Taylor F (1911) The Principles of Scientific Management, New York: Harper and Row.
- Kaplan RS, Norton DP (1992) “The Balanced Scorecard – Measures that Drive Performance”, Harvard Business Review 70(1):71-9.
- Meyer JP, Srinivas ES, Lal JB, Topolnytsky L (2007) “Employee Commitment and Support for Organisational Change: Test of the Three-Component Model in Two Cultures”, Journal of Occupational and Organisational Psychology 80(2):185-211.
- Smith BD (2005) Making Marketing Happen, Oxford: Elsevier.
- Ward KR, Bowman C, Kakabadse A (2007) Extraordinary Performance from Ordinary People: Value Creating Corporate Leadership, 1st ed, Oxford: Elsevier.
Authors: Brian D. Smith, Keith Ward
Source: European Business Forum (EBF)