With the levels of metrics and KPIs most companies have today, which sometimes almost boggle my mind, how is it that supply chains ever really get out of whack? I have some ideas you may find interesting.
A few years ago, "Supply Chain Performance Management" was one of our top 10 supply chain megatrends, and with good reason. However, performance management isn't going to make our new Megatrends list (coming soon), because the practice of robust performance management now seems to me to be almost ubiquitous.
In fact, I often hear various pundits say the problem in many organizations is that there are too many metrics. Meaning, by trying to measure everything, you lose sight of those few that really matter.
I agree there are probably cases where the "too many metrics" syndrome is real, but I am not so sure that having detailed, interlinked metrics up and down the supply chain isn't a good thing. Below are just a few impressive examples of what companies are doing with supply chain metrics that I think illustrate where a growing number of companies are today:
A year or so ago I saw a presentation from a Sara Lee supply chain manager that was nominally on its Sales & Operations Planning process - which seemed excellent enough - but what blew me away even more was the goal and measurement process behind it. It was really more of an "integrated business planning" type of approach versus traditional S&OP, and all the key financial and operational goals were cascaded down multiple levels, with direct linkages back to the top - "line of sight metrics," as it is called. Truly integrated performance management, assuming they walked the talk.
I spoke with a supply chain executive at network equipment giant CISCO around the same time, and it is just amazing how much it measures across its supply chain. It would take several columns to come close to doing justice to how it has instrumented its network, but to cite a quick example, CISCO has developed a number of strategies and tactics to enhance its supply chain flexibility (e.g., a "range forecasting and planning" process by SBU, product family, and individual SKU), and it then creates detailed reporting on how well those strategies are working - and being used.
At sort of the other end of the spectrum, but illustrating the same basic theme, Procter & Gamble closely tracks what it calls something like "Cases not shipped because of warehouse." That refers to situations where cases that were actually in the DC are shorted on an order or shipment. When that happens, lots of alarm bells start going off, and the warehouse manager gets really nervous. If it happens more than a few times, he or she can expect a visit from the experts in Cincinnati.
I hear these sorts of metric stories over and over again now - far different than even five years ago. Virtually every supply chain-related case study now includes a detailed "what were the metrics" component. Part of all this also comes from increasingly capable and integrated IT systems.
So my point is: how, with this level of reporting, can companies possibly go off the supply chain rails? How can there continue to be such wide gaps between the performance leaders, the average, and the laggards, as most recent studies have found there to be?
I mean, I look at what Sara Lee is doing, and I just don't see how it could get very far off course for very long. It seems to be measuring everything, directly tied to its overall goals and objectives. I suppose we could propose that it and other companies don't take those metrics and goals seriously, letting failures to hit the goals simply slide, but I just can't believe that is true in the vast majority of cases today, as it might have been in the past.
So, I am left with two choices:
1. Too many companies are being rigorous around the wrong metrics. They are hitting their metrics, but they are the wrong measures.
2. The right metrics are largely in place. But the targets are set at the wrong levels.
My friend Jim Tompkins of Tompkins International has some interesting things to say about number 1, but I am going to hold that for a future column. I will say, though, that his thoughts and mine on what the right metrics to use should be in the end closely relate to point number 2: too many companies must simply be setting their targets too low.
Who, if anyone, talks to a company in this era that says it is consistently missing its supply chain targets? How many quarters do you last today as VP of Supply Chain or a functional manager if that is the case? About two, from what I see. Three quarters if you are really lucky.
So, that would seem to say that companies are not missing their metrics by much or for long. That leaves the logical conclusion that the difference in supply chain performance between similar companies is that they have set different targets, some of them way too low.
I will acknowledge that some difference in targets, and thus in results, could - and often should - be related to different value props, such as whether the company's main differentiation is based on product innovation, cost, or service, using the famous and still widely used framework promoted more than a decade ago in the book "The Discipline of Market Leaders," by Michael Treacy and Fred Wiersema.
If service is your thing, then obviously metrics like fill rates, cycle times, etc. would get more focus than cost-related metrics (yes, we could debate whether this holds up in practice).
More recently, the concept of supply chain segmentation has re-emerged, and one of the fundamental principles of this strategy is that the supply chain metrics that matter will be different for the different segments being served with differentiated supply chain strategies and service policies.
I must admit that in examples such as Sara Lee, CISCO, and others, I didn't hear any discussion of segmented metrics, or of how their main overall value prop impacted metric targets.
So in general, I am sticking with my conclusion that the main driver of the differences in supply chain performance between companies is likely that the leaders set much higher expectations than the middle of the pack and the laggards. The driver of the performance difference isn't the wrong metrics, or the inability to hit the targets, but that the targets are simply set too low.
After all, how do you really know what the targets should be? More on that, and Dr. Tompkins' interesting framework, in a couple of weeks.
What are your thoughts on the relationship between metrics, reporting, and performance? Do you agree with Gilmore's conclusion that the key difference in performance between similar companies may often be that the targets they set are very different? Let us know your thoughts at the Feedback button below.