Supply chain management disasters, especially of the technology sort, have clearly declined substantially in recent years - but certainly not gone away.
Case in point: the mess that occurred in recent months at athletic shoe and apparel retailer Finish Line.
In early January, the company announced disappointing results for its third quarter, with sales down 3.5%, and a loss of $21 million when far better financial performance was anticipated. The stock price fell 11% on the news.
The first line of the body of the company's Q3 earnings release read as follows: "'Our third quarter performance was severely impacted by a disruption in our supply chain following the implementation of our new warehouse and order management system,' said Glenn Lyon, Chairman and Chief Executive Officer of Finish Line."
Well, that's not exactly beating around the bush.
The system went live just this past September, and there were obviously major problems - more on that in a minute. Finish Line said the trouble it had filling on-line orders and replenishing stores cost it $32 million in lost sales, or roughly 8% of the company's revenue for the period.
Now as supply chain disruptions go this is on the mild side versus the worst of them, and I doubt it will make our soon-to-be-updated list of the greatest supply chain disasters of all time. That said, it was headline news in the Wall Street Journal, an 11% drop in stock price is a big deal, and CEO Lyon announced his departure simultaneously with the Q3 earnings release.
A few weeks later, chief supply chain officer Dan Marous became another casualty of the debacle, and I assume some other lesser heads rolled as well.
What the hell happened, and more importantly, what can the rest of us learn from the debacle? It isn't often these days a major company leads its earnings report by calling out a supply chain software failure.
I have been doing some digging on this, just to get the basic facts, speaking to two people with inside knowledge of what transpired. The new system was at an existing Finish Line DC, which used an existing "tilt tray" sorter for order processing. That facility had for years been running an old WMS from what was once EXE Technologies, but some time back, seeing an opportunity for operational improvement and the need to support omnichannel fulfillment, Finish Line selected a new software provider for WMS and Distributed Order Management (DOM).
In this mix, besides the software provider, were obviously Finish Line itself and an outside consultant. Because I do not think it is fair for either the software provider or the consultant to be overly tainted by this one problem project out of the many each works on every year, I am not going to cite names, but I will note that both parties are major, very well-known companies.
Of the two people I spoke with, one apportioned the "blame" this way: Finish Line: 50%; consultant: 40%; software provider: 10%.
The other apportioned it this way: Finish Line: 30%; consultant: 40%; software provider: 30%.
The interesting thing, however, is that while both give some blame to the software company, neither gave it the majority culpability.
Here is my take, focusing on "lessons learned" for the rest of us.
Let me say first that WMS is often darn hard to implement - and can be, if not done well, high risk. That's simply because if the system isn't working right, product isn't flowing out the door, and that means revenue isn't being realized. It's no surprise there were several WMS-related disasters in our all-time greatest list, such as Adidas's "warehouse meltdown" and drug wholesaler FoxMeyer's distribution disaster that led to its bankruptcy, both in the 1990s.
The story was the same here: the distribution system snafus led directly to lost sales - at least $32 million worth.
Let me further say that there is nothing quite like having major WMS troubles while simultaneously trying to ship major volumes. I have been there. It is easy to be simply overwhelmed, as I believe happened at Finish Line.
I wasn't there, so I don't know for sure, but from my two conversations melded with some experience in such matters, the contributing factors that led to the problems from each party are something like this:
Software provider: There were some glitches, especially in the joint processing between the DOM and WMS. Add in some delayed response in fully addressing the issues once the problems arose.
Consulting firm: Put simply, for the $250,000 or so it was billing monthly, too much high-level knowledge, not enough "hard expertise" in getting a complex system up and running. That firm obviously is also no longer engaged on the project.
Finish Line: Obviously underestimated the challenge of the project, especially the level of change management required in moving from the old system to the new. Not nearly enough training for associates, and not enough supervisors on the floor to help when ramping up the system. Too much trust in the software vendor and consultant.
Let me just say somewhat repetitively that trying to address processes and training challenges on the DC floor in parallel with some level of WMS bugs or set-up issues is an absolute nightmare. Separating the process/training issues from the software problems becomes an almost impossible task.
The most important collective failure was simply this: the decision was made to turn the system on, and ramp up to full scale, when obviously the people, process and technology were not ready to do so, as is conclusively demonstrated by the results.
The reaction of most supply chain practitioners is probably that there simply wasn't enough testing, right? More testing would have uncovered the looming problems.
Yes and no. At one level, it is hard to argue that there was enough testing, given what transpired after go-live. But my friend and occasional colleague Mark Fralick, one of the most important persons in the history of WMS and now running consulting firm GetUsROI, has a different take.
He thinks it is a big mistake to view "testing" as a distinct phase - usually of course at the very end - of the project. The right approach, Fralick argues, is one he calls "validation," in which the system is in effect tested operationally and technically every step of the way, not just at the end of the implementation process. And that validation has to clearly demonstrate that all aspects of a process work - the goal is to "prove" the results are acceptable across all dimensions, and you don't move forward absent that proof.
When approached this way, final testing becomes something of a formality, rather than a last step for which there is never enough time or resources, and which may reveal some unpleasant truths just days before scheduled go live.
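Fralick's "validation" idea can be pictured as a gate at every phase of the implementation rather than one big test at the end. Here is a minimal, purely illustrative sketch in Python - the phase names and checks are invented for this example, not taken from the Finish Line project:

```python
# Hypothetical phase-gate validation: each implementation phase carries its
# own acceptance checks, and the project does not advance until that phase
# has proven itself operationally and technically.

def validate_phase(name, checks):
    """Run each (label, check) pair; return True only if every check passes."""
    failures = [label for label, check in checks if not check()]
    if failures:
        print(f"{name}: BLOCKED by {failures}")
        return False
    print(f"{name}: validated, clear to proceed")
    return True

# Illustrative checks only - a real project would run these against the
# WMS/DOM test environment, not hard-coded results.
phases = [
    ("Receiving/putaway",   [("inventory matches host system", lambda: True)]),
    ("Store replenishment", [("pick faces restocked within cycle", lambda: True)]),
    ("Pick/pack/ship",      [("orders reconcile with DOM releases", lambda: True)]),
]

for name, checks in phases:
    if not validate_phase(name, checks):
        break  # do not move forward absent proof this phase works
```

The point of the structure, per Fralick's argument, is that the `break` - refusing to advance - is built in at every step, instead of discovering all the failures at once in a final test crunch.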
Some other lessons, it seems to me, for all of us from this saga:
Take WMS deployment challenges and revenue risks very seriously: That big picture risk can often get lost in the minutia of trying to get the system up and running. On-time and on-budget are important goals, but they can't become the primary drivers of the project. Operational success is the primary goal, and believe it or not that is sometimes forgotten in the "fog of war" that characterizes getting ready for a WMS go live.
Pick your consultants very carefully: Dr. Chris Gopal, a former consultant and now a fellow at the Drucker School of Management at Claremont University, introduced me to the concept of "hard versus soft consulting" a few years ago relative to global sourcing, and it applies in spades to WMS. Does your consulting partner have real expertise and experience in the nitty-gritty work of making an integrated WMS and automation system work? And remember, this means the people who will be working on your specific project, not the firm's overall experience.
You can test scalability - but many companies don't: Fralick and others use tools to simulate how the system will perform under the real volume of orders expected in the go-live environment. But many projects don't do such sophisticated testing - and assume that if the system works on the floor for a few hundred orders in acceptance testing, it will work equally well under the stress of tens of thousands of orders. Wrong - as most of the WMS disasters historically have proven, including the one at Finish Line.
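A toy back-of-the-envelope model makes the scalability point concrete. All numbers below are invented for illustration (they are not Finish Line's figures): per-order overhead that is invisible at acceptance-test volume can blow past available labor capacity at peak.

```python
# Toy capacity model: why a few hundred orders in acceptance testing proves
# little about peak volume. Every number here is hypothetical.

def labor_hours_needed(orders, seconds_per_order):
    """Total processing labor-hours for a day's order volume."""
    return orders * seconds_per_order / 3600.0

SECONDS_PER_ORDER = 45            # hypothetical: waving, sortation waits, pack-out
SHIFT_CAPACITY_HOURS = 20 * 16    # hypothetical: 20 workers x two 8-hour shifts

for label, orders in [("acceptance test", 300), ("peak day", 40_000)]:
    hours = labor_hours_needed(orders, SECONDS_PER_ORDER)
    verdict = "fits easily" if hours <= SHIFT_CAPACITY_HOURS else "OVERWHELMS the shift"
    print(f"{label}: {orders} orders -> {hours:.1f} labor-hours "
          f"(capacity {SHIFT_CAPACITY_HOURS}) -> {verdict}")
```

At 300 orders the requirement is under 4 labor-hours and everything looks fine; at 40,000 orders the same per-order cost implies roughly 500 labor-hours against 320 available, and the backlog compounds daily - which is why volume simulation, not just functional testing, matters.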
There is more, but I am out of space. Would love to hear your thoughts on this.
What's your take on the Finish Line distribution disaster? What lessons or takeaways relative to WMS and beyond do you see? Let us know your thoughts at the Feedback section below.
Your Comments/Feedback

David Schneider
President, David K. Schneider & Co.
Posted on: Mar 10, 2016

Nice report.
Testing, Timing, Training and Expectations.
Testing is not an event, it is a continuous part of the process. Test often. You can’t test too often. Test every day, perhaps every hour. You didn’t do a test in the past day? Shame on you.
When do you pull the trigger for a WMS upgrade? In January, after you have pressed all of the inventory out into the stores for the holidays. I would rather put a gun to my head than launch after July.
Training is not an event, it is a way of life. Train every day. Train so they can do it in their sleep. Train to where they can do it in their sleep and with their eyes closed. Don’t. Stop. Training.
Expect everything to go wrong, and have a plan to deal with it. Consider Murphy to be an optimist, and that everything will go wrong. If you assume that everything will FUBAR, then you can come up with the plans to deal with it when only 10% of what you expected to go wrong appears. Even when you get caught with the 100% that you did not expect going wrong, one of your plans for what you expected may just be the right answer.

Jim Hoover
Business Analyst, Supply Chain, Steinmart
Posted on: Mar 10, 2016

THANKS for sharing!!

Brent Ruth
Plan to Produce, IM/WM Team Lead, Caterpillar
Posted on: Mar 10, 2016

I personally have been involved in many WMS implementations and the key factors have been:
- Always #1 is the business engagement. They must be fully committed (the ham-and-eggs analogy) and not "wake me up when it's over."
- Having internal COE expertise who can translate the business requirements into "consultant speak."
- Having internal COE expertise who understand the capabilities of the new system and can translate that back to the impacted business to drive point #1.
Not having #1 means easily tripling the costs and doubling the time - heads will roll.
Not having #2 means wasting resources (time and money) and not getting full value out of the transformation.
Not having #3 means you are in real danger of not capitalizing on the full capabilities and efficiencies of the new system, jeopardizing ROI.

Mike Challman
VP, North American Operations, CLX Logistics, LLC
Posted on: Mar 10, 2016

Great piece - a scary cautionary tale. The lessons that are described apply equally to the implementation of a Transportation Management System (TMS).
One aspect of the disaster that isn't mentioned is the apparent lack of a contingency plan for quickly and safely returning to 'prior state' when it became evident that the new WMS was failing. Not always an easy thing to do, but when the go-live plan hinges on "failure is not an option" it can force the project team to continue pressing a bad position. Better to have a plan for bailing out (even if doing so still creates a bit of a disruption) and then getting reset.
I especially like the observation that testing is more than just a final checkpoint at the end of the project. Continuous and careful validation of the project on an ongoing basis throughout its life cycle, is the right strategy. And the three highlighted lessons at the end of the article are exactly on point, particularly the importance of stress testing.
Thanks for the thought-provoking (if also nightmare-inducing) article.

Mike Albert
NA, NA
Posted on: Mar 10, 2016

Just a couple of comments about the Finish Line article.
1. It doesn’t sound like Finish Line conducted proper milestone reviews with a “go / no-go” mentality. Making sure software implementations are complete and workable as modules are produced is essential. I agree with your mention of stress testing of individual modules along the way and for end-to-end completion.
2. While no one wants to plan for failure, Finish Line should have had an option to revert to the version of the WMS being replaced. It worked before and keeping the version ”alive” and working in parallel to the new version would have allowed for a smooth transition back to “known territory” and would have stopped the “bleeding”.
3. The project management process simply must have the attention of C-level management with candid progress along the way. Responsible CEO’s would value this process and accept the responsibility for ensuring everything was ready to go.
I saw that the Supply Chain guy was dismissed but nothing about the CIO who should have had an equal role in the success/failure of the project.
Lastly, companies that "go live" just before known surges in the business usually have failures. The pressure to go forward is high because the busy season "needs" the new system, and the volume surge immediately following the implementation makes it virtually impossible to keep abreast of issues that need to be resolved. For example, setting the "go live" for 3-6 months prior to the Q4 push allows for issue resolution when volumes are low and more manageable.

Nick Seiersen
NA, NA
Posted on: Mar 10, 2016

As always, you have a knack for finding the sensitive point – like an acupressure point.
System go-live testing. Damned if you do it, damned if you don't.
You rightly point out that a really robust test plan can really save your bacon before you “flip the switch.” The embarrassment of a delay is so much better than the debacle you describe above.
The systems implementation veterans have learned these lessons the hard way, and they will make sure the test plan is rigorous and complete. It would seem that none of the parties involved had one of them on the implementation team.
I learned this the hard way at a client, and now every new client will benefit from the experience of my very near miss. In my case, I had a veteran looking over my shoulder, and he kept me out of trouble. God bless Ray Healy and may he bask in the gratitude of the many younger staff he helped develop and grow!

Rich Marshall
NA, NA
Posted on: Mar 10, 2016

Finish Line most likely encountered the tendency to customize systems to accommodate familiar practices utilized within their operations. These practices are often considered “unique” to their business and are the result of legacy processes created to work around the shortfall in the capabilities of their obsolete systems. Folks working with these systems are comfortable with the processes and seek to re-create them in the new system rather than adopt what are considered standard or best practices in the industry on which the new WMS and DOM are based.
Strong operational leadership needs to be exercised during any WMS or DOM implementation to prevent or minimize customization by evaluating the current processes and making changes to fit accepted standard industry practices. Change is difficult for many who are wed to doing things the same way because that way is “unique” to the business. Leadership must educate the team on the need to change and persuade the majority to embrace it.

Tim Feemster
Managing Principal, Foremost Quality Logistics
Posted on: Mar 11, 2016

A WMS install is very hard to pull off without any issues. The level of planning, training and parallel testing prior to full implementation is critical. Testing is not at the end. Testing goes in phases: receiving, order management, inventory control, pick line replenishment, and pick/pack/ship. If you wait until the end to do all of this, you are potentially in trouble.
If you already know that receiving, order management, inventory control and pick line replenishment all work correctly and THEN you test pick/pack/ship, you increase your success potential since you can focus on a much smaller issue list than if you do it all at once where interactions can cloud the root causes. Interesting that the article and feedback did not mention project management.
A rigorous PM process should have brought many issues to light in real time, not well after the fact. I wonder how much of that was done? Many times, the focus on the "go live" date clouds the thinking, making the team way too optimistic about outcomes, and having a consultant that may not have had hands-on operational experience and a long list of implementations (new hires) doing most of the work would also have been noted in the PM process.

Tom Ryan
President, TRI Consulting
Posted on: Mar 11, 2016

I’ve been involved now in multiple expert witness situations – I call it “software implementation failure forensic pathology, why did this thing die and who killed it.” Interestingly, 80% of the time it is the customer’s fault and not the consultant nor the integrator nor the software vendor.
Some key points to consider:
Never trust it all to someone else. This is your business you are risking and not theirs. You are the one with the vested interest, you must be the overall manager of the program. Beware what you delegate to outsiders.
Never do a startup when it runs the risk of colliding with a big, high volume, business success event, e.g. Christmas rush. Delay till after the big event if you have to – use the old system, it worked last year – get through the event and then re-test and execute your go live. Go-live is an event that occurs when you are ready, not because of a date on a calendar. What would you rather experience, missed go-live date with project cost overrun or making a go-live date and then going bankrupt (e.g. FoxMeyer).
Don’t do a big bang. Figure out how to slice the project up into smaller events. This may require temporary integrations or manual activities as transition steps, staff for them, plan for them. For a WMS, you can typically slice it into 1, get it into the building, 2, get it out of the building, 3, then the rest of the stuff. Get it in equates to integrations with the PO/Procurement system of the ERP, the inventory system of the ERP, receiving in the WMS, and Putaway. Get it out equates to integrations to the order management functions of the ERP, the inventory system of the ERP, order execution planning, wave planning, pick planning, staging, and shipping. The other stuff is cycle counting, task interleaving, more sophisticated replenishment, dock planning, yard planning, labor management, etc. – even these things can be done in stand-alone pieces.
Overstaff startup. No one is at normal efficiency yet; they are still getting used to the new system. They will be slower, less efficient. Throughput will be down with the new system at first. Overstaff resources to execute the work, overstaff supervisory people to manage the work, and overstaff trainers to assist the workers and the managers. Learn from this and adjust the training to reflect what happened.
Software vendors can and should give you training materials on how to use their software. They can't give you training material on why and when to use their system. It is your business process. Your training material takes theirs, wraps it in your processes, and then it is usable in your environment (another project cost to plan for, staff for, and build into your project plan).

Anonymous
Title, Anonymous
Posted on: Mar 11, 2016

I was on the project as an independent consultant for 4 months. I saw this coming long back, in 2013. Go-live was pushed 4 times and delayed over 1.5 years. The basic problem is that associates were so protective and concerned about job security that they wouldn't even give system access to the people trying to help them run efficiently. The system was basically controlled by a set of people who had no idea about the WMS and DOM they were implementing; the consultants were very frustrated working with them, and obviously no one could stick around for more than 4-5 months.

Tom Dadmun
Retired VP, Supply Chain, Adtran
Posted on: Mar 12, 2016

As they say, the highway to success in Project Management is littered with failures due to underestimating the risk. Rule number one: Never, never turn on a new system during Q4 or your prime business quarter. Rule number two: Validate all processes at full scale. Sure, there are many "rules" you could point to, but these two are key. Add to this the need to benchmark the solution provider's successes and failures.
One might say it is hard to find this out – it is not. Due diligence is an expertise that requires background checks on the solution provider’s customers and a review of the good, the bad and the ugly. And if they say they have no ugly they are not truthful. Not all implementations go well. Some due to the solution provider, some due to the customer being ill prepared to take on a major project. Lastly, with a project of this magnitude, trial runs and simulations of the full blown system should be presented to the CEO and staff before the system is green lighted!!

David K. Schneider
President, David K. Schneider & Associates
Posted on: Mar 12, 2016

Looking at the news from yesterday - the same day you broke this article - several analysts dropped FINL to Sell or Hold. Some raised their rating just last week, only to drop it again this week.
The moral of the story: Mess up your Supply Chain Systems and you mess up your Earnings, and your Market Value. Risk is Risk.

John Sidell
Managing Principal, SCApath LLC
Posted on: Mar 14, 2016

Great article Dan! Have always appreciated the work and expertise of you and your staff at Supply Chain Digest.
Having been through a few hundred WMS deployments over the years, I've developed a list of "lessons learned," and many are reflected in your article on Finish Line. One that applies to this project is that WMS implementations are very granular, detail-oriented implementations - remember, "detail is your friend" on these projects. A successful WMS deployment also requires strong leadership to execute. These two factors play directly into the quote from Dr. Gopal regarding "hard vs. soft consulting".
Keep up the great work!

Don Benson
Partner, Warehouse Coach
Posted on: Jun 20, 2016

Everyone has a perspective on what was wrong with these projects, and probably all contributed and have some part of the truth about the contribution of others. The common elements for many of these failures seem to be that:
1. They were led and managed by someone who had not been responsible for a similar project before;
2. The vendor and consultant were paid, so the departure of these two critical parties signaled that the project was perceived as complete and it was time to move on;
3. The distribution industry does not seem to learn lessons from business system (ERP) history, which also continues to have significant failures. Failures are not well defined or documented in industry publications (perhaps a function of the source of revenue), and equally not well documented in academic research projects, which tend to limit recommendations to the benefits of following the lead author's academic discipline;
4. We imagine, develop requirements (business and operations?), budget (revenue and cost?), adjust budget, select vendors, plan and implement a WMS as mechanical, linear projects with the expectation of nominal variation in stakeholder demand over time, because that is the way it is always done.
A Warehouse Management System is the most complex element in a distribution operation. There are many more projects that do not achieve their desired outcomes and never get reported.