The following was first posted on Harvard Business Review.
If you want to improve how your organization develops, delivers, and supports its products or services, it’s hard to avoid changes to the information systems that enable those processes. Yet I often see organizations that try. Why? Some, having suffered delays and high expenses when changing information technology (IT), settle for a less efficient manual workaround. Others believe they should first streamline a broken process and only then change the information systems, if that is still necessary.
IT can make business processes much more efficient and lock in process improvements. Yet IT can also limit speed and flexibility. Companies like CSX, Morningstar, and IBM show how IT can be more friend than foe to process changes.
To highlight these issues, let’s consider a large multinational energy company that has spent billions of dollars over the last six years to implement global standard processes across more than 100 countries in its “downstream” business (refining and retail). The company has built its standard processes for making and fulfilling customer orders on standard information systems. These standard systems forced business units around the world to adhere to new policies and “best practices.” The new system also provides top management with operational performance data across countries, such as the percentage of “perfect orders” — those delivered on time, in the correct quantity, and with a correct invoice.
But this new IT platform could easily put the brakes on the company’s next round of process improvements. Executives must now reconcile global requirements from over 100 countries into a single software system, where previously each country had its own. The IT organization must then roll out releases and support a consistent global platform.
How can IT organizations be responsive when their companies need to improve business processes to answer customer needs and seize market opportunities? And how can organizations do this over the long haul, not just for a moment in time?
I see three ways an IT organization can be highly responsive to managers who must make process changes:
1. Have a highly collaborative working relationship with process improvement leaders, and adopt their techniques. At CSX, the $11 billion railroad, the Operations Process Excellence group has a collaborative relationship with the IT group. Assistant Vice President John Murphy told me that CSX’s process improvement leaders have found that when processes are documented and “lean techniques” are applied — the voice of the customer, demand management, just-in-time, level scheduling, small lot sizes, mixed-model production, and cross-training — the IT organization can implement technology changes faster.
2. Prioritize IT resources to be able to make process changes quickly. Anu George, chief quality officer at Morningstar, a leading provider of independent investment research, explained this to me. “Technology is integral to our business. Our operations, quality, and technology teams work very closely with each other to drive process improvements that enhance the overall customer experience. For instance, we recently developed a customized interface to improve the quality of our investment criteria write-ups, which took about two to three months. Barring huge system overhauls or migrations, our IT developments happen fairly quickly.”
Morningstar isn’t typical, though, in having adequate IT resources to make system changes quickly. Most IT organizations are overrun daily with excess demand for their services. And many face repeated urgent demands that create a culture of firefighting and leave only 10% of their resources for process improvement initiatives.
To free up more resources for process improvement, IT organizations could apply improvement techniques internally, as CSX did. A large, diversified energy company reduced its non-discretionary IT costs from over 80% of its IT spending to 60% in three years by employing much more rigorous governance of minor projects, eliminating the “IT walk-up” window, and establishing consistent vendor service levels and better deals. The result: it freed up many millions of dollars to support critical process improvement projects.
3. Build an IT infrastructure that makes changes easy and turns over control to line managers. The IT organization can design applications so they can be changed more easily. It can also give users web applications, data, and analytical tools so they can manage processes themselves. Susan Watson, vice president of enterprise process simplification at IBM, is responsible for driving best practices in business process management across this $107 billion business spanning 170 countries. “In my current job as process transformation leader, I’m focused on how IT technologies can enable process changes,” she told me recently. For example, IBM today uses software that lets business users make pricing changes in proposals in hours or days instead of the weeks it used to take when changes had to be done as an IT project. “We now have a set of pricing business rules, and business users have control to quickly change prices,” Watson said. The system was rolled out to 32 countries in a few months. “New technologies are enabling us to give the business back to the business. That is key to our ability to transform.”
What ways have you seen IT groups become more responsive to business process changes?
Brad,
I’ve actually seen IT significantly improve organizational performance in both ways: what I’ll call 1) process first, then IT to emulate the new process, and 2) IT as the “big stick.”
I remember doing a benchmarking project 14 years ago to examine how organizations were implementing ERP systems. We talked to one company (I can’t remember the name) that said it outright: “We didn’t do any process work before the implementation. We knew our processes and data definitions were horrible. We implemented the ‘best practices on the disk.’” They drove it from the top, forced every division/unit to adhere to the way the software required them to work, and handed out the definitions needed to ensure there was integrity in the data. Then they looked for outliers (and probably hit them in the head with a hammer).
I’ve heard several examples of it working the other way. Campbell’s Soup comes to mind right now. They went through a significant process documentation and data definition project with the main “units,” set the processes and definitions, and then implemented the system. All the other units not in that core group were forced to live by the rules set by the core group, but that core represented 80%+ of the total users of the system.
It can work either way; both approaches can be hampered by delays from too much analysis and worrying over the weeds (and those exec drive-by issues). Both require strong leadership from the top, a clear plan, and the guts to stick to the plan no matter what.
Ron: I wonder whether an imposed software package, where everyone follows the process rules as defined by the software, works in the long term. My experience is that people don’t own things that are imposed on them, so in the long run adherence will tend to degrade and processes will revert to old ways, if they aren’t outright sabotaged.