
All Volatility, All the Time


It's Alive: The Coming Convergence of Information, Biology, and Business

by Christopher Meyer and Stan Davis

Chapter 1: Learning from Life Cycles

(Part 2 of a three-part excerpt)

Permanent Volatility

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the twenty-first century -- it will be more like 20,000 years of progress (at today's rate).

-- Ray Kurzweil in The Law of Accelerating Returns, March 7, 2001

[I]t is time to hail the new age of volatility.

-- "Learning to Swing," The Economist, August 8, 2002

Place an order online, and your confirmation appears in your inbox before you sign off. Try to have your Walkman repaired, and you find that it's been replaced by a newer model. And today's customers expect that any feature they've seen anywhere will be available everywhere, instantly. Formulate a business strategy or a new product-development cycle, and your plans are superseded by events before you can implement them. The time between internal management changes and external responses grows ever shorter, and the degree of unexpected disruption ever greater.

As we said earlier, it's not just your perception -- the rate of change is genuinely accelerating, the world is genuinely less predictable, and the swings in demand, mood, and prevailing wisdom are genuinely more volatile. And it's not just recession, the dot-com bubble, the aftershocks of 9/11, or the spate of corporate scandals. Change has become more rapid and volatility permanent. If you doubt it, consider the following indicators:

Accelerated Change

The percentage of Fortune 300 CEOs with six years' tenure in that role decreased from 57 percent in 1980 to 38 percent in 2001.

In 1991, the number of new household, health, beauty, food, and beverage products totaled 15,400. In 2001, that number had more than doubled to a record 32,025.

From 1972 to 1987, the U.S. government deleted 50 industries from its Standard Industrial Classification. From 1987 to 1997, it deleted 500. At the same time, the government added or redefined 200 industries from 1972 to 1987, and almost 1,000 from 1987 to 1997.

In 1978, about 10,000 firms were failing annually, and this number had been stable since 1950. By 1986, 60,000 firms were failing annually, and by 1998 that number had risen to roughly 73,000.

Increased Volatility

From 1950 to 2000, variability in S&P 500 stock prices increased more than tenfold. Through the 1950s, 1960s, and 1970s, days on which the market fluctuated by three percent or more were rare -- they occurred less than twice a year. Over the past two years, they have occurred almost twice a month (Figure 1-3).
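To make the statistic concrete, here is a minimal sketch of how such a count is computed. This is our illustration, not the authors' methodology, and the prices in `closes` are invented stand-ins, not actual S&P 500 data:

```python
# A minimal sketch (ours, not the authors'): count the trading days
# whose close-to-close move is three percent or more.

def big_swing_days(closes, threshold=0.03):
    returns = [(b - a) / a for a, b in zip(closes, closes[1:])]
    return sum(1 for r in returns if abs(r) >= threshold)

closes = [100.0, 103.5, 101.0, 97.8, 98.2]  # illustrative prices only
print(big_swing_days(closes))               # -> 2: one +3.5% day, one -3.2% day
```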

The number of firms that take "special items" in their accounting has grown dramatically: S&P 500 firms declaring special losses rose from 68 in 1982 to 233 in 2000. Special items are, by definition, an admission of being caught flat-footed by change more volatile than the normal course of the business cycle.

We need to stress that our argument here contains two distinct points. The first is that change has accelerated: whatever trend you look at will be proceeding more rapidly. Volatility, by contrast, is the degree of variability around a given trend. Our second point is that volatile events are of greater magnitude and occur more frequently. The two reinforce each other, but they're not the same thing.
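A toy example makes the distinction visible (this is our illustration, not the book's, and every number is invented): a series can change faster without being more volatile, or swing more widely around an unchanged trend.

```python
# A toy illustration of the two distinct points: a steeper trend is
# faster *change*; wider deviation around the trend is higher *volatility*.
import random

random.seed(1)

def series(trend, vol, n=6):
    """Value at time t = trend * t, plus random noise of size `vol`."""
    return [round(trend * t + random.gauss(0, vol), 1) for t in range(n)]

print("faster change:    ", series(trend=4, vol=1))
print("higher volatility:", series(trend=1, vol=4))
```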

Connectivity and the Change in Change

What could cause a permanent increase in volatility and the rate of change? While no single answer provides the whole explanation, one clear cause is connectivity. Without belaboring the well-known point, connectivity has transformed our world:

In the six years starting in 1996, the percentage of the U.S. population online grew from 14 percent to almost 52 percent.

The maximum speed of connection in 1940 was about 1,000 bits per second; by 2000, it had reached 10 trillion bps.

The number of Internet hosts -- important as a measure of the information a person can connect to -- rose from several hundred in 1981 to about 100 million in 2001, while the cost of ISP service fell by a factor of 10 million.

The cost of a three-minute phone call between New York and London fell from $300 in 1930, to $60 in 1960, to about $1 today (in constant dollars).

In 2002, the number of mobile phones worldwide reached one billion.

These leaps in the mobility of information make it possible to disseminate new ideas more quickly and cheaply than ever before. When information is codified and information technology modularized, upgrades, add-ons, plug-ins, and innovation can all happen quickly. The ease of adopting (or copying) software drives the pace of change, as does the ease of global communication, which enables rapid learning and transfer of know-how.


Every jump in connectivity -- from clipper ships, to railroads, to telegraphs, to mobile phones, to BlackBerries -- has shrunk the globe in space, in time, and in the effort required to support interactions among people, companies, and ideas. (Okay, maybe not the phones with cameras, but be patient.) Every jump has contributed to shrinkage in cycle time, as well as an increase in the rate at which ideas spread.

Connectivity between ideas creates the next new product, and connectivity of companies creates the next merger and change in industry structure. Connectivity between buyers, sellers, supply chains, and financial institutions shortens both the marketing cycle ("awareness, interest, purchase" is as fast as "see the ad, go to the website, research it on the Web, and order online"), and the order-to-cash cycle.

Connectivity is clearly a root cause of the acceleration of change, but in a more subtle way, connectivity must also be held accountable for increased volatility. Greater connectivity in information systems increases both the speed of communications and the permeability of boundaries that were once much more difficult to breach. A signal created in any market, society, or system can therefore propagate faster and travel farther than ever before: the climate in Brazil affects the price of coffee on the shelves more quickly, and in more parts of the world. And though in a given network we can take steps to reduce the swings, we can never know when some newly made connection will create an unanticipated instability.

When networks become intensely connected, they start to become "nonlinear": small changes can lead to disproportionately large effects. In short, they make our world more volatile. The huge power blackout that struck the northeastern United States on November 9, 1965, was caused by a single circuit breaker in Ontario, Canada, that was functioning normally. It did its job, which was to shut down power on a segment of the network. As expected, this caused a power surge that propagated to the parts of the system connected to it. What was not understood at the time was that the configuration of that network would amplify that surge, eventually leaving 30 million people in eight states and Canada in the dark. Today, it might be possible to simulate that network sufficiently well to have found this glitch, but the principle remains: The more connected any system becomes, the harder it is to anticipate all such risks.
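That amplification is easy to reproduce in miniature. The sketch below is our own toy model, not a simulation of the 1965 grid: every node runs near its capacity, and a tripped node sheds its load onto its live neighbors, which may overload and trip in turn.

```python
# A toy cascade model (ours, not the authors'): nodes carry load up to
# a fixed capacity; when one trips, its load is shared among its
# still-live neighbors, which can push them past their own limits.

def cascade(neighbors, load, capacity, first_trip):
    """Return the set of nodes that end up tripped."""
    tripped = {first_trip}
    frontier = [first_trip]
    while frontier:
        node = frontier.pop()
        live = [n for n in neighbors[node] if n not in tripped]
        if not live:
            continue
        share = load[node] / len(live)  # shed this node's load onto live neighbors
        for n in live:
            load[n] += share
            if load[n] > capacity[n]:   # neighbor overloads and trips in turn
                tripped.add(n)
                frontier.append(n)
    return tripped

# A ring of six stations, each running at 90 percent of capacity:
# tripping any one node sweeps the failure around the whole ring.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
load = {i: 0.9 for i in range(6)}
capacity = {i: 1.0 for i in range(6)}
print(sorted(cascade(neighbors, load, capacity, first_trip=0)))  # -> all six
```

Each component behaves correctly; the blackout emerges from how they are wired together.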

As more software functions autonomously, the risks escalate. Software viruses, in particular, represent the insidious side of autonomy. On November 2, 1988, at a time when the Internet was confined largely to universities and research labs, Robert Morris Jr. released into the Net a piece of code that could propagate itself from one computer to the next and reproduce with such enthusiasm that no capacity was left for the user. Six thousand computers were affected -- then a large fraction of the Internet population -- at a cost estimated at $10 million. Twelve years later, the I Love You virus cost an estimated $10 billion.

Another example of connectivity and autonomous software driving volatility comes from the financial markets. On October 19, 1987, the New York Stock Exchange lost 23 percent of its value in a single day, trading 600 million shares, nearly double the previous record volume. This cataclysm was the result not of an act of terror or even bad economic news but, rather, of the connection, in a logical sense, of a set of trading instructions that had been programmed into the accounts of institutions and individuals.
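The mechanics are easy to caricature. The sketch below is our illustration, not the actual portfolio-insurance programs of 1987: each account has a price at which it is programmed to sell, and every forced sale depresses the price enough to trip the next account's trigger.

```python
# A toy feedback loop (ours, not the book's): accounts sell automatically
# once the price falls to their trigger, and each forced sale knocks the
# price down further, tripping the next wave of programmed selling.

def run_day(price, triggers, impact=0.01):
    """triggers: prices at which accounts dump their holdings.
    impact: fractional price drop caused by each forced sale."""
    sold = [False] * len(triggers)
    changed = True
    while changed:
        changed = False
        for i, trigger in enumerate(triggers):
            if not sold[i] and price <= trigger:
                sold[i] = True
                price *= 1 - impact  # this sale depresses the price
                changed = True
    return price, sum(sold)

# Bad news knocks the price from 100 to 98; sell triggers clustered
# just below then cascade, amplifying a small dip into a rout.
triggers = [98.0 - 0.5 * i for i in range(40)]
final_price, sellers = run_day(98.0, triggers)
print(f"close: {final_price:.1f}, accounts that sold: {sellers}")
```

In this toy market, a two-point dip at the open ends as a 33 percent rout: no single instruction is unreasonable, but connected together they feed on themselves.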

Black Monday was the wake-up call, and it led the securities exchanges to take steps to put on the brakes when such volatility starts to occur. The trend toward volatility has continued, and it has been incorporated into investor expectations (Figure 1-4). "Mr. Market's mood swings have become more violent," The Economist concludes. "[I]t is not just price gyrations that have increased, but the volatility of volatility itself."

The Adaptive Imperative

With autonomous software and a high degree of connectivity giving rise to big, unexpected swings and nonlinear effects, volatility will continue to surprise us, though seldom in the same way twice. While change and volatility are hard to separate when they are happening, we'll use both terms to refer to the core point: The increased rate of change in the economy poses the "adaptive imperative." To survive, business must learn to adapt as fast as the business environment changes. According to futurist Paul Saffo, "Business as usual has become business as unusual: unpredictable, unplannable, and above all, unmanageable.... the stately equilibrium of Keynes has yielded with a vengeance to the unnerving creative destruction of Schumpeter."


At some level, volatility has always been a part of the human condition, but our worldview and our business models belied that. We tried to forecast, always looking for the perfect plan. Then we acknowledged that there was no single best way, only probabilities and the art of decision-making under uncertainty.

Now we need to change our framework again, from one in which even these uncertain decisions are permanent, to one in which the costs and implications of continuing change are integral. This will take us from physics to biology, from engineering to evolution, from the top-down to the bottom-up, and from narrow efficiency to adaptability.

There are already examples of companies trying to act on the adaptive imperative. Businesses spent the twentieth century squeezing the fat out of industrial production, tuning processes to accomplish fixed tasks ever better, faster, and cheaper. To achieve this, however, they standardized their repertoires and put little weight on flexibility. Business became brittle. When MCI began taking customers from AT&T in droves with its "Friends and Family" marketing program, AT&T was unable to respond because its billing system wasn't built for such an offer. Worse, it wasn't built to be changed at all. Companies trying to create new strategies and capabilities are continually thwarted by the limitations of their systems. What Churchill observed about architecture is even truer of business processes: "First we shape our buildings; then they shape us."

As the costs of change become the regular costs of doing business, people's roles in organizations are shifting from doing work to managing the evolution of their companies' capacity, whether by creating new software or new relationships. There's nothing wrong with this -- it means more interesting, less repetitive jobs. But it's time to acknowledge that change is not the exception, and that the costs of change are not a small part of total costs.

Quite the contrary: The costs of labor and materials that we worked so hard to minimize in the past have become much less significant, while the fixed costs of the infrastructure that supports the business have become dominant. This shift has been going on for decades as we moved toward a service economy. The major cost of running the airline, the car-rental company, the franchise chain, and even the automobile company is in the management system that supports it.

Every time something changes in a business environment -- a new technology, a new market expectation, or a new competitor -- there's an opportunity to make a change in the business. As AT&T found out, however, if the fixed costs are supporting an equally fixed infrastructure, change doesn't happen.

Compare this with the speed of change at Amazon.com, which seems to introduce a new interface, program, or feature every week. Amazon, bred in the fast-changing environment of the Net, was built to respond rapidly to environmental volatility. AT&T, reared in an environment of regulatory oversight and forty-year depreciation schedules, was not. AT&T adapted beautifully to the environment created by the Communications Act of 1934. Every enterprise either adapts to its environment or dies.

As the environment changes more rapidly, the costs of adapting become an ever-larger part of the total. The costs of never-ending product development as at Netscape or Microsoft, of parallel development teams as at Intel, and of "special" projects at every business are a mounting proportion of the costs of doing business.


In July 2002, for example, IBM opened a $2.5 billion chip factory in East Fishkill, New York, the company's largest capital expenditure ever. This flies in the face of the current trend of relying on the assets of others. Why didn't IBM just buy chips from a fabricator in Asia? "To play to win in technology, you innovate and lead," IBM CEO Samuel J. Palmisano told the New York Times. "What we call the lab-to-fab time should be as close to zero as possible," according to John Kelly, senior vice president in charge of IBM's technology group. The closer the fabrication cycle time gets to zero, the less disruptive the market's unpredictability becomes. This doesn't mean that volatility is made irrelevant. Quite the contrary: Market change is so relevant that it becomes the natural environment, the water to the fish. Kelly continued, "The core of our strategy is to lead in technology ... if our strategy were anything but to be on the leading edge, we'd have put the plant in Asia."

IBM is spending extra money on the plant itself, and thus raising the unit cost of each chip it will produce, in order to have a better chance of being faster to market. Given a strategy of technological leadership, as well as the volatility of the chip business, the time-to-market benefit of being close to the company's labs in Westchester County is worth paying for.

There's a second level to this story, and it's about flexible, adaptive manufacturing. If IBM has miscalculated the demand, it will suffer badly. High operating costs and depreciation on a huge capital investment will drag down earnings. But industry analysts say that the plant is likely to be insulated from a fall-off in one or a few segments of the semiconductor market. It is highly automated and designed to shift flexibly to produce many different kinds of chips to suit demand. "The diversity is the big difference with this plant," said Richard Doherty, director of The Envisioneering Group, a technology-assessment and research company.

IBM has devised a solution to the impossibility of forecasting demand. The new approach is to stop guessing about the future, and to build so as to adapt to it by creating a diverse set of capabilities. The intent is to deal with a volatile market, protect IBM from flux in demand, and build an adaptive factory, one that can manufacture a diverse portfolio of chips for everything from mainframes to cell phones to video game consoles. The previous generation of manufacturing stressed the "focused factory," designed to minimize unit cost by doing just one thing superbly. Presumably forever.

CEMEX, the world's third-largest cement company, faced a different adaptive imperative: intractable volatility. Fresh cement has a shelf life even shorter than that of fresh fish. Once the mixture is turning in the truck, the driver has only a couple of hours to deliver the load. Now imagine making an appointment to deliver cement to a construction site in Mexico City. The job may be behind schedule; traffic tie-ups may intervene; workers may not be available to receive the shipment.

In response to the risks of spoilage, cement makers in Mexico once charged their customers high fees to reserve a time for delivery, and even higher penalties if they were unable to take delivery as scheduled. The relationship between suppliers and customers was adversarial, costs were high, and service was poor.

CEMEX developed an adaptive solution: Treat the cement trucks like taxicabs. Station them in appropriate areas around the city, and have them respond to customers when summoned. Customers don't have to forecast, CEMEX doesn't have to commit extra resources, and the scheduling and late fees go away. CEMEX learned not to fight the volatility but, rather, to adapt to it. As a result, the company's guaranteed on-time delivery window has gone from the market-standard three hours to just twenty minutes, and it delivers loads within that window 98 percent of the time.
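At its core, the taxicab model is a dispatch rule: when a site calls, send the nearest free truck. Here is a minimal sketch of that rule, assuming straight-line distances and invented coordinates -- our illustration, not CEMEX's actual system:

```python
# A minimal sketch of the "taxicab" dispatch idea: loaded trucks
# circulate, and whichever free truck is nearest answers the call.
# Straight-line distance and these coordinates are our assumptions.
import math

def nearest_truck(trucks, site):
    """trucks: {truck_id: (x, y)} for free trucks; site: (x, y)."""
    return min(trucks, key=lambda t: math.dist(trucks[t], site))

# Three trucks cruising different districts; a job comes in downtown.
trucks = {"T1": (0.0, 0.0), "T2": (5.0, 2.0), "T3": (1.5, 1.0)}
print(nearest_truck(trucks, site=(1.0, 1.0)))  # -> T3
```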

IBM's new plant design and CEMEX's cruising cement trucks are two examples of what we call adaptive management. Information technologies (intelligent machines in IBM's case, radios in CEMEX's) support many such solutions throughout industry. Many business thinkers have noted this trend, including us in our 1998 book Blur. Here's the new wrinkle: As volatility and the cost of managing it become the new imperative, we need more than point solutions. We need a set of principles that support a comprehensive adaptive approach to management. We'll be developing this idea in Chapter 5, and analyzing Adaptive Enterprises in all of Part III. Yet before we're ready for that, we need to understand that the adaptive imperative is only one of two economic changes of the next ten years. Let's look at the second one.

Part 3 of this excerpt discusses how researchers' deepening grasp of life's tiniest elements is shifting the economy's makeup.

From the book: IT'S ALIVE: The Coming Convergence of Information, Biology, and Business by Christopher Meyer & Stan Davis. Copyright (c) 2003 Cap Gemini Ernst & Young U.S., LLC. Published by Crown Business, a division of Random House, Inc.

Christopher Meyer is director of the Center for Business Innovation in Cambridge, Massachusetts. He is also a founder of Bios Group, Inc., a Santa Fe-based venture that develops applications of complexity theory for business. With more than twenty years' experience in general management and economics consulting, he is an authority on the evolution of the information economy and its impact on business. He was listed among Consulting Magazine's "25 Most Influential Consultants" in 2001, and served on Time's Board of Technologists in 2002. With Stan Davis he was co-author of Future Wealth, published in 2000, and Blur, published in 1998.

Stan Davis is a Senior Research Fellow at Cap Gemini Ernst & Young's Center for Business Innovation. His consulting has included senior executives at Apple, AT&T, Bank of America, Cap Gemini, Ernst & Young, Ford, JPMorgan Chase, KPMG, Marriott, Mercedes-Benz, Met Life and Sun Microsystems. Davis is advisor to the board of the Massachusetts Medical Society, which publishes the New England Journal of Medicine.

Davis has shared his expertise in twelve books. His most recent book is Lessons from the Future. His 1998 Blur was a BusinessWeek bestseller, and his 2020 Vision was named the best management book of 1991 by Fortune.

