A Year Later, Sandy Still Has Lessons for IT
Photograph by Scott Eells/Bloomberg
Nearly a year ago, Superstorm Sandy slammed into the New York metro area, destroying thousands of homes and wreaking havoc with the IT infrastructure in southern Manhattan and low-lying areas of New York and New Jersey. It was the costliest and most destructive storm of the 2012 hurricane season and by some accounts the second-most or third-most expensive in U.S. history.
So what have we learned when it comes to safeguarding IT infrastructure?
Quite a bit—at least according to data-center pros who weathered the storm and have some ideas for mitigating damage from similar events in the future, which, given what we know about rising sea levels, are bound to happen again.
Big data-center providers have contingency plans to cover acts of nature or man, including provisioning food, water, and cots for on-site personnel. But until Sandy, most of these plans assumed only a few days of staffing. That changed with Sandy, says Raouf Abdel, Equinix's (EQIX) regional operating chief for the Americas.
“We will prepare for longer intervals. Before, we’d stock up for a day or two, maybe three, but Sandy showed us we need to go longer,” Abdel says. “We found we needed better sleeping quarters, more water, more food, more coffee.”
Before Sandy, nobody seemed to imagine that highways, tunnels, and subways could be out for days on end. Now there have to be plans in place for how personnel can get to the affected area and for how other personnel can work remotely as effectively as possible.
Most data-center providers know they need to build new facilities on high ground—Equinix sites in Secaucus and North Bergen, N.J., are above the 100-year flood plain, for example. But when it comes to weather-proofing, more is more.
Providers using third-party facilities need to keep a sharper eye on the physical security and stability of those sites. Ensuring that there are berms or other physical barriers to low-lying doors is a no-brainer. Equinix has added waterproof steel doors to prepare for “several more feet of water,” Abdel says.
It’s natural to focus on basements when you think of flooding, but a flat roof can be almost as big an issue. Installation of rooftop gear and piping often leaves perforations that can lead to problems in high winds and rain. So when you assess your physical plant, look up as well as down, says Mark Thiele, executive vice president of data centers at Switch, an Enterprise (Nev.) data-center provider with a SuperNAP in Las Vegas.
Make sure there are updated blueprints and other building documentation on site and available to emergency personnel. In an emergency, they will need them.
Even if you have plenty of fuel for backup generators, it won’t help if the generators themselves or the pumps to supply them get flooded in a basement. If this gear must stay on lower floors, make sure it’s fully encapsulated and waterproofed, says Michael Levy, an analyst at 451 Research.
Another lesson from Sandy: service providers should keep roll-up generators as well as fuel hoses on site and easily accessible, says Ryan Murphey, vice president for operations at PEER 1 Hosting. Oh, and make sure those hoses fit both the generators and the fuel trucks.
And not to belabor the obvious, make sure plenty of batteries, flashlights, and headlamps are stocked, operable, and accessible.
While most data-center pros keep an eye on big storms, Sungard Availability Services is making crowdsourced weather-watching a part of its standard operating procedure.
As Sandy developed, Sungard staff watched various weather sites and found the amount and quality of information to be invaluable. For example, the National Oceanic and Atmospheric Administration sites that simulate storm conditions are “unbelievable,” says Nick Magliato, Sungard AS’s chief operations officer. “If you map NOAA data to tide charts, you can see how a storm surge might manifest, say, down the Hackensack River,” he says.
A combo of NOAA, Google (GOOG) Earth, and government topographical maps can show a lot about what the storm surge of a 100-year storm will look like in advance, he says. Instead of managing the generic risk posed by a hurricane hitting the Eastern Seaboard, you can simulate what a four-foot tidal surge would mean to southern Manhattan and low-lying New Jersey. “And we don’t have to hire an environmental engineering firm to do it,” Magliato says.
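The back-of-the-envelope exercise Magliato describes can be reduced to a simple comparison: a site is exposed when its ground elevation falls below surge height plus high tide. Here is a minimal, hypothetical sketch of that logic—the site names, elevations, and water levels below are invented for illustration, not drawn from NOAA data or from any real facility:

```python
# Hypothetical sketch of the surge-vs-elevation check described above.
# All site names, elevations, and water levels are invented examples.

SURGE_FT = 4.0      # assumed storm-surge height, in feet
HIGH_TIDE_FT = 2.5  # assumed high tide above mean sea level, in feet

# (site name, ground elevation in feet above mean sea level)
sites = [
    ("lower-manhattan-colo", 5.0),
    ("secaucus-dc", 12.0),
    ("hackensack-river-site", 3.0),
]

def at_risk(elevation_ft, surge_ft=SURGE_FT, tide_ft=HIGH_TIDE_FT):
    """A site is flagged when its elevation is below surge plus high tide."""
    return elevation_ft < surge_ft + tide_ft

for name, elevation in sites:
    status = "AT RISK" if at_risk(elevation) else "ok"
    print(f"{name}: {status}")
```

Real assessments would substitute NOAA surge simulations and published elevation data for the hard-coded numbers, but the core arithmetic is no more complicated than this.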
Obviously, in looking for new facilities, it behooves you to make sure they’re on high ground, away from the coastal areas affected by Sandy and, before that, Hurricane Irene in August 2011. Two “100-year storms” in two years prompted lots of conversations about the wisdom of keeping data-center facilities in surge-prone areas.
Omaha’s Scott Data Center has fielded lots of inquiries from New York metro companies in the past few years, says the company’s president, Kenneth Moreano. Major League Baseball, for example, has put its second “hub” at Scott Data Center. MLB’s primary hub—which serves much of its streaming media—remains in New York, but MLB is replicating that capability in Omaha. “Originally, Omaha was going to be their disaster recovery or backup location, but now we will be their first hub outside of New York City,” Moreano says.
At least “one data point” in most of these discussions with New York-area companies is talk of storm-related outages, those infamous “bucket brigades of fuel,” he says. While financial firms that need extremely low-latency connections to trading floors will keep assets close to Manhattan, there’s no compelling reason for the bulk of data-center infrastructure to be in the immediate area given the risks—and given New York’s extremely high real estate prices.
Just some things to think about for next time. Gulp.