Why Is the Government Vulnerable to a Simple Cyber Attack?
Posted by: Stephen Wildstrom on July 09, 2009
A wide-ranging attack on government and corporate Web sites that began last weekend and is continuing seems, at least so far, to be causing more confusion than damage. A denial of service (DoS) attack hit a number of government and business sites in the U.S. and South Korea. Some successfully fended it off; others were crippled to varying degrees for varying periods of time. The attack is designed only to slow or block access to sites, not to penetrate them, so there is no danger to data and the main effect is inconvenience for users.
Contrary to widespread reports that seem to have originated in the South Korean government, little evidence has come to light to suggest that North Korea is behind the attack. That’s not to say the North Koreans don’t have something to do with it, just that the evidence is lacking.
But whoever is behind this, it is disturbing to learn that a number of government agencies are still vulnerable even to a relatively unsophisticated attack, one that most Web-savvy businesses have long since learned to deal with.
The SANS Institute, a Bethesda, Md.-based security research organization, has been tracking the attacks closely and has managed to scrape together some information on what is going on. The attacks, like most bad stuff on the Internet, were launched from botnets, remote-controlled networks of compromised personal computers that can spew spam or jam Web sites. In this case, the bots tried to bring down Web servers by hitting them with a flood of connection requests.
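To see why a flood of connection requests is so effective, here is a minimal sketch (my illustration, not anything from SANS) modeling a server as a queue that can accept only so many requests per tick. Once bot traffic swamps that capacity, legitimate users are crowded out even though no data is ever touched:

```python
# Toy queueing model of a DoS flood (illustrative only).
# A server serves at most `capacity` connection requests per tick;
# the rest are dropped. Bot requests crowd out legitimate ones.
import random

def simulate_tick(legit_requests, bot_requests, capacity, rng):
    """Return how many legitimate requests get through in one tick."""
    pool = ["legit"] * legit_requests + ["bot"] * bot_requests
    rng.shuffle(pool)             # requests arrive interleaved
    served = pool[:capacity]      # server handles the first `capacity` arrivals
    return served.count("legit")

rng = random.Random(42)
capacity = 100

# Normal day: 80 legitimate requests, no bots -- everyone is served.
assert simulate_tick(80, 0, capacity, rng) == 80

# Under attack: the same 80 users are drowned by 10,000 bot requests.
served = simulate_tick(80, 10_000, capacity, rng)
print(f"legitimate requests served under flood: {served} of 80")
```

The numbers are made up, but the mechanism is the one described above: the attack wins by volume, not by cleverness.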
SANS's tracking efforts have found that the command and control computers that run the botnets are located in multiple countries. But since the command and control computers can themselves be remotely controlled, this does not necessarily tell us much about the origin of the attack. The bots themselves are located all over the world, including in the U.S. The attacks have involved only part of the botnet at any one time. This means the server requests are not coming from a static group of computers, making the defense somewhat more difficult.
SANS researchers have been watching the attacks grow more sophisticated since they began nearly a week ago. The standard response to a DoS is to stop the flood of phony connection requests as far upstream as it can be detected. The initial assaults were easy to stop through standard packet filtering techniques, but as time has gone on, the malicious traffic has been made to look more and more legitimate. This makes filtering harder, but not impossible.
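A rough sketch of what upstream filtering looks like, and why legitimate-looking distributed traffic defeats the simple version (this is my illustration of the general per-source-threshold idea, not SANS's or any provider's actual method):

```python
# Sketch of naive per-source rate filtering (illustrative, hypothetical).
# Sources exceeding a request threshold are dropped; distributed bots
# that each stay under the threshold look just like ordinary users.
from collections import Counter

def filter_requests(requests, threshold):
    """Given (source_ip, payload) pairs, drop all traffic from any source
    whose request count exceeds `threshold`; return surviving requests."""
    counts = Counter(src for src, _ in requests)
    blocked = {src for src, n in counts.items() if n > threshold}
    return [(src, p) for src, p in requests if src not in blocked]

# One noisy attacker is easy to filter out...
traffic = [("10.0.0.1", "GET /")] * 500 + [("192.168.1.5", "GET /")] * 3
kept = filter_requests(traffic, threshold=50)
assert all(src == "192.168.1.5" for src, _ in kept)

# ...but 500 bots sending one request each slip right under the threshold.
botnet = [(f"10.0.{i // 256}.{i % 256}", "GET /") for i in range(500)]
kept = filter_requests(botnet + [("192.168.1.5", "GET /")], threshold=50)
print(f"{len(kept)} of 501 requests pass the filter")
```

This is why the rotating, partial use of the botnet matters: when each bot's traffic stays low and well-formed, the filter has nothing obvious to key on.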
Probably the most troubling thing learned so far is how poorly prepared the government continues to be and how weak its defenses are against a common form of attack. Says SANS research director Alan Paller: "The most important lesson learned: too many Federal agency security people did not know which network service provider connected their web sites to the Internet so they could not get the network service provider to filter traffic. As a result Homeland Security Dept.'s US-CERT will (probably) establish a (non-public) registry for federal web sites where they maintain up-to-date information about which providers are responsible for the content (because of SQL Injection errors that let federal sites infect visitors) and the network access so they can act much more quickly to help agencies under attack."