Evgeny Morozov discusses cybersecurity and how software meant to help Iranian political dissidents put them at risk
(Corrects the first paragraph to refer to Moldova, not Ukraine.)
Evgeny Morozov is a skeptic in a world of Internet believers. His forthcoming book, The Net Delusion: The Dark Side of Internet Freedom, delves into how governments are using technology to silence dissenting voices and shape public opinion. In his blog, Net Effect, Morozov, currently a visiting scholar at Stanford University, has dissected Moldova's Twitter revolution and chronicled Google's (GOOG) contretemps in China.

In a series of posts starting in September, Morozov exposed gaping flaws in an anticensorship program called Haystack. The software purportedly let Iranian political dissidents access banned websites without detection. Haystack, the brainchild of 26-year-old Silicon Valley entrepreneur Austin Heap, received glowing media coverage and was fast-tracked for a government license to bypass U.S. sanctions on Iran. But when Morozov and software expert Jake Appelbaum revealed technical and security gaps in the software that could put its users at risk, Heap withdrew the product. His fledgling company's lead software developer resigned, admitting that Haystack amounted to a case of "hype trumping security." Morozov discussed the Haystack controversy, as well as his views on cybersecurity, with Bloomberg Businessweek's Caroline Winter.

What was wrong with Haystack's software?

Haystack simply didn't do what it claimed. Many of the Iranians who were testing it in Iran did not succeed in opening those banned websites. On top of that, it contained one particular security flaw in its design, which could have allowed the Iranian government to track down anyone who had ever tested Haystack in Iran. The people who discovered the flaw won't say exactly how it works, because that would tip off the Iranian government. But Haystack's founders were told about it, and that is why they decided to withdraw the software.

Are there other products that purport to help people fight censorship but do more harm than good?
It all depends on how you define harm and good …. None of these technologies are … perfect. You can trace some of them from the websites being accessed and some from the IP addresses being accessed. The problem with Haystack was that its creators aggressively advertised it not only as secure but also as software to be used specifically by Iranian dissidents, which of course raised the stakes. It actually gave the Iranian government additional incentive to start tracking down users.

Most of these technologies you can use for anything. They're just there to access banned resources. Pornography is banned in many of these countries, so people use the tools to access pornography, which doesn't bother the government too much. When you position your [software] as a tool to be used solely by dissidents to overthrow the government, of course it's a completely different level of risk you are pushing onto your users. And if you don't provide safe architecture and design to go along with that, you're probably putting people at too much risk.

What, if anything, should idealistic and digitally skilled individuals do to help in the fight against censorship?

First, it would help to get acquainted with the actors in the [business]. It's not as if no one had this idea 20 years ago about bypassing censorship and opening up resources for the Chinese and the Iranians to the outside world. Plenty of NGOs have been working in this [area] for decades, and many of them actually have the regional experience. You have competing projects such as Tor, which is a tool that does a lot of what Haystack does, but much better. They've been around for probably a good decade.
The problem in Haystack's case was that those guys started without any understanding of how the Iranian police and security people work, what they look for, and how they go about identifying dissidents. [Haystack] never even had what security experts call a threat model. So they never conceptualized or thought through the risks their users were likely to face when using Haystack.

Should democratic governments have a role in monitoring and vetting this type of software for export?

It's not a question of monitoring software. I don't think the government should be vetting any of that. I don't think it's appropriate …. It's the murkiness of the sanctions that causes the problem.

Yet it seems like the government played a role in making Haystack appear more secure than it was.

It wasn't because they didn't do a proper job reviewing how Haystack works. They never intended to review how Haystack works. What the Haystack people did is, they said, 'Hey, we got a license from the U.S. government; that means we have the safest tool out there,' which was never the intended meaning. But since the sanctions are so complex, no one could actually dispute that.

How do the groups that create anticensorship software make money?

None of them are making money. They are all eating money [public funding] coming from the government. Some of them actually make some money by aggregating and selling user data. They don't disclose everything, but they sell trend data. If you look at some of the Chinese groups, like the Global Internet Freedom Consortium, they do, and they don't hide it. You can probably read it in their terms of service. They will sell the data to marketers to show which websites are accessed, and how, and so on. You won't be able to track individual users … But even the Chinese tools still get funding from the U.S. State Dept. and various foundations.

What are the important steps to take to test a tool like this for use in countries where safety is an issue?
Make sure that someone knowledgeable actually looks at your design, and make sure you understand what it is you are trying to deliver, whether that is safety or access to websites. Then look at how well your own disclaimers reflect reality. In Haystack's case, they actually said [the software] was perfectly secure and no one could break into it. That gave users a false sense of safety, which of course wasn't there.