Online data privacy has been in the spotlight for a variety of reasons over the past year, from Facebook's privacy settings to government subpoenas for WikiLeaks data. Before Congress, regulators, and courts can give the issue legal clarity, they will need to answer some fundamental questions about which areas of law even apply.
A panel on data privacy earlier this week at the Consumer Electronics Show laid out the broad issues that need to be determined before any meaningful attempts at institutional reform can get underway. Central among them is the question of whether online privacy is a matter of personal property or of human rights.
Rep. Marsha Blackburn (R-Tenn.) kicked off the CES discussion, explaining that Congress is looking at regulating online consumer privacy, but that it first needs to figure out what exactly is meant by data privacy, what precisely it wants to regulate, and how to balance protection for consumers with protection for emerging commerce. Determining the latter two should be relatively easy—those are the questions inherent in any lawmaking process—but answering the first question could be a struggle.
The crux of the issue is whether an online persona is an extension of a human being—as Marc Davis, a partner architect in Microsoft's online services division, believes—or a mere collection of bits that can be bartered away for access to free e-mail or a social network. Davis sees the issue of data privacy as nothing less than defining what it means to be a person in a digital world.
Beyond the issues of storing and mining data, there are questions about who or what entities have the right to publish readily available public data about individuals and what it means to have digital identities that individuals might not even have created—and which will live on after they die.
Fred Carter, senior adviser to Ontario's Office of the Information and Privacy Commissioner, boiled it down to how we characterize personal data. Whereas the U.S. government and U.S. citizens tend to view data as a property issue (i.e., we own our data and we'll do with it what we please), the rest of the world views it as a human rights issue (i.e., there are defined limits to what Web companies can and cannot do with people's data). That's a big distinction: Although we can contract away property rights, basic human rights are not legally negotiable.
Consumers frequently get burned by this distinction. As Electronic Frontier Foundation Senior Staff Attorney Marcia Hofmann pointed out, website terms of service are non-negotiable, putting consumers in a weak position: if they want to use a service, they agree to the terms, period. As long as Americans and the law treat personal data as property, sites such as Facebook can essentially grant themselves whatever rights they please to our data the moment we sign up. Companies like Google (GOOG) can collect whatever they want as we pass through their expansive web presence.
It would seem that the ideal solution is to find a middle ground: a way to preserve the freedom that comes with having property rights in data while placing limits on how we can convey those rights. U.S. citizens, keen on individual rights, will likely want to maintain the idea that personal information is their property. But they need a voice in the negotiation over how it's used beyond the binary choice of whether or not to use a Web service.
The Federal Trade Commission has ideas about how to regulate data collection online, and the Commerce Dept. has suggested an online Bill of Rights of sorts. But Congress clearly has some deep thinking to do about what data privacy really means before it can think about regulating it.