The Challenge of Policing Facebook
Are our institutions up to the challenge of protecting users from information-age problems? This is the high-level question emerging from the Facebook-Cambridge Analytica debate. While Facebook and similarly situated companies will pay some regulatory price, our public institutions are also in the crosshairs. In the U.S., the much-praised and admired Federal Trade Commission (“FTC”) approach is suffering a crisis of legitimacy. Facebook’s European regulator, the Irish data protection commissioner, is losing both control over its supervision of American companies and the respect of its regulatory colleagues. In a recent press release, the Article 29 Working Party announced that it was creating a working group focused on social media, never once mentioning the Irish regulator.
In this essay I explain the challenges the FTC faces in enforcing its 2012 consent agreement against Facebook and suggest ways it could nonetheless prevail. In the long run, everyone, including the company itself, wins if our civil society institutions can police Facebook. While Facebook’s privacy problems were long dismissed as harmless advertising-related controversies, we all now understand Facebook’s power over our broader information environment. After Brexit, the 2016 U.S. election, and violence in Myanmar, if consumer law fails, we risk turning to more heavy-handed regulatory tools, including cyber sovereignty approaches, with attendant consequences for civil society and internet freedom.
FTC and Facebook
Facebook is already under an FTC consent agreement with broad restrictions to fence in aggressive data use. The 2012 agreement was a major victory for the FTC because it firmly established that the information-age bait and switch, the privacy-eroding settings change pushed onto users, is not acceptable without opt-in consent. The FTC understood the app problem and crafted the first and fourth counts of its complaint around Facebook’s data transfers to developers.
When Facebook settled the case, a wide range of critics pointed out weaknesses in the agreement. Republican-appointed Commissioner Rosch objected that Facebook denied all the facts in the case, and that it was not clear to him that the decree covered the information transfer practices of apps. But the FTC, led by its privacy and consumer protection teams, dismissed these arguments. It promised close supervision and a willingness to levy penalties. The Commission said it was clear that Facebook would be on the hook for application conduct that contravened Facebook’s representations. A Twitter chat led by a senior FTC privacy lawyer, along with responses to public commenters concerned about apps, assured the public that app misconduct was the kind of foreseeable risk that Facebook had to address in its privacy program.
But these statements do not have any force of law. Instead, the FTC’s ability to fine is tethered to the settlement agreement, particularly to the specific provisions that Facebook promised not to violate. Enforcement is handled not by the privacy team but by a different division with expertise in interpreting agreements. Turning to that agreement, advocates of civil penalties against Facebook point to three theories, none of which is easy to bring.
First, part one of the agreement broadly bans Facebook from misrepresenting how it distributes personal data. Here, the FTC will bear the burden of showing that Facebook’s opt-out settings governing transfers of friends’ data to apps were misleading. If the information were particularly sensitive, such as location data, it would be an easy case, but here only basic profile information was subject to opt-out. American law subjects many forms of information to opt-out rather than opt-in standards; even financial information, for instance, can be transferred on an opt-out basis. Why then would it be misleading for Facebook to hold basic profile information to the same standard?
Second, part two of the agreement seems to impose a broad, “affirmative prior consent” restriction on transferring user data to third parties. Might Facebook have violated this consent standard in transferring data to developers? Probably not, because Facebook negotiated a special loophole to reserve the ability to create a “retweet” function. The loophole immunizes Facebook when users forward information about other users. Until 2014, Facebook’s default settings allowed such transfers to apps, and so the plain wording of the loophole seems to swallow the affirmative consent requirement.
A third avenue for enforcement comes from Facebook’s promise to create a comprehensive privacy program and mitigate risks. Here, some argue that Facebook’s failure to audit developers was an unreasonable oversight. But it is common practice to entrust other commercial parties with confidential information, subject to contractual promises that the data will be deleted. It will be difficult to convince a court that “reasonable” mitigation required Facebook to impose expensive, time-consuming audits on thousands of developers.
If one of these theories works, imposing civil penalties is still tricky because of how the FTC “counts” violations. The failure to audit might be counted as a single wrong, triggering a five-figure fine. If, however, the FTC has the will to argue that many millions of people were misled by Facebook’s settings, the fine could reach ten figures.
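To make the counting arithmetic concrete, consider a back-of-the-envelope sketch. Both inputs are my own illustrative assumptions rather than the Commission’s figures: roughly $16,000 was the statutory ceiling per violation around the time of the 2012 order, and the 100,000-user count is hypothetical.

\[ \$16{,}000 \times \underbrace{1}_{\text{single failure to audit}} = \$16{,}000 \quad \text{(five figures)} \]

\[ \$16{,}000 \times \underbrace{100{,}000}_{\text{misled users}} = \$1{,}600{,}000{,}000 \quad \text{(ten figures)} \]

The entire difference between a rounding error and an existential fine thus turns on whether a violation is counted once or once per affected user.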
Of course, it would be economically rational for Facebook to litigate against even a seven-figure penalty, which raises yet another FTC pathology: the FTC is a risk-averse litigator. It prefers not to litigate because cases soak up resources and, if the agency loses, bad precedent can harm all of its efforts. Thus, the three theories for enforcement must be very strong for the FTC to risk bringing a case.
Facebook’s reputation for scorched-earth litigation also adds to the FTC’s caution. Consider that in a European proceeding, Facebook sought dismissal of a case because the court failed to follow formal Dutch language requirements. How? The court used the English words “browser” and “cookie” instead of the Dutch “internetsnuffelaar” and “koekje.”
Deterring Facebook
Despite these challenges, I am optimistic about the FTC’s renewed pursuit of Facebook. The investigation is likely to uncover new, unrelated wrongdoing that will give Facebook strong incentives to agree to broader terms and even pay penalties. But will this deter Facebook from further data depredations? To deter Facebook, the FTC needs to do two things: first, understand and tailor interventions to fit Facebook’s psychology; and second, impose personal liability on Facebook’s leaders.
Turning first to Facebook’s psychology: in a recent essay, former FTC director of consumer protection and Georgetown Law professor David Vladeck drew a distinction between “clueless” and “venal” respondents. Vladeck implied that Facebook is venal and suggested a series of reforms to rein the company in. His critique, though insightful and trenchant, was misplaced.
Facebook is a founder-controlled company. Its leader is not venal; he is ideological. He simply thinks that privacy is irrelevant in light of the benefits, the powerful learnings at hand because of information “flow.” Yuval Noah Harari identifies this ideology as “dataism,” but to understand its full consequences, one needs to venture into fiction, in books such as Joshua Cohen’s Joycean feat, Book of Numbers. Through those lenses, one sees that to a dataist, information flow and sharing are the categorical imperative. They trump Enlightenment values surrounding personal autonomy and privacy.
This raises the question: how does one deter a founder who simply does not believe in privacy, one who posts an announcement of his wife’s miscarriages online with paeans to an “open and connected world”? The answer is that the FTC has to either get Zuckerberg to sublimate privacy values or craft an intervention that causes him to lose control over his company.
Second, the leaders of startup companies frequently face “individual liability” under the FTC Act because, in small organizations, executives control so many decisions. If an executive directly participates in a deceptive act, such as approving a fraudulent ad, the executive can be named individually in a suit. Large companies rarely face such liability, but there is good reason to believe that the FTC can impose individual liability on Zuckerberg and other high-level Facebook executives. Consider Facebook executive Andrew Bosworth, who, in an internal memo, wrote that “connecting” people was a categorical imperative justifying “[a]ll the questionable contact importing practices. All the subtle language that helps people stay searchable by friends…”
The FTC could see such exhortations as direct participation in deceptive acts and impose terrifying levels of personal liability on such executives. The deterrent effect would be both specific and general, and it would deny Zuckerberg the ability to continue aggressive privacy invasions, as his lieutenants would fear having an FTC order attach to them personally for 20 years.
Conclusion
Our society has awakened to the reality that privacy isn’t just about advertisements. It is about the quality of our information environment, with knock-on effects on autonomy and our republic itself. As a result, the stakes have increased for Facebook and for the institutions that police it. I hope these institutions can rise to the challenge, in part because keeping these conflicts within the scope of consumer law has benefits for civil society and expressive freedom. The FTC can deter Facebook’s data depredations by crafting interventions focused on its leadership, because of those leaders’ specific efforts to deny individuals privacy.
Chris Hoofnagle holds dual appointments as adjunct professor at the Berkeley School of Law and the School of Information and is the author of Federal Trade Commission Privacy Law and Policy.
Suggested citation: Chris Hoofnagle, Facebook in the Spotlight: Dataism vs. Personal Autonomy, JURIST – Academic Commentary, Apr. 20, 2018, http://jurist.org/forum/2018/04/chris-hoofnagle-facebook-dataism.php.
This article was prepared for publication by Kelly Cullen, JURIST’s Managing Editor. Please direct any questions or comments to him at commentary@jurist.org.