Privacy Harms: Don’t Forget That Not Everyone Agrees About What’s Creepy
Ed note: I wrote this in 2018 and published it to an earlier version of this blog. The content is still relevant, so I'm reposting it here.
tl;dr: Every Product Counsel has her own perspective on what constitutes a concerning or creepy privacy practice. That’s fine, but failing to take into account that your users might not see the world the same way will lead you to make mistakes.
Previously, I wrote about two different types of privacy harm, extrinsic (aka real-world) and intrinsic (aka feelings), and the fact that not everyone agrees that both those types of harm matter:
Extrinsic harm occurs when a privacy violation causes some real-world effect. A good example is a data breach that results in identity theft or third parties forming a negative opinion about the breached person (which likely happened as a result of the Ashley Madison breach).
Intrinsic harm, on the other hand, is harm individuals experience when being observed. It is often described as the harm of feeling uncomfortable, uneasy, or creeped out. A good example is a data breach that doesn’t result in any adverse consequences or the first time you realized that Facebook had a two-hundred-plus-item profile of you.
While the legal scholarship uses the terms “extrinsic” and “intrinsic” harm, for simplicity and lack of pretension, I’ll call them “real-world” and “feelings” harm.
. . .
A significant number of people don’t believe that feelings harm matters.
In that post, I explored how those differing conceptions explain why privacy legislation has stalled. Here, I want to dig into how a Product Counsel with a real-world harm view of the world can mistakenly overlook how a feelings-harm person would react to a product decision, and vice versa.
Real-world harm folks forgetting about feelings-harm folks: Evernote’s privacy policy debacle.
Evernote, following the trend of so many other tech companies, decided that the next evolution of its product was to build AI features that would help its users become more productive. But, to train its AI algorithms, Evernote wanted to give its employees access to user content.
Evernote’s lawyers were thoughtful about granting this access. They updated Evernote’s privacy policy to notify users of this new access and implemented a bunch of safeguards (roughly sketched in code after the list below). Specifically, they would:
- limit access to a handful of employees;
- cut documents into snippets of text;
- pseudonymize the data by removing names and email addresses;
- only sample a tiny percentage of users’ data; and
- allow users who still objected to opt out.
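If you want to picture what safeguards like those might look like in practice, here’s a minimal sketch of a hypothetical review pipeline. Every function name is my own invention, and nothing about it reflects Evernote’s actual implementation; it just makes the list above concrete.

```python
import random
import re

# Purely illustrative -- hypothetical names, not Evernote's actual systems.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(note_text: str, known_names: list[str]) -> str:
    """Strip direct identifiers (email addresses, known display names) from a note."""
    scrubbed = EMAIL_RE.sub("[email]", note_text)
    for name in known_names:
        scrubbed = scrubbed.replace(name, "[name]")
    return scrubbed

def snippets(note_text: str, size: int = 200) -> list[str]:
    """Cut a note into short snippets so no reviewer ever sees a whole document."""
    return [note_text[i:i + size] for i in range(0, len(note_text), size)]

def sample_users_for_review(user_ids: list[str], opted_out: set[str], rate: float = 0.001) -> list[str]:
    """Sample only a tiny fraction of users, skipping anyone who opted out."""
    eligible = [u for u in user_ids if u not in opted_out]
    return random.sample(eligible, int(len(eligible) * rate))
```

The point isn’t the code; it’s that each step attacks a different downstream risk (re-identification, over-exposure of any single document, scale), which is exactly why a real-world harm lawyer would feel good about the plan.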
From a real-world harm viewpoint, Evernote did everything right—it mitigated almost all the risk of any real-world downstream consequences. But it didn’t adequately consider how people would feel.
And so, upon announcing its new privacy policy, Evernote became the subject of one of the biggest privacy-policy backlashes in recent memory.
And thus began the backtracking. Evernote’s CEO, Chris O’Neill, tried to assuage the concerns with a blog post. That didn’t work. One day later, Evernote retracted the proposed privacy policy. O’Neill stated that Evernote employees wouldn’t review user content without specific opt-in consent, and he issued a personal mea culpa:
The past few days have been deeply humbling for us, but I believe these steps will put us in the forefront of cloud privacy thinking. My promise to you is simple: collaboration and trust. You deserve nothing less.
In the aftermath, the harsh criticism of Evernote’s handling of this update stressed that Evernote didn’t consider how people would feel:
[Evernote] seemed very tone-deaf about what people use the product for and how they might really feel with the idea that a human would be looking at their notes . . . [it’s] shocking that they didn’t know that.
I am not so shocked.
Feelings-harm folks forgetting about real-world harm folks: The Los Angeles City Attorney’s lawsuit against The Weather Channel.
In 2019, the LA City Attorney sued the Weather Channel for the location-tracking and sharing practices of its app. This suit followed on the heels of a New York Times story I’ve written about previously. Essentially, the app allegedly tracks its millions of users’ daily activities and sells that data so that third parties can serve ads to them.
This is one of those privacy stories that, even without any immediate real-world harm, makes most people think, “wait, what info are they collecting?!” That’s because it’s so easy to imagine how this information could be misused. Case in point: hot on the heels of the New York Times story came a story about bounty hunters getting live location data on their targets. Then there’s the concern about this data floating around in the world such that anyone (including the New York Times!) can get ahold of it. Think of the potential for embarrassment, like the self-identifying vegans who’ve made surreptitious midnight trips to In-N-Out Burger.
Yet, in its complaint, the LA City Attorney, to the extent it alleged harm, didn’t mention any of that. Instead, it only talked about harm in a way that would resonate with a feelings-harm person (emphasis added):
For years, TWC has deceptively used its Weather Channel App to amass its users’ private, personal geolocation data . . . . TWC has then profited from that data, using it and monetizing it *for purposes entirely unrelated to weather or the Weather Channel App*.
To a real-world harm person, this isn’t convincing or outrageous. In fact, it’s a good thing—the Weather Channel created economic value without harming anyone.
Here’s what I think happened: the harm the complaint-drafters mentioned—that the Weather Channel profited by using data “for purposes entirely unrelated to weather or the Weather Channel App”—was sufficiently egregious (i.e., it makes us uneasy!!!) that they didn’t feel they needed to elaborate on why this collection of location data is particularly objectionable. That is, they didn’t think they needed to mention the stalkers, the bounty hunters, and the vegans.
But complaints like that have failed over and over again in the courts because judges, for a bunch of reasons, tend to be unsympathetic to the feelings-harm view of the world. Here are some representative cases from the data-breach litigation world:
Reilly v. Ceridian Corp., 664 F.3d 38, 40, 43 (3d Cir. 2011) (finding that increased risk of identity theft is too speculative a harm in a case involving the theft of personal data); Peters v. St. Joseph Servs. Corp., 74 F. Supp. 3d 847, 849–50, 854–55 (S.D. Tex. 2015) (same); Storm v. Paytime, Inc., 90 F. Supp. 3d 359, 366 (M.D. Pa. 2015) (same); In re Sci. Applications Int’l Corp. (SAIC) Backup Tape Data Theft Litig., 45 F. Supp. 3d 14, 25, 28 (D.D.C. 2014) (same); Polanco v. Omnicell, Inc., 988 F. Supp. 2d 451, 470–71 (D.N.J. 2013) (same).
Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737, 739 n.3 (2018).
Given that body of law, the LA City Attorney could score, at the very least, some major rhetorical and atmospheric litigation points by stressing the fact that this location-data collection, unlike many other data collection practices, could lead to some nasty real-world results. Otherwise, if they pull a judge who only accounts for real-world harm, they might be out of luck.