In the face of mass digital data harvesting and manipulation, effective data privacy protection is imperative. In Data as Likeness, Professor Zahra Takhshid offers new legal tools to address this need by urging us to reconceptualize one of the common law privacy torts, namely, the tort of appropriation of name or likeness. Her contribution, however, is not limited to reconceptualizing the appropriation tort. She also offers valuable insights into how to secure Article III standing for data privacy harms.
Takhshid’s reconceptualization is built on the insight that “[o]ur digital persona or likeness is our personal data.” Thus, appropriation of our data is an appropriation of our likeness, worthy of compensation through tort law. Takhshid’s reconceptualization turns the appropriation tort into a means to hold Big Tech and others accountable for their ubiquitous collection and transmission of personally identifiable data, which, according to Takhshid, constitute wrongful exploitation of the individual. This approach would also treat deepfake creation, geolocation data collection, and the deployment of facial-recognition technology as exploitations of digital persona.
Takhshid’s approach is built on a nuanced understanding of the development of the privacy torts and identifies their shortcomings in remedying some types of privacy harms stemming from data collection. The tort of appropriation of likeness evolved along two paths—one focused on remedying dignitary harms stemming from unauthorized exploitation of one’s likeness, and one focused on protecting proprietary interests in one’s name or likeness. Takhshid shows that both paths converge upon the protection of identity, and she extrapolates from legal cases involving “look-alikes” and “sound-alikes” the concept of protecting an individual’s “persona.” One’s persona, she argues, includes one’s digital persona, and the unauthorized collection of personal data is therefore a tortious wrong.
According to Takhshid, this adaptation is a necessary step in the appropriation tort’s evolution: “For the appropriation tort to be responsive to the modern technologies and their potential tortious dignitary violations, the privacy law of torts should extend its protection to our personal data, our digital persona.” Takhshid argues that this extension of the tort is justified on both theoretical and practical grounds. She correctly recognizes that one of the hallmarks of the privacy torts is their adaptability to new harms resulting from new technology. Here, she identifies as dignitary the harm of having a marketer or a generative AI developer use one’s personal data.
The claim that data privacy harms of the sort she describes are dignitary harms is non-obvious. It seems quite clear that the appropriation tort may be used without any significant modification to deter the monetization of a celebrity’s persona using AI tools, and the same should be true of uses of the personas of private individuals. Yet those harms seem different in kind from some of the other types of data exploitation harms Takhshid describes, such as the unauthorized use of aggregate personal data to train AI, or the collection of composite physiological data from virtual reality headsets. These appropriations may be exploitative in the aggregate, but they do not compromise the individual victims’ abilities to control their images or identities to the same degree as the creation of a digital “look-alike” or “sound-alike.”
Takhshid contends, however, that the individual’s loss of control over personal data through unauthorized use is a cognizable harm worthy of compensation. This wrong, she argues, deserves a remedy. As she explains, “the threshold for what counts as our digital identity and personal likeness is ‘Personally Identifiable Information,’” defined as “any data that is identified or identifiable to a specific living individual.” She further argues that data collection may be unauthorized despite the individual’s assent to a vague privacy policy.
Takhshid’s redesign of the appropriation tort aims not merely to address new types of data privacy wrongs, but also to enable plaintiffs to overcome constitutional barriers to suit rooted in Article III’s standing requirement. Indeed, this may be the most important contribution of the article. Takhshid employs tort theory to help plaintiffs seeking to recover in federal courts establish standing.
In order to have Article III standing, a “plaintiff must have suffered an ‘injury in fact’—an invasion of a legally protected interest which is . . . concrete and particularized, and . . . ‘actual or imminent, not “conjectural” or “hypothetical.”’” In addition, the injury must be fairly traceable to the defendant’s conduct, and it must be likely that the injury will be redressed by a favorable court ruling. Claims based on data privacy harms have sometimes failed for lack of standing, based on the notion that the harms are merely intangible, future harms.
Takhshid centers her analysis on the Supreme Court’s decision in TransUnion LLC v. Ramirez, which involved a class action suit brought pursuant to the Fair Credit Reporting Act. In that case, the credit reporting agency TransUnion erroneously compiled data about individuals linking them to a government terrorist alert; however, TransUnion sent the erroneous information to creditors in only a fraction of the cases. The Court held that the class members whose claims were based solely on erroneous compilation of their data lacked standing. As Justice Kavanaugh wrote: “Central to assessing concreteness is whether the asserted harm has a ‘close relationship’ to a harm traditionally recognized as providing a basis for a lawsuit in American courts—such as physical harm, monetary harm, or various intangible harms including (as relevant here) reputational harm.”
As Takhshid recognizes, the TransUnion holding has implications for plaintiffs suing for statutory privacy violations, making it harder to hold the exploiters of data accountable. Thus, much of her article is devoted to identifying data exploitation as a concrete harm sufficient to establish standing. As she writes, “recognizing data protection through the lens of tort law can help many plaintiffs overcome this hurdle of proving standing for many personal data-related privacy violations.”
To demonstrate the feasibility of her approach, she refutes First Amendment objections by showing how the Supreme Court and lower courts have managed to preserve a sphere of action for the appropriation tort. They have done so in part by carving out exceptions to liability for uses of individual images or likenesses deployed in reporting on newsworthy topics. Although the nuances of this topic could arguably occupy an article unto itself, she provides a sensible argument for how states can regulate data exploitation while preserving a sphere for uses of public information and protection of public discourse.
Data as Likeness is a timely and thought-provoking contribution to an area of law that will only grow more important as technology becomes even more embedded in our lives. Takhshid’s article makes a cogent argument for treating personally identifiable information as a form of persona whose appropriation can constitute a cognizable harm. Like all good articles, it raises yet more questions, and I hope she will address them in future work. Most pressing, from my perspective, is whether mere data collection – as opposed to collection and transmission – should be treated as an appropriation of one’s online persona. Although defendants’ conduct may be wrongful and should be deterred, my experience with the weaponization of defamation law makes me leery of compensating plaintiffs without proof of harm. The proof need not be stringent, but proof requirements help ground even dignitary harm torts such as appropriation in real wrongs.