OpenAI’s Sora App Raises Legal and Ethical Concerns with Deepfake Videos of the Deceased
OpenAI’s recently launched video app, Sora 2, has quickly gained popularity by allowing users to create highly realistic, short deepfake videos featuring historical figures and deceased celebrities. The app, available by invitation in the US and Canada, reached one million downloads within five days, surpassing ChatGPT’s initial uptake. Users can generate 10-second clips by typing prompts, producing content that ranges from playful to deeply controversial. Unlike other generative AI tools, Sora permits the use of deceased individuals’ likenesses without requiring consent, while living persons must grant permission. This exception for "historical figures" has led to a flood of videos depicting figures like Karl Marx, Martin Luther King Jr., and Princess Diana in often irreverent or offensive scenarios.
Family members of those portrayed have expressed distress and outrage. For example, Ilyasah Shabazz, daughter of Malcolm X, condemned the disrespectful use of her father’s image, and Zelda Williams, daughter of Robin Williams, urged users to stop creating AI videos of her late father. In response to complaints, OpenAI paused the generation of videos featuring Martin Luther King Jr. and is working to strengthen content guardrails. However, no similar public actions have been taken for other figures. The app also includes depictions of more recently deceased celebrities, such as Kobe Bryant and Amy Winehouse, though a cutoff appears to exclude those who died within the last two years.
Legal experts highlight the murky legal landscape surrounding AI-generated deepfakes of the dead. Living individuals are protected under libel and right-of-publicity laws, which require consent for commercial use of a person’s likeness, but the deceased generally lack such protections except in a few states, including California, New York, and Tennessee. OpenAI’s liability is further complicated by Section 230 of the Communications Decency Act, which may shield the company from responsibility for user-generated content. Yet critics argue that by promoting videos of historical figures on its homepage, OpenAI may be actively encouraging this controversial content.
The ethical implications are significant. Experts warn that such AI-generated portrayals risk distorting public memory and disrespecting legacies. The app’s algorithm tends to reward shock value, leading to content that trivializes or mocks revered figures. While OpenAI currently treats Sora as an entertainment platform, the rise of AI influencers monetizing these videos could trigger legal challenges from estates seeking to protect their loved ones’ images.
In response to backlash, OpenAI announced it will allow representatives of recently deceased public figures to request blocking of their likenesses on Sora, though details remain vague. The company also shifted to an opt-in model for copyright holders following infringement claims. Legal scholars predict ongoing challenges and a "Whac-A-Mole" approach to content moderation until clearer federal regulations and court rulings emerge. Ultimately, the Sora controversy underscores a broader societal dilemma: who controls our digital likenesses in the age of synthetic media, and how can respect for the dead be balanced with technological innovation?