Digital Blackface Surges Amid Trump Era and AI Advances, Raising Concerns Over Racial Stereotyping
Essential brief
AI-generated videos and images exploiting Black stereotypes have proliferated on social media, fueled by political actors and technological advances. This resurgence of digital blackface recycles minstrel-era stereotypes and has increasingly become a vehicle for political disinformation and racial vilification.
In recent years, digital blackface—a form of online racial stereotyping where Black cultural expressions are appropriated or mimicked by non-Black individuals—has intensified, particularly with the rise of generative AI tools. Late last year, during a US government shutdown that disrupted SNAP benefits, viral TikTok videos featured AI-generated deepfakes of Black individuals portraying exaggerated and false narratives about food stamp misuse. Despite visible AI watermarks, many viewers accepted these videos as authentic, fueling racist commentary and misinformation. Conservative media outlets initially reported these deepfakes as genuine before issuing corrections.
Experts like UCLA professor Safiya Umoja Noble note a sharp increase in digital blackface content, which recycles longstanding racist and sexist stereotypes. The term, coined in 2006, describes the commodification of Black culture online, ranging from the use of African American Vernacular English to memes featuring Black celebrities. Baylor University’s Mia Moody highlights how non-Black users adopt Black avatars and vernacular to gain social capital, often detaching Black cultural expression from its original context.
AI technologies have further complicated this landscape. Companies such as Hume AI offer synthetic voices modeled on Black identities without compensating the original creators whose speech patterns are scraped from digital content. OpenAI’s text-to-video app Sora was used to create hyperrealistic but false videos, including controversial deepfakes of Martin Luther King Jr. engaging in inappropriate behavior, sparking ethical debates about "synthetic resurrection."
The Trump administration has also leveraged digital blackface tactics. The official White House X account posted a doctored image of Minnesota activist Nekima Levy Armstrong, and Trump’s Truth Social circulated racist imagery targeting the Obamas. These actions underscore how digital blackface has evolved into a tool for political disinformation and racial vilification.
Historically, digital blackface traces back to 19th-century minstrel shows, where white performers caricatured Black people with exaggerated features and behaviors. Though minstrelsy declined in the 20th century, its legacy persists in modern media and online culture.
Tech companies have taken some steps to address these issues. OpenAI, Google, and Midjourney have banned deepfakes of prominent Black figures like Martin Luther King Jr., and platforms like Instagram and TikTok attempt to remove viral digital blackface content. However, the sheer volume of online content and AI-generated media makes enforcement challenging.
Advocacy groups such as Black in AI and the Distributed AI Research Institute call for greater diversity in AI development and community involvement to mitigate bias. The AI Now Institute and Partnership on AI recommend mechanisms like data opt-outs to protect marginalized communities from exploitation.
Despite these efforts, the widespread use of digital blackface, especially by political actors, highlights its potential to perpetuate racial stereotypes and fuel harassment against Black users. As Noble observes, the current political climate in the US—with policies hostile to civil rights and marginalized groups—facilitates the manipulation of digital media to support discriminatory agendas.
Nonetheless, scholars like Moody remain cautiously optimistic that digital blackface will eventually lose its appeal as users move on from initial experimentation with AI technologies. The hope is that, like its analog predecessor, digital blackface will become recognized as outdated and unacceptable.