From TikTok deepfakes to White House smears, fake videos based on Black archetypes are running rampant, putting Black users at risk.

Late last year, when the U.S. government shutdown disrupted the SNAP food assistance program that low-income families rely on to buy groceries, videos on social media depicted the aftermath in frantic scenes. “I’ll be honest with you,” said a Black woman in a viral TikTok post, “I get over $2,500 a month in food stamps. I sell them, about $2,000 worth, for $1,200 to $1,500 in cash.” Another Black woman complained about taxpayers footing the bill for her seven children by seven different men, and a third had a meltdown after her food stamps were declined at a fast-food restaurant.

Visible watermarks identified some videos as AI-generated – apparently too faint for the racist commenters and opportunists who enthusiastically believed the frenzy was real. “There are people treating this like a gig, selling benefits, abusing the system,” complained conservative commentator Amir Odom. Fox News reported on the SNAP deepfakes as if they were authentic, before publishing a correction. Newsmax host Rob Schmitt said people were using SNAP “to get their nails done, get extensions and do their hair.” (Amid the outrage, one basic fact has gone largely unnoticed: white Americans make up about 37% of SNAP’s 42 million recipients.)

The fake videos are mere fragments in the growing mosaic of digital blackface, a pattern that has intensified over the past two years with the widespread accessibility of generative AI video tools. “There has been a massive acceleration,” says Safiya Umoja Noble, a professor of gender studies at UCLA and author of “Algorithms of Oppression,” a work that focuses on digital biases against Black women in particular. “Digital blackface videos are actually exploiting the same racist and sexist stereotypes and tropes that have been used for centuries.” The end result is a superficial veneer of Blackness devoid of cultural obligation or accountability – minstrelsy, in essence.

Coined in a 2006 academic article, the term “digital blackface” describes a form of commodification of Black culture repurposed for online expression by non-Black people. Examples are varied: posts in African American Vernacular English, the use of darker-skinned emojis, reaction memes featuring Beyoncé, Katt Williams and other Black style icons.

“Early research into digital blackface began with white gamers using Bitmojis of a different race and altering their vocabulary to represent themselves,” says Mia Moody, a journalism professor at Baylor University, whose forthcoming book, “Blackface Memes,” links the role of Black users in creating and spreading online trends to subsequent digital blackface. “That’s part of cultural appropriation, the acquisition of cultural capital. Maybe you’re a nerdy white guy, but if you wear a cool avatar of a Black guy with dreadlocks, people will respect you. Suddenly you’re interesting.”

As meme culture expanded into short-form video, Black expression has been increasingly dissociated from authorship, context, or consequences. Internet culture scholars say some content creators use AI-generated avatars modeled after familiar Black archetypes — the beauty influencer, the culture podcaster, the street interviewer — inserting them into feeds alongside real Black creators. Large language models scour digital spaces steeped in Black speech and humor, absorbing their tone and slang. Hume AI is one of many companies offering synthetic voices for podcasts and audiobooks, such as “Black woman with slight Louisiana accent” or “middle-aged African American man with a tone of hard-won wisdom.” In most cases, the creators whose speech is pulled from YouTube, podcasts, and social media receive no compensation, let alone know that their personalities shaped these models.

The SNAP reaction videos, however, represented a notable escalation in the popularization of digital blackface – less disguise and more naked stereotyping. Many of these videos were created with OpenAI’s Sora text-to-video app. As Sora grew in popularity in 2025, users exploited its hyperrealism to tarnish the image of Martin Luther King Jr., sparking an ethical debate about “synthetic resurrection.” Deepfakes showed him shoplifting, fighting Malcolm X and swearing during his “I Have a Dream” speech. Conservative influencers flooded social media with AI-generated hugs between King and Charlie Kirk, fusing their conflicting legacies and cultural martyrdom. Bernice King, MLK’s daughter and head of his Atlanta nonprofit, dismissed the fabricated videos as “foolishness.”

Inevitably, the Trump White House jumped on the bandwagon. In January, the official White House account on X published a manipulated photo of Minnesota activist Nekima Levy Armstrong, her skin darkened and her face contorted in tears, following her arrest at a peaceful anti-ICE demonstration. Earlier this month, Trump’s own Truth Social account posted an image depicting the Obamas as monkeys.

Blackface remains embedded deep in American mass media, even as it evolves at a rapid pace. Its roots go back to the minstrel revues of the early 19th century, when white performers applied facial paint made from burnt cork, with exaggerated white lips, to caricature Black features, enacting routines of laziness, buffoonery and hypersexuality attributed to Black people. Thomas D. Rice, a Manhattan performer, rose to fame in the 1830s playing a gangly scoundrel named Jim Crow – a name that quickly became synonymous with the racial segregation laws imposed in the American South until the Civil Rights Act of 1964.

At their peak, minstrel shows were the dominant form of American entertainment – reflected in newspaper cartoons and the wildly popular Amos ’n’ Andy radio show. After the Civil War, Black performers were largely forced to adopt elements of minstrelsy, at the expense of their individuality, just to gain a foothold on stage. “The objectives were, first, to earn money to help educate younger people and, second, to try to dispel the resentment that existed towards Black people,” explained Tom Fletcher, a minstrel and vaudeville performer for almost 70 years, who died in 1954.

Even as the minstrel show declined in the early 20th century, its toxic remnants persisted in American culture – from the drawling crows of Disney’s Dumbo, to Ted Danson’s infamous blackface appearance at a 1993 roast of Whoopi Goldberg, to the annual parade of white Halloween revelers in racist costumes. A decade ago, when the internet was still something of a black box, researchers such as Noble and MIT’s Joy Buolamwini were already warning about the racial biases baked into algorithms governing medical treatments, loan applications, hiring decisions and facial recognition. Now the problem is out in the open, spreading wider and deeper than any bad joke ever could.

Tech companies have struggled to stem the tide of digital blackface. Bowing to public backlash from King’s family and other prominent estates, OpenAI, Google and the AI image generator Midjourney have banned deepfakes of King and other American icons. In January 2025, Meta deleted two of its own AI characters in blackface – a retiree named Grandpa Brian and Liv, described as a “proud Black queer mother” and “truth teller” – after revelations that their development team lacked diversity sparked a firestorm of criticism. Instagram, TikTok and other platforms have made some attempts to remove viral digital blackface videos, with mixed results. Last summer, efforts to replicate Bigfoot Baddie – an AI avatar of a Black woman as a human-yeti hybrid with pink wigs, acrylic nails and bonnets, created with Google’s Veo AI – became a social media craze, with some users even creating how-to courses. The avatar is still circulating on social media.

Black in AI and the Distributed AI Research Institute (DAIR) are among the few affinity groups advocating for diversity and community participation in building AI models to combat programming bias. The AI Now Institute and the Partnership on AI have highlighted the risks of AI systems learning from the data of marginalized communities and noted that technology companies could provide mechanisms, such as data opt-outs, to help limit harmful or exploitative uses. But large-scale adoption has been extremely slow.

“YouTube alone has around 400 hours of content being uploaded per minute,” says Noble. “With generative AI, these technology companies can’t manage what goes into their systems. So they don’t manage it. Or they do the bare minimum required by the US government. But if you have an authoritarian regime in power, it can use their systems to facilitate propaganda.”

While the precise impact of AI-generated digital blackface is difficult to quantify, its use by the Trump administration highlights its potential as a powerful tool of official disinformation. The Obama post on Truth Social revived an insult that has circulated for years in dark corners of the internet, one that chimes with Trump’s ongoing efforts to tarnish the image of the former first family. (Trump denied direct responsibility and refused to apologize for the post, which was removed.) Meanwhile, the White House’s manipulated image of Armstrong, altered from a real photo taken by the Department of Homeland Security and posted on its official X account, was seen as a psychological operation orchestrated by a government that works closely with technology companies to track activists and others deemed enemies of the state.

Beyond spreading prejudice as news, digital blackface exposes Black users to a level of personalized abuse and harassment that harks back to the heyday of minstrel shows, when racists had complete freedom to express their prejudice unchallenged. And just as then, there appears to be little that can be done to curb the virulence. “We live in a United States with an openly anti-civil rights, anti-immigrant, anti-Black, anti-LGBTQ agenda, hostile to social assistance for low-income people,” says Noble. “Finding material to support this position is just a matter of the state distorting reality to suit its imperatives. And that is easily done when all the technology companies align themselves with the White House.”

Still, Moody remains hopeful that the current fascination with digital blackface will soon come to seem as antiquated and distasteful as the analog variant. After all, she has seen this movie before. “Right now, people are just experimenting with AI technology and having fun seeing how far they can take it,” she says. “When we get past this phase, we’ll see less of it. They’ll move on to something else. Or they’ll be applying for a job, and it’ll be embarrassing. Just look at history.”

Originally published by The Guardian on February 19, 2026

By André Lawrence

Source: https://www.ocafezinho.com/2026/02/19/racismo-digital-floresce-sob-trump-e-a-ia-o-estado-esta-distorcendo-a-realidade/
