Google just found out the hard way that "oops" doesn't quite cover it when your software starts blasting racial slurs to people's lock screens. On Tuesday, February 24, 2026, the search giant issued a "deeply sorry" apology after its news notification system sent out a push alert that didn't just reference the recent BAFTA controversy—it spelled out the N-word in full.
If you're wondering how a trillion-dollar company lets a hard "r" slip through its filters, you aren't alone. This wasn't a dark web hack. It was an automated system doing exactly what it was programmed to do, which is precisely why it’s so alarming.
The notification that should never have existed
The mess started following the BAFTA Film Awards on Sunday, February 22. During the ceremony, an attendee with Tourette syndrome, John Davidson, had an involuntary vocal tic while actors Michael B. Jordan and Delroy Lindo were on stage. Davidson, whose life inspired the BAFTA-winning film I Swear, has coprolalia—a condition affecting about 10% to 15% of people with Tourette’s that involves involuntary swearing or slurs.
While the incident itself was a complex moment involving disability and trauma, Google’s automated systems handled it with the grace of a sledgehammer. The notification invited users to "See more on [slur]," linking to a Hollywood Reporter article about the event.
Google’s defense? It wasn’t AI.
Blaming the filters instead of the bot
In a move that feels like classic corporate buck-passing, Google told outlets like Variety and The Guardian that this wasn't an "AI error." Instead, it blamed its "safety filters." According to Google, the system recognized a euphemism for the slur on several web pages—likely "the N-word"—and then, for reasons only a developer could love, "accidentally applied" the actual offensive term to the notification text.
Think about that. The system saw a censored version and decided the "correct" move was to uncensor it for the public.
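To see how a filter can end up doing the uncensoring itself, here is a purely hypothetical sketch of the failure mode Google's statement describes. None of these function names or data structures come from Google's actual system; "CANONICAL_SLUR" stands in for the real word. The bug is that text normalized for a blocklist check gets reused to build the user-facing message.

```python
# Hypothetical sketch of a euphemism-expanding safety filter.
# All names and logic are invented for illustration; they do not
# reflect Google's actual code.

# A normalization map expands euphemisms to a canonical term so one
# blocklist entry can catch every spelling.
EUPHEMISM_MAP = {
    "the n-word": "CANONICAL_SLUR",
    "n-word": "CANONICAL_SLUR",
}
BLOCKLIST = {"CANONICAL_SLUR"}

def normalize(text: str) -> str:
    """Expand euphemisms so the blocklist can match them."""
    out = text.lower()
    for euphemism, canonical in EUPHEMISM_MAP.items():
        out = out.replace(euphemism, canonical)
    return out

def build_notification(headline: str) -> str:
    normalized = normalize(headline)
    if any(term in normalized for term in BLOCKLIST):
        pass  # imagine a review path here that was never wired up
    # Bug: the notification is built from the NORMALIZED string, so
    # the expanded (uncensored) term leaks into the push alert.
    return f"See more on {normalized}"

print(build_notification("BAFTA incident involving the N-word"))
# → See more on bafta incident involving CANONICAL_SLUR
```

In this toy version, the filter "saw a censored version and decided the correct move was to uncensor it" simply because one string served two jobs: matching and rendering.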
This distinction between "AI" and "system error" is mostly marketing. Whether it’s a Large Language Model or a heuristic-based scraper, the result is the same: a machine made a decision that a human never would. It proves that despite years of "safety" updates, Google’s automated editorial voice is still capable of being incredibly toxic.
A weekend of institutional failure
Google isn't the only one with blood on its hands here. The entire BAFTA timeline is a masterclass in how to mishandle a sensitive situation.
- The BBC's Tape Delay: The BAFTAs aren't live. They air on a two-hour delay. Despite Warner Bros. (the studio behind Jordan and Lindo's film Sinners) reportedly asking for the slur to be edited out immediately, the BBC broadcast it anyway.
- The iPlayer Oversight: Even after the broadcast, the unedited version sat on the BBC iPlayer streaming service for hours.
- The Editorial Double Standard: While the BBC "missed" the slur, they somehow found the time to edit out a "Free Palestine" comment from director Akinola Davies Jr. and a joke about Donald Trump by host Alan Cumming.
When you look at Google’s push alert in that context, it feels less like a freak accident and more like the final link in a chain of institutional indifference.
Why "deeply sorry" isn't enough anymore
We’ve seen this movie before. In 2015, Google Photos famously labeled Black people as "gorillas." In 2020, its Vision AI was caught labeling a handheld thermometer held by a Black person as a "gun" while calling it an "electronic device" when held by a white person.
Every time, we get the same script. An apology. A promise to "work to prevent this from happening again." A quick patch to the safety filters.
But the "See more on" prompt reveals a deeper issue with how Google handles news. The company is obsessed with "summarizing" and "optimizing" content so you don't have to click through to the source. When you automate the delivery of news involving sensitive racial trauma, you're playing with fire. If your "safety filters" can be tricked into generating a slur because they saw a euphemism, then your safety filters are fundamentally broken.
What you should do now
If you're tired of your phone serving up automated vitriol, it's time to take control of your notification settings.
- Audit your Google News alerts: Go into the Google app settings and look at "Notifications." If you have "Trends" or "Top Stories" toggled on, you're at the mercy of whatever the scraper finds.
- Support direct journalism: The push alert linked to a real article by real humans who actually understood the context. Read the source, don't just trust the snippet.
- Demand transparency: Tech companies shouldn't be allowed to hide behind "system errors." We need to know exactly how a filter meant to block a word ended up inserting it.
Don't wait for the next apology. Adjust your settings today and stop letting an algorithm decide what "news" hits your lock screen.