The Ghost in the Press Release

Sarah sat in a glass-walled conference room in midtown Manhattan, staring at a cursor that refused to move. Outside, the city hummed with the frantic energy of a Tuesday morning, but inside, the air felt thick with a specific kind of dread. Sarah is a PR director for a mid-sized tech firm. Her job, stripped of the fancy titles, is to tell the truth in a way people actually want to hear. But lately, the truth has become a casualty of the very tools designed to help her speak.

Her company had just integrated a sophisticated generative model into their external communications workflow. The promise was efficiency. The reality was a soul-crushing sameness. Every draft the machine spat out sounded like a corporate brochure written by a robot trying to pass as a human—and failing. It used words like "innovative" and "synergy" with a frequency that felt like a physical assault on the English language.

The AI PR problem isn't about the technology being bad. It’s about the technology being too "perfectly" mediocre.

When we talk about artificial intelligence in public relations, we usually focus on the wrong things. We worry about deepfakes or the loss of entry-level jobs. Those are valid fears, but they miss the quiet erosion occurring in our daily interactions. We are currently witnessing the Great Flattening of human voice.

The Mirror of the Uncanny Valley

In 1970, roboticist Masahiro Mori coined the term "uncanny valley." It describes the revulsion humans feel when a robot looks almost, but not quite, like a human. We are currently experiencing the linguistic version of that valley.

When Sarah sends an email to a journalist that was clearly drafted by a large language model, the recipient doesn't just ignore it. They feel a subtle, subconscious prickle of resentment. It’s the feeling of being talked at by a mathematical probability engine rather than talked with by a person.

The data confirms this friction. Recent studies in consumer psychology suggest that when people perceive a message as "automated," their trust in the content drops by nearly 40 percent. It doesn't matter if the facts are 100 percent accurate. If the delivery feels synthetic, the brain flags it as "junk mail for the soul."

Consider a hypothetical scenario: A company suffers a minor data breach.
The AI-generated response: "We prioritize user privacy and are implementing robust measures to ensure the security of our infrastructure."
The human response: "We messed up. We found a hole in the fence, we’ve patched it, and we’re staying up all night to make sure your data is where it belongs."

The first one is a shield. The second one is a bridge.

The Math of Boredom

To understand why this is happening, we have to look at how these systems actually function. They are not "thinking." They are predicting the next most likely word based on a massive corpus of existing text. If the internet is 90 percent corporate jargon and filler, the AI will produce 90 percent corporate jargon and filler.

It is a feedback loop of dullness.

When every brand uses the same models to write their "About Us" pages, every brand starts to sound like the same faceless entity. We are losing the edges. The quirks, the regional dialects, the weird metaphors that don't quite make sense but somehow land perfectly—these are the things that make us pay attention.

I remember a campaign from a small coffee roaster in Oregon. They didn't have a high-tech PR firm. They had a founder who wrote a weekly newsletter about how he accidentally dropped a pallet of beans and discovered a new roasting profile. It was messy. It was funny. It was human. No AI would ever suggest writing about a mistake as a way to build a brand, because the "statistically probable" path to success is to project perfection.

But perfection is boring.

The Crisis of Authenticity

We live in an era where "authentic" was literally the word of the year for major dictionaries. Why? Because it’s the one thing we can't manufacture.

The invisible stakes of the AI PR problem are found in our dwindling attention spans. We have developed a "synthetic scent" detector. Like a dog smelling fear, we can sense when a LinkedIn post or a news article has been sanitized by a model. We skim. We scroll. We delete.

If everyone is using the same "efficient" tools to create content, the volume of content increases while the value of each individual piece plummets toward zero. It is a race to the bottom of the feed.

Sarah eventually closed her laptop. She realized she couldn't fix the draft by editing it. She had to start over. She thought about the actual engineers who built the product she was supposed to be announcing. She thought about the late nights they spent eating cold pizza and arguing over a specific line of code.

She wrote about the pizza.

She wrote about the argument.

She wrote about why the code actually mattered to a person sitting at home trying to manage their finances.

The Cost of Convenience

The temptation to hit "generate" is powerful. It saves hours. It eliminates the pain of the blank page. But convenience has a high price tag: the loss of influence.

Influence is built on a foundation of shared experience. When an AI writes a speech for a CEO, it can't draw on the memory of the company's first $1,000 sale. It can't recall the specific look on a customer's face when a product changed their life. It can only mimic the idea of those things.

We are currently flooded with "content," but we are starving for "context."

The industry is at a crossroads. We can choose to use these tools as a basic scaffolding—a way to organize thoughts or check grammar—or we can let them become the voice itself. If we choose the latter, we are essentially announcing that we have nothing original to say.

The Survival of the Weird

The path out of this mess isn't to ban the technology. That's impossible and, frankly, counterproductive. The path out is to become more aggressively human.

In a world of infinite, free, perfect text, the "imperfect" becomes the premium product. The handwritten note, the raw video, the opinion that is slightly controversial or deeply personal—these are the assets that will hold value.

Sarah’s "pizza and code" press release went out the next morning. It didn't get picked up by every major outlet, but it did get a personal reply from a senior editor at a major tech publication.

"Finally," the editor wrote. "A real person wrote this. Let’s talk."

We are currently building the architecture of our future communications. If we build it entirely out of the smooth, gray concrete of algorithmic probability, we shouldn't be surprised when we find ourselves living in a city where every building looks the same and no one wants to walk the streets.

The ghost in the press release isn't the AI. It's us. We are the ones haunting our own words, hiding behind the safety of the "suggested edit" because we are afraid to be seen as flawed.

But the flaw is the feature.

The crack in the voice is how the listener knows the singer is real.

If you want to be heard in a room where everyone is shouting with a megaphone, sometimes the most effective thing you can do is whisper.

The cursor on Sarah's screen is still blinking, but she isn't afraid of it anymore. She knows that as long as she has a story that hurts, or heals, or makes someone laugh, she has something the most powerful server farm in the world can't touch.

She has a pulse.

Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.