Thought Leaders
Don’t Blame AI for PR’s Credibility Problem

A recent Unite.ai piece examined how AI has transformed PR research – making it faster to collect data, spot trends, and produce media-ready findings, but also harder to guarantee accuracy and trust. That observation captures a real tension in the industry, and it deserves a deeper look. The problem isn’t AI itself; it’s how easily speed can outpace judgment.
AI has certainly made PR faster. But as we know from driving, faster isn’t always the smart way forward.
The technology has condensed what used to be a careful, step-by-step process – designing surveys, cleaning data, validating sources – into something that can happen almost instantly. That compression saves time, but it also removes the natural pauses that once gave us space to double-check and challenge what we found. Without those pauses, accuracy becomes easier to miss. The real risk isn’t that AI will break PR. It’s that we’ll do it ourselves by mistaking acceleration for progress.
The credibility gap in PR isn’t AI’s fault – at least not directly. The gap comes from how quickly AI allows us to move. Every time we publish without verification or treat “faster” as synonymous with “better,” we erode the trust that makes our work matter. Credibility is what gives our work as PR pros weight – when we earn it. Preserving credibility means slowing down enough to question what we publish and making verification part of the process, not an afterthought.
Slow down to speed up
AI has made it effortless to move from an idea to a dataset in record time. What once took days now takes hours – and that acceleration has quietly become an industry reflex. But speed gives us volume, not validity. Journalists don’t care how fast we deliver data; they care whether it holds up. I’ve seen AI tools produce impressive summaries across dozens of articles, but I’ve also watched them invent statistics that sounded plausible but had no real source.
Studies reinforce the need for caution. A JMIR study found that large language models hallucinated – producing false or unverified information – in about 40 percent of GPT-3.5 and 29 percent of GPT-4 outputs, even on fact-based tasks. Likewise, a NewsGuard audit showed AI systems spreading false or misleading claims in roughly one-third of news-related responses. Both findings highlight a simple truth: speed amplifies risk when verification doesn’t keep pace.
That’s why taking extra time to verify isn’t a delay; it’s an investment in credibility. A day spent confirming data, refining context, or pressure-testing the narrative often reveals insights we’d otherwise miss. It can mean the difference between a headline that fades and a story that drives real conversation. Slowing down isn’t about resisting technology. It’s about keeping the human judgment that turns information into something audiences can actually trust.
Keep humans in the loop
AI is great at producing results. But it’s not so great at knowing whether those results make any sense. That’s the core problem. Models can generate survey responses, summarize thousands of articles, and even synthesize insights that look airtight on paper. But AI models don’t understand context, intent, or consequence. A human can.
That mismatch is well documented in AI ethics and reliability discourse. The “hallucination” phenomenon is often traced to how LLMs learn patterns from training data rather than from first principles, which means they can confidently assert things with no grounding. In the PR domain, the risk is especially acute: AI outputs may reflect biases or frame claims in ways that favor narratives rather than facts.
It’s easy to see how one errant “fact” can spiral out of control. Imagine an AI-generated data point making its way into a pitch deck; a percentage that sounds right and supports the story. The client loves it. A reporter quotes it. Then someone checks the source and realizes it was never real. Suddenly, what was intended to position a brand as thoughtful becomes a credibility firestorm.
So “keeping humans in the loop” can’t just be a line in a PowerPoint slide – it has to be how the work actually gets done. Editors, analysts, and domain experts need to be there to ask the uncomfortable questions that make the end product trustworthy. They can catch bias, flag weak framing, and make sure what we put out reflects reality. In other words: AI can move fast, but it still needs a driver who knows when to tap the brakes. Without that judgment, we’re not improving the process; we’re just automating mistakes.
Train for judgment
As AI reshapes the work, the way we train has to change with it. Most comms professionals today are well past the point of learning how to write better prompts. The skill we all need now is judgment – knowing when to trust the output, when to question it, and when to throw it out entirely.
When I coach younger PR pros, I emphasize that AI can write ten versions of a pitch in seconds. Their job isn’t to pick the flashiest one; it’s to find the version that actually sounds like their client, and then make it stronger. That might mean tightening the argument, grounding it in real data, or adding the voice and tone that makes it credible. An AI model can draft copy, but our judgment turns it into communication that’s worth reading.
This shift is already happening. Some agencies are shifting from “prompt engineering” to “credibility editing,” building habits around checking claims, validating sources, and aligning messaging with brand voice. Exercises now include asking: Would I say this to a reporter? Would I put my name on it?
Those simple questions build the reflexes that protect both clients and reputations. And that’s the real goal of AI in PR. Not faster copy, but sharper judgment. Training for judgment raises the standard of thinking and strengthens the trust that makes speed sustainable.
Measure trust, not turnaround
PR pros typically measure performance through metrics like delivery speed, coverage volume, and cost per placement. But in an AI-driven industry, those metrics don’t tell the full story. Output is easy to quantify; credibility isn’t. And yet, that’s what clients and journalists are weighing more heavily than ever.
That difference between quantity and credibility shows up in the data. In one measurement study, human sentiment analysis reached 85 percent accuracy, compared with 59 percent for AI-based methods – a gap that quantifies the role of critical review. It’s not that humans work faster, but that they interpret context, and that’s the same instinct clients trust when they evaluate credibility. If we can measure that difference in accuracy, we can also measure the value of human oversight itself.
The new ROI should measure what actually sustains relationships: trustworthiness, verification rates, and how long any earned coverage continues to drive engagement. Increasingly, clients aren’t asking, “Can we publish this today?” but “Can we stand on this?” Speed matters, but accuracy and confidence are what last.
AI gives us a chance to do both: move faster and think deeper. The real value isn’t in how quickly AI turns out content, but in how it helps us make smarter, more defensible decisions. The work that lasts won’t be the fastest; it’ll be the work people trust. Teams that build that trust into how they measure success will own the future.
A credibility advantage
The credibility crisis in PR isn’t inevitable. It’s a management problem, not a technological one, and the fix is within reach: slow down to verify, keep humans in the loop, train for judgment, and measure trust, not just speed. AI is changing how fast we work, but it can also remind us why we do the work – to inform with accuracy and integrity. The real opportunity now is cultural: to make credibility the metric that matters most.