What do you do when LLMs spread misinformation about your employer brand?

The new reputation risk

Imagine this: a candidate asks ChatGPT “What is the working atmosphere like at [your company]?” The answer contains an outdated Glassdoor review from 2019, a news report taken out of context, and a claim about a round of layoffs that never happened.

The candidate reads it, draws conclusions, and applies elsewhere. You don't even know this is happening.

Welcome to the age of AI reputation risk, where misinformation is spread not only by humans but also by algorithms.

Why this problem is growing

Research published in Nature Communications (2024) reveals a disturbing pattern: 50-90% of LLM-generated citations do not fully support the claims to which they are linked. AI systems “hallucinate”: they generate plausible-sounding but factually incorrect information.

For employer branding, this means:

  • Outdated information continues to circulate (reorganisations from years ago)
  • Negative reviews are disproportionately weighted
  • Factual inaccuracies are presented as facts
  • Context is lost (a critical article becomes a final judgement)

The limitations of correction

Here is the frustrating reality: you cannot call an LLM to request a correction. There is no “right of rectification” as with traditional media.

Moreover, research shows that human fact-checks are significantly more effective than AI-generated ones. Worse, AI fact-checking can backfire: when a model wrongly labels accurate information as false, it can reduce belief in that information.

Strategies that do work

1. Prevention through content dominance

The best defence is a good offence. Create so much accurate, recent, structured content that it crowds out outdated or inaccurate information (a page-freshness check is sketched after the list below).

  • Publish regular updates on your culture and employer brand
  • Make sure recent, positive content is easily findable and quotable
  • Update existing pages at least every six months
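
To operationalise the six-month rule, you can script a quick freshness check. This is a minimal sketch, assuming the requests package, hypothetical example.com URLs, and a server that sends a Last-Modified header; pages without the header are simply flagged for manual review:

    import datetime
    import requests
    from email.utils import parsedate_to_datetime

    # Hypothetical employer-branding pages to keep fresh.
    PAGES = [
        "https://www.example.com/careers",
        "https://www.example.com/about/culture",
    ]

    MAX_AGE = datetime.timedelta(days=182)  # roughly six months

    for url in PAGES:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        last_modified = resp.headers.get("Last-Modified")
        if last_modified is None:
            print(f"CHECK MANUALLY (no Last-Modified header): {url}")
            continue
        age = datetime.datetime.now(datetime.timezone.utc) - parsedate_to_datetime(last_modified)
        status = "STALE" if age > MAX_AGE else "fresh"
        print(f"{status}: {url} (last updated {age.days} days ago)")

Run it monthly and treat every “STALE” line as a content-update task.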

2. Understanding Retrieval-Augmented Generation (RAG)

Modern AI systems use RAG, retrieving up-to-date information from the web before generating answers. This means that fresh, well-structured content on your own domain directly affects AI answers.
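
To make the mechanism concrete, here is a deliberately simplified retrieve-then-generate sketch. The document set and keyword scoring are toy assumptions; production systems use web search and semantic ranking, but the principle is the same: whatever ranks highest becomes the raw material for the answer, which is why fresh, relevant content wins.

    import re
    from datetime import date

    # Toy corpus standing in for what a web retriever might surface.
    DOCUMENTS = [
        {"text": "Acme cut 10% of staff during the 2019 reorganisation.",
         "published": date(2019, 3, 1)},
        {"text": "Acme's 2025 culture report describes the working atmosphere: "
                 "hybrid work and a 4.2/5 employee rating.",
         "published": date(2025, 1, 15)},
    ]

    def tokens(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def score(doc, query):
        """Crude relevance: keyword overlap, boosted by recency."""
        overlap = len(tokens(query) & tokens(doc["text"]))
        recency = 1 / (1 + (date.today() - doc["published"]).days / 365)
        return overlap + recency

    query = "What is the working atmosphere like at Acme?"
    best = max(DOCUMENTS, key=lambda d: score(d, query))

    # The winning snippet is pasted into the model's prompt verbatim,
    # so whichever content ranks highest shapes the answer.
    prompt = f"Answer using this source:\n{best['text']}\n\nQuestion: {query}"
    print(prompt)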

3. Multi-platform consistency

AI systems triangulate information from multiple sources. Inconsistencies between your website, LinkedIn, Glassdoor and press releases create confusion, and confusion leads to inaccurate syntheses.
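
A cheap way to catch drift between channels is to diff the boilerplate descriptions you publish on each one. This minimal sketch uses only Python's standard library; the DESCRIPTIONS dict is a hypothetical stand-in for text you would actually pull from your website, LinkedIn, and press kit:

    from difflib import SequenceMatcher
    from itertools import combinations

    # Hypothetical company descriptions as published per channel.
    DESCRIPTIONS = {
        "website": "Acme employs 1,200 people across 5 European offices.",
        "linkedin": "Acme employs 1,200 people across five European offices.",
        "press_kit": "Acme employs 950 people across 4 offices.",
    }

    # Flag channel pairs whose descriptions have drifted apart.
    for a, b in combinations(DESCRIPTIONS, 2):
        ratio = SequenceMatcher(None, DESCRIPTIONS[a], DESCRIPTIONS[b]).ratio()
        if ratio < 0.9:
            print(f"Possible inconsistency between {a} and {b} (similarity {ratio:.2f})")

In this toy data the press kit's headcount disagrees with the other channels, which is exactly the kind of mismatch that pushes AI systems towards inaccurate syntheses.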

4. Proactive monitoring

What you don't measure, you can't manage. Set up a monthly audit (a scripted version follows the list below):

  • Test the same 10 questions in ChatGPT, Claude, and Perplexity
  • Document inaccuracies and their likely source
  • Prioritise correction based on impact
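
The audit loop itself is easy to script against the public APIs. This is a minimal sketch, assuming the openai and anthropic Python packages with API keys set in the environment; the model names and questions are illustrative, and Perplexity (which offers an OpenAI-compatible API) is omitted for brevity:

    import csv
    import datetime
    import anthropic
    from openai import OpenAI

    # Illustrative audit questions; replace with your own 10.
    QUESTIONS = [
        "What is the working atmosphere like at Acme?",
        "Has Acme had layoffs recently?",
    ]

    openai_client = OpenAI()               # reads OPENAI_API_KEY
    claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    def ask_openai(question):
        resp = openai_client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def ask_claude(question):
        resp = claude_client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model name
            max_tokens=500,
            messages=[{"role": "user", "content": question}],
        )
        return resp.content[0].text

    # Append this month's answers to a running log for later review.
    with open("llm_audit.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for q in QUESTIONS:
            for system, ask in [("openai", ask_openai), ("anthropic", ask_claude)]:
                writer.writerow([datetime.date.today(), system, q, ask(q)])

The resulting CSV gives you a dated trail of answers, so you can see whether your content work actually shifts what the models say.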

The escalation ladder

Severity | Example | Action
Low | Outdated but not harmful information | Content update on your own channels
Medium | Negative framing without context | Counterbalance with positive content
High | Factual inaccuracies | Direct correction at the source + new content
Critical | Legally relevant misinformation | Legal escalation + PR response

Practical steps

This week:

  • Conduct a “misinformation audit”: put the same 10 critical questions about your employer brand to 3 LLMs
  • Document any inaccuracy or outdated information

This month:

  • Identify the sources of problematic information
  • Create a content plan to counter them

This quarter:

  • Implement a structural monitoring cadence
  • Build a “rapid response” protocol for serious cases

The bottom line

In the AI era, reputation management can no longer be reactive. You can't wait for a crisis to escalate, because the crisis plays out in millions of individual AI conversations that you never see.

The employers who win are those who proactively feed their narrative with accurate, recent, structured content so that AI systems have no room to hallucinate.

Next article

In the next article, we zoom in on niche markets: how to become visible in AI answers for specific audiences such as developers, finance professionals, and healthcare workers, and why specificity is your greatest ally.


This article is part of a series on GEO and employer branding.

Sources:

  • Liu, N. et al., “Citation accuracy in large language models,” Nature Communications (2024)
  • MIT Media Lab, “Human vs AI Fact-Checking Effectiveness Study” (2024)
  • Gartner, “Managing AI-Generated Misinformation in Enterprise Communications” (2025)