Data quality and GenAI: The business risks of poor data quality for GenAI

Simon Case

Global Head of Data and AI

January 9, 2026


Businesses are now starting to gain real value from GenAI. It is being successfully applied to a growing number of valuable use cases, including:

  • Customer service – handling customer queries
  • Customer experience – recommending products and improving search
  • Process automation – for example, creating complex itineraries for holiday customers
  • Employee assistance – such as HR systems that handle questions about HR policy

In all of these examples, the AI or agent relies on high-quality data. For these use cases, data quality is critical: inaccurate or out-of-date data can have serious consequences.

How poor data quality limits AI impact

Poor data quality reduces the impact of your AI initiatives. In the most benign scenario, poor data simply means that AI fails to improve business processes, or even makes them worse. For example, you might use GenAI to generate better product descriptions. If the AI is not given accurate, current information about the products, it may produce an out-of-date description, or even hallucinate to fill the gaps in the data it has been given.
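One practical mitigation is to gate records on completeness and freshness before they ever reach the model. The sketch below illustrates the idea for product data; the field names, the 90-day freshness threshold, and the `is_fit_for_genai` function are all illustrative assumptions, not part of any specific product.

```python
from datetime import datetime, timedelta

# Illustrative quality gate: only complete, recently updated product
# records are allowed through to a GenAI description generator.
REQUIRED_FIELDS = ["sku", "name", "price", "description_source"]
MAX_AGE = timedelta(days=90)  # assumption: older records count as stale


def is_fit_for_genai(record: dict, now: datetime) -> bool:
    """Return True only if the record is complete and recently updated."""
    # Missing or empty fields invite the model to hallucinate gap-fillers.
    if any(not record.get(field) for field in REQUIRED_FIELDS):
        return False
    # Stale records lead to out-of-date descriptions.
    last_updated = record.get("last_updated")
    if last_updated is None or now - last_updated > MAX_AGE:
        return False
    return True
```

Checks like this are cheap to run in the pipeline that feeds the model, and they turn a silent quality problem into an explicit, measurable rejection rate.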

The commercial and reputational cost of poor data

There will almost always be financial impacts when data quality is poor. Supplying incorrect product information to a recommender engine, for instance, can result in customers being shown out-of-date products, leading to a reduction in basket size.

However, the impacts can be far more severe. Reputational damage is a major risk, particularly for customer-facing use cases. AI has reached the point where it can handle many customer queries, but only if it is using data that is accurate and up to date. Consider a chatbot that provides information about your products. If the documentation it relies on is incorrect, the chatbot will give poor advice. This is likely to result in negative reviews on trust sites and, inevitably, a drop in customer satisfaction.

Legal and security risks when data quality breaks down

In the worst cases, poor data quality can expose organisations to legal challenges. A well-known example is the Air Canada case, in which a chatbot erroneously offered a bereavement discount to a passenger in 2022. When the airline refused to honour the discount, the passenger took the airline to a tribunal and, in 2024, won. The source of the chatbot's inaccurate response was likely an outdated policy that had previously been published on Air Canada's website. At one time, the airline allowed customers who had already travelled to request a refund within 90 days of ticket issuance.

Finally, organisations must ensure that underlying data is properly secured. If the wrong data is made available to an AI system, it can become the mechanism through which a data breach occurs. In one example, WotNot, a maker of AI chatbots, exposed the data of over 3,000 customers, including identification documents and medical data. The breach was caused by a poorly secured cloud storage bucket, an issue that is straightforward to prevent. While WotNot appears to have avoided legal consequences so far, the reputational damage is clear, and in most jurisdictions the consequences of leaking personal data would be far more severe.

