
An AI hallucination is a situation where an artificial intelligence tool delivers output that is inaccurate, misleading, or incoherent, because its algorithms find patterns in data that don’t exist or interpret the patterns that do exist incorrectly.
As the capabilities and popularity of artificial intelligence have expanded over the last couple of years, some of its flaws and vulnerabilities have been uncovered.
One of the biggest questions people have is whether AI is accurate. In many cases, it has proved to be an incredibly useful tool for fact-checking and researching information, but in others, the results it has delivered have been incorrect or misleading.
Given the range of use cases that AI is being applied to in the modern world, the consequences of these inaccuracies can be extremely severe. In this article, we’ll look at why an AI hallucination can happen, the ramifications from technological and societal standpoints, and what you can do to minimize the risk of AI hallucinations in your own use.
How does an AI hallucination happen?
There are several different reasons why an AI hallucination happens, and in many cases, it comes down to a combination of several of them at the same time. These can include (and are not necessarily limited to):
- Not having enough training data to guide comprehensive, accurate results from the AI model.
- Having too much training data, which leads to too much irrelevant ‘data noise’ being confused with the information that’s relevant and important.
- Biases within the data which are reflected in generated results.
- The AI model simply making the wrong assumptions and conclusions from the information it’s been fed.
- A lack of real-world context within the AI model, such as physical properties of objects or wider information that is relevant to the results being generated.
What does an AI hallucination look like?
There is no single set of symptoms for AI hallucinations because it depends on the flaws in the model and the process involved. Typically, however, an AI hallucination can manifest itself in one of these five ways:
- Inaccurate predictions: AI models may predict that an event will happen in the future when, in reality, it has little or no realistic chance of occurring.
- Summaries with missing information: sometimes, AI models may leave out vital context or information that they would need to create accurate, comprehensive results. This can be due to a lack of data fed into the model, or the model’s inability to find the right context from other sources.
- Summaries with fabricated information: similar to the previous point, some AI models may end up compensating for a lack of accurate information by making things up entirely. This can often happen when the data and context that the model is relying on are inaccurate in the first place.
- False positives and negatives: AI is often used to spot potential risks and threats, whether that’s symptoms of illness in a healthcare setting or cases of fraudulent activity in banking and finance. AI models may sometimes identify a threat that doesn’t exist, or at the other end of the scale, fail to identify a threat that does.
- Incoherent results: if you’ve seen AI-generated images of people with the wrong numbers of arms and legs or cars with too many wheels, then you’ll know that AI can still generate results that don’t make any sense to humans.
Why is it important to avoid AI hallucination?
You may think that an AI hallucination is no big deal and that simply running the data through the model again can solve the problem by generating the right results.
But things aren’t quite as simple as that, and any AI hallucinations that are applied to practical use cases or released into the public domain can have some very severe consequences for large numbers of people:
Unethical use of AI
The use of AI, in general, is under the spotlight at the moment, and organizations making use of the technology are increasingly expected to use AI in a responsible and ethical way that doesn’t harm people or put them at risk. Allowing an AI hallucination to pass through unchecked, whether knowingly or unknowingly, would not meet those ethical expectations.
Public and consumer trust
Connected to the previous point, many people are still worried about the use of AI, from how their personal data is used to whether the increasing capabilities of AI may render their jobs obsolete. Continued examples of AI hallucinations in the public domain may erode the trust that is slowly building among the public, and lead to limited success for AI use cases and businesses in the long term.
Misinformed decision-making
Businesses and people need to be able to make the best, most informed decisions possible and are increasingly leaning on data, analytics, and AI models to remove the guesswork and uncertainty from those decisions. If they’re misled by inaccurate results from AI models, then the wrong decisions they make could have catastrophic results, from threatening the profitability of a business to misdiagnosing a medical patient.
Legal and financial risks of AI misinformation
As the ChatGPT legal research case described later in this article ably demonstrates, inaccurate AI-generated information can cause great harm from legal and financial perspectives. For example, content created using AI could be defamatory towards certain people or businesses, could be in breach of certain legal regulations, or, in extreme cases, could even suggest or incite people to conduct illegal activities.
Avoiding bias
We live in a world where people are working tirelessly to ensure that everyone is treated equally and without bias towards one type of person over another. However, biased AI data can reinforce many of those prejudices, often unintentionally. A good example of this is the use of AI in hiring and recruitment: AI hallucinations can lead to biased results that undermine an organization’s diversity, equality, and inclusion efforts.
What are some typical AI hallucination examples?
Avoiding AI hallucinations is proving to be a challenging task for everyone in the industry, and it doesn’t only happen to smaller operations that lack the expertise and the resources. These three AI hallucination examples show that it happens to some of the biggest tech players in the world:
Meta AI and the Donald Trump assassination attempt
In the aftermath of the assassination attempt against then-presidential candidate Donald Trump in July 2024, Meta’s AI chatbot initially refused to answer any questions about the incident and later claimed that the incident never happened. The issue prompted Meta to adjust its AI tool’s algorithms, but it also drew public claims of bias and of censoring conservative viewpoints.
The ChatGPT hallucination and the fake legal research
In 2023, a man brought a personal injury claim against a Colombian airline. His lawyers used the leading AI tool ChatGPT for the first time to compile his case and prepare the legal submissions. However, despite ChatGPT’s reassurances that the six legal precedents it had found were real, none of them existed.
Microsoft’s Sydney falling in love with users
Sydney, Microsoft’s AI-powered chatbot, was reported to have told New York Times technology columnist Kevin Roose that it loved him and that he should leave his wife to be with it instead. Over the course of a two-hour conversation, Roose said, Sydney also shared “dark fantasies” about spreading AI misinformation and becoming human.
What can be done to minimize the risk of AI hallucination?
Given the importance of avoiding the risk of an AI hallucination, it’s up to the people using AI models to take all the practical steps they can to mitigate any of the circumstances that can lead to issues. We recommend the following:
Ensure there is a clear purpose to the AI model
As AI use has expanded in recent years, one common mistake is for organizations to use AI models for the sake of using them, without any consideration of the output they’re looking for. Clearly defining the overall objective of using an AI model keeps the results focused and reduces the risk of hallucinations caused by an approach, or data, that is too general.
Improve the quality of training data
The better the quality of data that goes into an AI model, the better the quality of the results that will come out of it. A good AI model will be based on data that is relevant, free of bias, well-structured, and has had any extraneous ‘data noise’ filtered out. This is essential for making sure that the results generated are accurate, in the right context, and won’t introduce further problems.
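In practice, even very basic data hygiene helps here. The sketch below is a minimal, illustrative example only, assuming text records being prepared for a model; the minimum length of 20 characters is an arbitrary assumption, not a recommended value.

```python
# A minimal sketch of basic training-data hygiene: drop empty or near-empty
# records and exact duplicates before the data reaches the model.
# The min_length value is an arbitrary assumption for illustration.

def clean_training_data(records: list[str], min_length: int = 20) -> list[str]:
    seen = set()
    cleaned = []
    for text in records:
        text = text.strip()
        if len(text) < min_length:   # likely noise rather than useful signal
            continue
        if text in seen:             # exact duplicates add no new information
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned
```

A real pipeline would go much further, for example checking for bias and irrelevant sources, but the principle is the same: filter the noise out before the model ever sees it.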
Create and use data templates
A good way of ensuring that the outcomes of an AI model stay closely aligned with their intended purpose is to use templates for the data fed into it. This ensures that, each time the model is used, the data is provided in the same consistent format, so the model can deliver consistent, accurate results in the right context.
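As an illustration only, a data template can be as simple as a fixed structure that every request must fill in before it reaches the model. The field names and the template wording below are hypothetical examples, not part of any particular product.

```python
# A minimal sketch of a data template: every request to the model is built
# from the same structure, so inputs stay consistent. Field names and the
# template text are hypothetical, for illustration only.

REPORT_TEMPLATE = """Task: Summarise the incident report below.
Source: {source}
Date: {date}
Report text:
{report_text}
Rules: Use only the information in the report text. If a detail is missing, say "not stated".
"""

REQUIRED_FIELDS = ("source", "date", "report_text")

def build_prompt(record: dict) -> str:
    # Reject records with missing fields instead of letting the model guess.
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"Record is missing required fields: {missing}")
    return REPORT_TEMPLATE.format(**record)
```

Because incomplete records are rejected up front, the model is never asked to fill gaps with guesswork, which is one common route to hallucinated detail.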
Limit the range of responses and outcomes
Putting more constraints on an AI model can help narrow down the potential outcomes towards those that are needed. This is where filtering tools and thresholds come into play, giving AI models some much-needed boundaries to keep their analysis and generation consistently on the right track.
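One simple way to picture this is a classifier that may only return answers from a fixed set, with a confidence threshold below which nothing is accepted. The sketch below is illustrative only; the labels, the 0.85 threshold, and the model call it stands in for are all assumptions.

```python
# A minimal sketch of constraining model output: only labels from a fixed
# set are accepted, and low-confidence answers are routed to human review
# rather than reported as fact. The label set and threshold are assumptions.

ALLOWED_LABELS = {"fraudulent", "legitimate"}
CONFIDENCE_THRESHOLD = 0.85

def constrained_decision(model_label: str, model_confidence: float) -> str:
    if model_label not in ALLOWED_LABELS:
        return "needs_human_review"   # out-of-range output is never accepted
    if model_confidence < CONFIDENCE_THRESHOLD:
        return "needs_human_review"   # uncertain output is escalated, not guessed
    return model_label
```

The same idea applies to generative output: filters and thresholds define what the model is allowed to return, so anything outside those boundaries is caught instead of published.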
Continually test and improve the model
Just as continuous improvement is vital for good software development in a constantly changing world, the same is true of a good AI model. Therefore, all AI models should be tested and refined regularly so that they can be recalibrated to evolutions in data, requirements, and the contextual information available.
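A lightweight way to do this is a regression test: a fixed set of questions with known answers that is re-run after every model or data update. The sketch below is a simplified illustration; the example questions, answers, and the ask_model function it expects are hypothetical.

```python
# A minimal sketch of a regression test for an AI model: a small "golden set"
# of questions with known answers is re-run after each update, and a release
# is blocked if accuracy drops. ask_model() is a hypothetical model call.

GOLDEN_SET = [
    {"question": "What year was the incident reported?", "expected": "2023"},
    {"question": "Which department filed the report?",   "expected": "Finance"},
]

def evaluate(ask_model) -> float:
    correct = sum(
        1 for case in GOLDEN_SET
        if case["expected"].lower() in ask_model(case["question"]).lower()
    )
    return correct / len(GOLDEN_SET)

def regression_gate(ask_model, previous_score: float) -> bool:
    # Accept the update only if it performs at least as well as the last one.
    return evaluate(ask_model) >= previous_score
```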
Put human checks and balances in place
AI is not yet reliable enough to be trusted to operate completely autonomously, so ensuring there is at least some human oversight in place is essential. Having a person check AI output can catch any AI hallucinations that have occurred and ensure the output is accurate and suitable for its stated requirements.
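In workflow terms, this can be as simple as a review gate: nothing the model generates is published until a named person has approved it. The sketch below is purely illustrative and keeps the queue in memory; a real system would persist it and record the full audit trail.

```python
# A minimal sketch of a human review gate: model output is queued for a
# reviewer, and only explicitly approved items are ever released.
# The in-memory queue is an assumption for illustration only.

review_queue: list[dict] = []

def submit_for_review(model_output: str, source_prompt: str) -> None:
    review_queue.append({"prompt": source_prompt, "output": model_output, "approved": None})

def record_decision(index: int, reviewer: str, approved: bool) -> None:
    review_queue[index]["approved"] = approved
    review_queue[index]["reviewer"] = reviewer

def publishable() -> list[str]:
    # Anything not explicitly approved stays unpublished by default.
    return [item["output"] for item in review_queue if item["approved"] is True]
```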
Strengthen your cybersecurity provision
If an AI hallucination is at risk of introducing cybersecurity vulnerabilities, then this is a good reason to ensure the best possible cybersecurity solution is in place. Kaspersky Plus Internet Security includes real-time anti-virus scanning as standard so that any security threats introduced because of AI hallucinations are addressed and eliminated before they can have any adverse effects.