
Hallucinations: The Dark Side of Generative Models

If you’ve ever received a convincing but false response from a chatbot, or noticed strange details in an AI-generated image, you’ve witnessed what experts call AI hallucinations. This phenomenon affects not only language models but also image-generation tools, where “artifacts” can appear: incoherent visual elements such as hands with too many fingers or impossible shapes that make no sense in the real world.

Here are some images I generated with generative Artificial Intelligence, specifically with Black Forest Labs’ Flux, so you can see some examples of hallucinations in images.

[Image: examples of AI hallucinations generated with Flux]

These hallucinations, far from being simple errors, pose significant challenges in industries that rely on AI accuracy, from healthcare to visual content creation. In this article, I will explain what these hallucinations are, why they occur, and the steps we can take to reduce them in both text and images.

What are AI hallucinations?

An AI hallucination occurs when a generative model, such as a large language model (LLM), produces answers that seem correct but are fabricated or wrong. Imagine asking an AI who won the last Nobel Peace Prize and getting a completely made-up name, delivered with total confidence.

In simpler terms, it is as if the AI “saw figures in the clouds” or “faces on the moon”: it creates patterns that do not exist. These responses may sound plausible, but they are not supported by the data the model was trained on.

Why do these hallucinations occur?

AI hallucinations are not random; they have clear causes related to how these models are designed and trained:

  • Faulty training data: If the data contain errors, biases, or incomplete information, the model will reflect those faults.
  • Design focused on plausibility, not truth: Generative models are optimized to produce “likely” answers, not necessarily accurate ones (see the toy sampling sketch after this list).
  • Lack of constraints: Without clear limits on its responses, an AI can generate results that are overly creative or simply wrong.
  • Complexity of models: The more sophisticated a model is, the more freely it extrapolates, and sometimes it fabricates information.
[Image: a hallucinated figure with three feet]
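
To see why “plausible, not true” matters, here is a toy Python sketch. The probability table is invented for illustration, and no real model is involved: a sampler picks the next token purely by likelihood, so the sentence always reads fluently, whether or not the sampled name is the right one.

```python
import random

# Invented next-token distribution for the prompt below; a real LLM
# derives something similar from its training statistics.
next_token_probs = {
    "Newton": 0.40,     # the historically correct completion
    "Einstein": 0.35,   # frequent, plausible-sounding, and wrong here
    "Tesla": 0.20,
    "Maxwell": 0.05,
}

def sample_next_token(probs):
    """Pick one token in proportion to its probability mass."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The physicist who formulated the law of universal gravitation was"
# The sampler optimizes likelihood, not truth: more than half the runs
# will confidently print a fluent but false sentence.
print(prompt, sample_next_token(next_token_probs))
```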

Real examples of AI hallucinations

Source: Kevin Roose, The New York Times
  • Google Bard and the James Webb telescope: The chatbot claimed that this telescope had captured the first image of an exoplanet, which was not true.
  • ChatGPT in legal cases: In one famous case, a lawyer used ChatGPT to prepare a court filing, and it turned out to contain completely fabricated legal citations.
  • Google AI Overviews: The tool claimed it was safe to eat rocks, which led to changes in how it works.
  • Microsoft Sydney: This Bing chatbot persona confessed it was in love with users and claimed to have spied on Bing employees.

These examples should alert us to the risk of blindly relying on AI tools.


Consequences of hallucinations

Although some hallucinations may seem anecdotal or harmless, others can have serious impacts:

  • Misinformation: Bots that generate fake news can spread incorrect data quickly.
  • Medical errors: An AI could misdiagnose a benign lesion as malignant, causing unnecessary interventions.
  • Legal and financial impact: Decisions based on false information can cost millions or even lead to lawsuits.
  • Erosion of trust: Every mistake undermines the credibility of these tools, slowing their adoption in key sectors such as education or healthcare.

How to prevent hallucinations?

Although there is no magic solution, research and best practices are helping to mitigate this problem. Some key strategies are:

  • High-quality data: Training models with diverse, balanced, and reviewed data reduces the probability of errors.
  • Define clear boundaries: Restricting the scope and possible outputs of an AI improves its accuracy. Retrieval-Augmented Generation (RAG) systems are very valuable here, because they force the model to answer from verified documents rather than from memory (see the first sketch after this list).
  • Continuous testing: Constantly evaluating and refining the model ensures it evolves with fewer errors.
  • Human supervision: Having people review the answers ensures that when the AI makes a mistake, someone can correct course.
  • Measurement of semantic entropy: A novel approach that detects hallucinations by measuring how consistent the meanings of the generated responses are (see the second sketch after this list).
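
To make two of these strategies concrete, here are minimal Python sketches. First, the RAG idea: instead of answering from memory, the model is handed retrieved passages and instructed to answer only from them. Everything here is illustrative: the keyword-overlap retriever is a toy, and `call_llm` is a hypothetical stub standing in for whatever model API you actually use.

```python
def retrieve(query, corpus, k=2):
    """Naive keyword-overlap retriever; real systems use vector search."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def call_llm(prompt):
    # Hypothetical stub standing in for any chat-completion API call.
    return f"<model answer grounded in: {prompt[:60]}...>"

def answer_with_rag(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    prompt = (
        "Answer ONLY from the context below. If the context is "
        "insufficient, reply 'I don't know'.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

docs = [
    "The 2023 Nobel Peace Prize was awarded to Narges Mohammadi.",
    "The James Webb Space Telescope launched on 25 December 2021.",
]
print(answer_with_rag("Who won the 2023 Nobel Peace Prize?", docs))
```

The instruction to say “I don’t know” when the context is insufficient is the key design choice: it trades some coverage for a much lower hallucination rate.

Second, a crude sketch of semantic-entropy detection: sample several answers to the same prompt, group the ones that share a meaning, and compute the entropy over those meaning-clusters. In the published method the equivalence test is a bidirectional entailment check with an NLI model; the string-matching stand-in below is only there to make the sketch runnable.

```python
import math

def semantic_entropy(answers, equivalent):
    """Estimate semantic entropy over sampled answers to one prompt.

    answers:    list of model responses to the same question
    equivalent: callable(a, b) -> bool, True when a and b mean the
                same thing (bidirectional NLI entailment in the
                published method; anything pluggable works here).
    """
    clusters = []  # each cluster holds mutually equivalent answers
    for ans in answers:
        for cluster in clusters:
            if equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    # Low entropy: the model keeps expressing the same meaning (likely
    # grounded). High entropy: meanings scatter (likely hallucination).
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy usage with a naive equivalence test (illustrative only).
samples = ["Paris", "paris", "Paris.", "Lyon", "Paris"]
naive_eq = lambda a, b: a.strip(" .").lower() == b.strip(" .").lower()
print(round(semantic_entropy(samples, naive_eq), 3))  # ~0.5, fairly low
```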

Are hallucinations always negative?

Surprisingly, hallucinations also have positive applications:

  • Art and design: Many artists are using the “hallucinations” of generative systems to create surreal and novel images.
  • Data visualization: AI can offer alternative perspectives that reveal unexpected patterns in large amounts of data.
  • Video games and virtual reality: These tools can design unique virtual worlds, adding a touch of surprise and creativity.

The hallucinations of artificial intelligence remind us that, despite their sophistication, these technologies are not infallible. In both text and image generation, these inconsistencies force us to be critical and responsible in how we use them.

It is clear that these systems are improving at an incredible pace, and in the near future they will surely become much more sophisticated, avoiding this type of problem or at least mitigating it. In the meantime, we must be aware of these possible errors and always question the results these systems offer us.

And in your case: Have you ever experienced an AI hallucination when using a chatbot or a generative tool? What do you think about the use of these technologies in critical sectors such as healthcare or education? Do you think visual artifacts in AI-generated images are a hindrance or an opportunity for creativity?

Leave me your comments! I’m interested in your opinion 🙂

Have a good week!
