Business Standard

Delhi HC dismisses a petition for 'fabricated, AI-generated content'

In a dispute over delayed possession of flats in Gurugram, GWA had filed a petition and attempted to support it by quoting old judgments



Rishika Agarwal New Delhi


Like humans, artificial intelligence (AI) has its limitations. The problem of AI hallucination became starkly clear when the Delhi High Court (HC) dismissed a petition over "fabricated" and "AI-generated" content.
 
In a dispute over delayed possession of flats in Gurugram, GWA had filed a petition and attempted to support it by quoting paragraph 74 of an old judgment, The Indian Express reported. However, the judgment contained only 27 paragraphs.

Wrong citations

The homebuyers highlighted these mistakes in an eight-page note, detailing all the AI-generated and fabricated references cited in the petition.
 
Citation 1: Chitra Narain v DDA, 2008 (87) DLT 276
  • No such judgment or citation exists, and all the quotes attributed to it are false and fabricated, the homebuyers said.

Citation 2: Paragraphs 73 and 74 of Raj Narain v Indira Nehru Gandhi (1972) 3 SCC 850
  • The judgment contains only 27 paragraphs, so paragraphs 73 and 74 do not exist. The portions quoted in the petition are entirely fabricated.

Petition dismissed for AI content

Following this, the HC dismissed the petition, noting that all the citations relied on by GWA were false, fabricated and AI-generated, possibly making it the first instance in which a petition has been dismissed over entirely fabricated citations.

What is AI hallucination?

AI hallucination refers to a situation where an AI system produces information that is false, fabricated, or not based on real data, but presents it as if it were true. 
 
These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

Examples of AI hallucinations

AI hallucinations can pose serious risks when AI systems are used to make critical decisions, such as medical diagnoses or financial trading. According to Google, AI systems may sometimes predict events that are unlikely or fail to flag issues they are trained to detect. For example:
  • A model designed to predict the weather may forecast rain tomorrow even when no rain is expected
  • A model used to detect fraud may incorrectly flag a legitimate transaction as fraudulent
  • A model intended to detect cancer may fail to identify a cancerous tumour.

How to address AI limitations

AI has its limits, but there are ways to make it work better. Hallucinations can be reduced by limiting the set of answers the model is allowed to give, providing clear and specific instructions, starting a fresh session when a conversation drifts, or constraining the output with a simple template.
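As a rough sketch of what "limiting the possible answers" and "using a simple template" can look like in practice, the Python snippet below builds a constrained prompt and rejects any reply outside a fixed set of answers. The generate function, the template wording and the allowed answers are illustrative assumptions, not something described in the court case or by Google.

# Minimal sketch: limit hallucination risk by constraining the answer space
# with a fixed template and an explicit fallback option.
# `generate` is a placeholder for whatever text-generation call is in use.

ALLOWED_ANSWERS = ["rain", "no rain", "unknown"]   # narrowed set of possible answers

PROMPT_TEMPLATE = (
    "You are a weather assistant.\n"
    "Question: {question}\n"
    "Answer with exactly one of: {options}.\n"
    "If the data does not support a prediction, answer 'unknown'."
)

def ask_weather(question: str, generate) -> str:
    prompt = PROMPT_TEMPLATE.format(
        question=question, options=", ".join(ALLOWED_ANSWERS)
    )
    answer = generate(prompt).strip().lower()
    # Anything outside the allowed set is treated as "unknown" rather than trusted.
    return answer if answer in ALLOWED_ANSWERS else "unknown"

Constraining the output this way does not stop the underlying model from being wrong, but it prevents a free-form, confidently worded hallucination from slipping through unchecked.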
 
When using AI, it is important to tell the system exactly what you want and what you do not want. One way to do this is by giving feedback: if the AI generates text with parts that are not needed, pointing them out helps the model understand what the user is looking for and improve its responses.
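The feedback idea above can be sketched as a simple loop in which each correction is carried into the next request, so the model is told explicitly what is and is not wanted. The message format and the generate_reply function here are illustrative assumptions only.

# Minimal sketch of iterative feedback: keep a running message history and
# append explicit "do not include" corrections before asking again.
# `generate_reply` is a placeholder for the underlying model call.

def refine(prompt: str, corrections: list[str], generate_reply) -> str:
    messages = [{"role": "user", "content": prompt}]
    for note in corrections:
        # Each correction states exactly what was not wanted in the last draft.
        messages.append(
            {"role": "user", "content": f"Revise the answer. Do not include: {note}"}
        )
    return generate_reply(messages)

# Hypothetical usage:
#   draft = refine("Summarise the judgment.", [], generate_reply)
#   better = refine("Summarise the judgment.", ["citations you cannot verify"], generate_reply)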
 
 
 
 


First Published: Sep 26 2025 | 1:00 PM IST
