Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Recent reports suggest that AI hallucinations—instances where AI generates false or misleading information—are becoming more frequent and pose growing challenges for the businesses and consumers who rely on these technologies. More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

A groundbreaking study built around the PHARE (Pervasive Hallucination Assessment in Robust Evaluation) benchmark has revealed that AI hallucinations are not only persistent but may be increasing in frequency across leading language models. The research, published on Hugging Face, evaluated multiple large language models (LLMs), including GPT-4, Claude, and Llama models, across various knowledge domains.

“We’re seeing a concerning trend where even as these models advance in capability, their propensity to hallucinate remains stubbornly present,” notes the PHARE analysis. The comprehensive benchmark tested models across 37 knowledge categories, revealing that hallucination rates varied significantly by domain, with some models demonstrating hallucination rates exceeding 30% in specialized fields.

Hallucinations occur when AI bots produce fabricated information and present it as fact. Photo credit: SOPA Images/LightRocket via Getty Images

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive.

“That will never go away,” he said. These AI bots do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79%.
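This probabilistic guessing is the root of the problem. As a toy, purely illustrative sketch (the vocabulary, probabilities and sampling setup below are invented, not any vendor's actual decoding code), consider a model that samples its next word from a probability distribution:

```python
import random

# A language model assigns probabilities to candidate next words and samples
# one of them. Even when the most likely continuation is correct, lower-
# probability (and possibly wrong) continuations still get sampled sometimes.
next_word_probs = {
    "Paris": 0.90,        # factually correct continuation
    "Lyon": 0.07,         # plausible but wrong
    "Springfield": 0.03,  # wrong
}

def sample_next_word(probs: dict[str, float]) -> str:
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# "The capital of France is ..." -> usually "Paris", occasionally not.
samples = [sample_next_word(next_word_probs) for _ in range(1000)]
error_rate = sum(word != "Paris" for word in samples) / len(samples)
print(f"Sampled error rate: {error_rate:.1%}")  # roughly 10% in this toy setup
```

No rule of grammar or logic forces the sampled word to be true; the distribution only encodes what was statistically likely in the training data.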

Amr Awadallah, the chief executive of Vectara, which builds A.I. tools for businesses, believes A.I. “hallucinations” will persist. Photo credit: Cayce Clifford for The New York Times

AI companies like OpenAI, Google, and DeepSeek have introduced reasoning models designed to improve logical thinking, but these models have shown higher hallucination rates than their predecessors.

For example, OpenAI’s latest models (o3 and o4-mini) have hallucination rates ranging from 33% to 79%, depending on the type of question asked, significantly higher than the company’s earlier models. Experts are still investigating why this is happening. Some believe that the complex reasoning processes in newer AI models may introduce more opportunities for errors.

Others suggest that the way these models are trained might be amplifying inaccuracies. For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems. Despite efforts to reduce them, AI researchers acknowledge that hallucinations may never fully disappear, which raises concerns for applications where accuracy is critical, such as legal, medical, and customer service AI systems.

The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information. Those hallucinations may not be a big problem for many people, but they are a serious issue for anyone using the technology with court documents, medical information or sensitive business data.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”

For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33% of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48%.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51% and 79%. The previous system, o1, hallucinated 44% of the time.

In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.

“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.

Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.

Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information. Vectara’s original research estimated that in this situation chatbots made up information at least 3% of the time and sometimes as much as 27%.

In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1% to 2% range. Others, such as the San Francisco start-up Anthropic, hovered around 4%. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3% of the time. OpenAI’s o3 climbed to 6.8%.
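Vectara’s exact methodology isn’t reproduced here, but the metric it reports is simple to state: have each model summarize a set of articles, judge each summary for claims unsupported by its source, and report the flagged share. A minimal sketch, assuming a hypothetical `is_consistent` judge (in practice a trained factual-consistency model or human annotators):

```python
from typing import Callable

def hallucination_rate(
    articles: list[str],
    summaries: list[str],
    is_consistent: Callable[[str, str], bool],
) -> float:
    """Share of summaries judged to contain claims unsupported by their article.

    `is_consistent(article, summary)` stands in for whatever judge is used in
    practice, for example a trained factual-consistency model or a human rater.
    """
    flagged = sum(
        not is_consistent(article, summary)
        for article, summary in zip(articles, summaries)
    )
    return flagged / len(summaries) if summaries else 0.0
```

The hard part, of course, is the judge itself, which is why such leaderboard numbers can differ depending on who is doing the measuring.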

Sarah Schwettmann, co-founder of Transluce, said that o3’s hallucination rate may make it less useful than it otherwise would be. Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in their coding workflows, and that they’ve found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.

AI companies are now leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.
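As a rough illustration of what learning by trial and error means, here is a deliberately tiny sketch: an epsilon-greedy bandit that tries actions, observes rewards and gradually favors whatever has scored well. It is far simpler than the reinforcement learning pipelines used to train production models, and every name and number in it is invented for illustration.

```python
import random

# Toy trial-and-error loop: try actions, get rewards, shift toward what works.
actions = ["A", "B", "C"]
true_reward = {"A": 0.2, "B": 0.8, "C": 0.5}   # unknown to the learner
value_estimate = {a: 0.0 for a in actions}     # the learner's running guesses
counts = {a: 0 for a in actions}

for step in range(5000):
    # Mostly exploit the best-looking action, but sometimes explore a random one.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value_estimate.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(max(actions, key=value_estimate.get))  # almost always converges to "B"
```

The learner gets very good at the rewarded task, which echoes the concern Perez-Beltrachini raises next: systems trained to chase one kind of reward can let other behaviors drift.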

“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.

Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
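A back-of-the-envelope calculation shows how quickly this compounds. Assuming, purely for illustration, that each reasoning step is correct independently with probability p, an n-step chain is error-free with probability p^n. At p = 0.95 per step, a 20-step chain contains no mistakes only about 36% of the time (0.95^20 ≈ 0.36), so even a small per-step error rate leaves long chains more likely than not to go wrong somewhere. Real reasoning steps are not independent, but the compounding effect is the point.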

The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.

“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.

New research highlighted by TechCrunch indicates that user behavior may exacerbate the problem. When users request shorter answers from AI chatbots, hallucination rates actually increase rather than decrease. “The pressure to be concise seems to force these models to cut corners on accuracy,” the TechCrunch article explains, challenging the common assumption that brevity leads to greater precision.

References:

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

The Confidence Paradox: Why AI Hallucinations Are Getting Worse, Not Better

https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/

https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/


FT: New benchmarks for Gen AI models; Neocloud groups leverage Nvidia chips to borrow >$11B

The Financial Times reports that technology companies are rushing to redesign how they test and evaluate their Gen AI models, as current AI benchmarks appear to be inadequate. AI benchmarks are used to assess how well an AI model can generate content that is coherent, relevant, and creative. This can include generating text, images, music, or any other form of content.

OpenAI, Microsoft, Meta and Anthropic have all recently announced plans to build AI agents that can execute tasks for humans autonomously on their behalf. To do this effectively, the AI systems must be able to perform increasingly complex actions, using reasoning and planning.

Current public AI benchmarks — HellaSwag and MMLU — use multiple-choice questions to assess common sense and knowledge across various topics. However, researchers argue this method is now becoming redundant and models need more complex problems.
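Part of the appeal of such benchmarks is that scoring them is trivial: compare the model’s chosen answer letter against the answer key and report accuracy. A minimal, illustrative scorer:

```python
def multiple_choice_accuracy(predicted: list[str], gold: list[str]) -> float:
    """Fraction of questions where the model picked the correct answer letter."""
    assert len(predicted) == len(gold)
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Example: three questions, two answered correctly.
print(multiple_choice_accuracy(["A", "C", "B"], ["A", "B", "B"]))  # ~0.67
```

Agentic tasks have no such single correct letter to compare against, which is what makes the newer evaluations discussed below much harder to build.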

“We are getting to the era where a lot of the human-written tests are no longer sufficient as a good barometer for how capable the models are,” said Mark Chen, senior vice-president of research at OpenAI. “That creates a new challenge for us as a research world.”

The SWE-bench Verified benchmark was updated in August to better evaluate autonomous systems based on feedback from companies, including OpenAI. It uses real-world software problems sourced from the developer platform GitHub and involves supplying the AI agent with a code repository and an engineering issue, then asking it to produce a fix. The tasks require reasoning to complete.
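The FT piece does not spell out the harness, but the basic evaluation loop of an agentic coding benchmark can be sketched as follows. This is a simplified, hypothetical setup: the `Task` fields, the `agent.propose_patch` interface and the test command are all assumptions for illustration, not SWE-bench Verified’s actual tooling.

```python
import subprocess
from dataclasses import dataclass

@dataclass
class Task:
    repo_path: str           # local checkout of the GitHub repository
    issue_text: str          # the engineering issue the agent is asked to fix
    test_command: list[str]  # e.g. ["pytest", "tests/"]

def evaluate(task: Task, agent) -> bool:
    """Ask the agent for a patch, apply it, and let the project's own test
    suite decide pass/fail. `agent.propose_patch` is a hypothetical interface."""
    patch = agent.propose_patch(task.repo_path, task.issue_text)
    subprocess.run(["git", "apply", "-"], input=patch, text=True,
                   cwd=task.repo_path, check=True)
    result = subprocess.run(task.test_command, cwd=task.repo_path)
    return result.returncode == 0
```

Even this stripped-down version hints at the sandboxing problem Kaplan describes next: the agent needs a real repository, a way to apply changes and a test environment, not just a prompt and a completion.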

“It is a lot more challenging [with agentic systems] because you need to connect those systems to lots of extra tools,” said Jared Kaplan, chief science officer at Anthropic.

“You have to basically create a whole sandbox environment for them to play in. It is not as simple as just providing a prompt, seeing what the completion is and then evaluating that.”

Another important factor when conducting more advanced tests is to make sure the benchmark questions are kept out of the public domain, in order to ensure the models do not effectively “cheat” by generating the answers from training data rather than solving the problem.

The need for new benchmarks has also led to efforts by external organizations. In September, the start-up Scale AI announced a project called “Humanity’s Last Exam”, which crowdsourced complex questions from experts across different disciplines that required abstract reasoning to complete.

Meanwhile, the Financial Times recently reported that Wall Street’s largest financial institutions had loaned more than $11bn to “neocloud” groups, backed by their possession of Nvidia’s AI GPU chips. These companies, which include CoreWeave, Crusoe and Lambda, provide cloud computing services to tech businesses building AI products. They have acquired tens of thousands of Nvidia’s graphics processing units (GPUs) through partnerships with the chipmaker. With capital expenditure on data centres surging in the rush to develop AI models, Nvidia’s AI GPU chips have become a precious commodity.

Nvidia’s chips have become a precious commodity in the ongoing race to develop AI models © Marlena Sloss/Bloomberg


The $3tn tech group’s allocation of chips to neocloud groups has given Wall Street lenders the confidence to lend billions of dollars to these companies, money that is then used to buy more Nvidia chips. Nvidia is itself an investor in neocloud companies that in turn are among its largest customers. Critics have questioned the ongoing value of the collateralised chips as new advanced versions come to market — or if the current high spending on AI begins to retract. “The lenders all coming in push the story that you can borrow against these chips and add to the frenzy that you need to get in now,” said Nate Koppikar, a short seller at hedge fund Orso Partners. “But chips are a depreciating, not appreciating, asset.”

References:

https://www.ft.com/content/866ad6e9-f8fe-451f-9b00-cb9f638c7c59

https://www.ft.com/content/fb996508-c4df-4fc8-b3c0-2a638bb96c19

https://www.ft.com/content/41bfacb8-4d1e-4f25-bc60-75bf557f1f21
