
AI detection startup GPTZero scanned all 4,841 papers accepted by the prestigious Conference on Neural Information Processing Systems (NeurIPS), which took place last month in San Diego. The company tells TechCrunch it found 100 confirmed hallucinated citations across 51 papers.
Having a paper accepted by NeurIPS is a résumé-worthy achievement in the world of AI. Given that these are the leading minds of AI research, one might assume they would use LLMs for the catastrophically boring task of writing citations.
Still, caveats abound with this finding: 100 confirmed hallucinated citations across 51 papers is a vanishingly small number. Each paper has dozens of citations, so out of tens of thousands of citations in total, this is, statistically, close to zero.
It’s also important to note that an inaccurate citation doesn’t negate the paper’s research. As NeurIPS told Fortune, which was first to report on GPTZero’s research, “Even if 1.1% of the papers have one or more incorrect references due to the use of LLMs, the content of the papers themselves [is] not necessarily invalidated.”
But having said all that, a faked citation isn’t nothing, either. NeurIPS says it prides itself on “rigorous scholarly publishing in machine learning and artificial intelligence.” And each paper is peer-reviewed by multiple people who are instructed to flag hallucinations.
Citations are also a sort of currency for researchers. They are used as a career metric to show how influential a researcher’s work is among their peers. When AI makes them up, it waters down their value.
No one can fault the peer reviewers for not catching a few AI-fabricated citations given the sheer volume involved. GPTZero is also quick to point this out. The goal of the exercise was to offer specific data on how AI slop sneaks in via “a submission tsunami” that has “strained these conferences’ review pipelines to the breaking point,” the startup says in its report. GPTZero even points to a May 2025 paper called “The AI Conference Peer Review Crisis” that discussed the problem at premier conferences, including NeurIPS.
Still, why couldn’t the researchers themselves fact-check the LLM’s work? Surely they know the actual list of papers they used in their research.
What the whole thing really points to is one big, ironic takeaway: If the world’s leading AI experts, with their reputations at stake, can’t ensure their LLM usage is accurate in the details, what does that mean for the rest of us?




