A growing concern in the realm of environmental governance is not the usual suspects of pollution or deforestation, but rather the rise of “phantom science.” This term refers to seemingly legitimate scientific references that, upon closer inspection, turn out to be completely fabricated.
While perusing a government report on environmental actions, I stumbled upon several citations that piqued my interest. Attempts to verify these sources, however, revealed a troubling truth: they didn't exist. The provided links directed me to unrelated scientific papers, searches through Google Scholar turned up nothing, and Google's AI tools only produced more non-existent citations. The entire report, which was foundational to a significant project claiming no environmental harm, appeared to be riddled with these phantom references. It raised the suspicion that the report had been crafted using AI.
AI systems, which are increasingly employed across academia, law, and government for their efficiency and cost-effectiveness, have a notable flaw: they can generate entirely fictional but plausible scientific references. These “phantom citations” are increasingly infiltrating official documents and influencing decision-making processes.
The Prevalence of AI Hallucinations
Concerns about AI hallucinations—confidently presented false information—are no longer theoretical. They have been empirically documented across various fields. The issue becomes more pronounced when AI is tasked with supporting pre-determined conclusions, leading to the invention of sources to bolster specific points (source).
From Classroom to Policy Crisis
The issue extends beyond academic papers to government reports and key policy documents. These are not minor errors but represent significant failures in evidence-based policymaking. Once a fabricated citation is incorporated into an official report, it gains undue legitimacy and can perpetuate misinformation through subsequent citations.
“So tell me what you want, what you really, really want.”
The Spice Girls
This issue is not solely about technology but also about the incentives driving its use. Government agencies face pressure to justify policy positions swiftly and economically, making generative AI a tempting tool. AI is adept at creating convincing narratives but does not necessarily align with truth. In today’s political environment, a plausible fabrication can sometimes be more advantageous than factual accuracy.
“You want the truth? You can’t handle the truth!”
A Few Good Men
The legal system is also grappling with this issue, with numerous cases of lawyers submitting AI-generated filings containing fabricated case law (source). The consequences have been severe, with courts issuing sanctions and fines for such professional misconduct. Applying the same standard to environmental governance raises questions about the integrity of documents that rely on nonexistent studies or fabricated data.
“You shall not pass!”
The Lord of the Rings
Environmental NGOs, watchdogs, and journalists are urged to scrutinize AI-generated government reports and policy documents. They must challenge decisions rooted in these reports through legal avenues.
“The 600 series had rubber skin. We spotted them easy. But these are new… they look human.”
The Terminator
To differentiate between legitimate and AI-generated documents, NGOs can employ several strategies:
- Verify References: Randomly sample citations from major reports to confirm authors, DOIs, and the existence of cited journals using tools like Google Scholar.
- Identify AI Signatures: Look for repeated citation structures or identical phrasing across entries, which can be signs of AI-generated text; AI detection software may help, though its results should be treated as a lead, not proof.
- Leverage the Freedom of Information Act (FOIA): Request details on document drafting processes and agency AI policies. Undisclosed AI usage should raise concerns.
- Pursue Legal Action: Prepare challenges under the Administrative Procedure Act or for violations of scientific integrity and disclosure policies; such cases could emerge as early as 2027.
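The first strategy, spot-checking citations, can be partly automated. As a minimal sketch (not a complete verification pipeline), the script below uses the public Crossref REST API at `api.crossref.org`, which returns a 404 for DOIs it has never registered; the DOI regex and the choice of Crossref as the lookup service are this sketch's assumptions, and a DOI absent from Crossref is a flag for manual review, not proof of fabrication:

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Rough DOI syntax: "10.", a 4-9 digit registrant code, "/", then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)


def looks_like_doi(doi: str) -> bool:
    """Cheap syntactic check before making any network request."""
    return bool(DOI_PATTERN.match(doi.strip()))


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the Crossref REST API whether the DOI is registered.

    Returns False for malformed input or an HTTP error (e.g. 404 for
    a DOI Crossref has never seen). Network failures propagate.
    """
    if not looks_like_doi(doi):
        return False
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi.strip())
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


if __name__ == "__main__":
    # Sample a handful of DOIs from the report and check each one.
    for candidate in ["10.1371/journal.pone.0017288", "not-a-doi"]:
        print(candidate, "->", doi_resolves(candidate))
```

Even a random sample of ten citations run through a check like this can quickly reveal whether a report's reference list deserves a full manual audit.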
The credibility of environmental decision-making is at stake, with AI hallucinations now recognized as a structural issue rather than a mere technical glitch. As these inaccuracies infiltrate scientific records and policy processes, they evolve from technical concerns into governance failures. It’s a stark reminder that some environmental decisions are based on non-existent data.
“Nobody trusts anybody now… and we’re all very tired.”
The Thing
While AI can play a supporting role in science and policy, it must not be treated as an ultimate source of truth. Until robust verification protocols are established, the responsibility to verify falls on NGOs, journalists, and diligent scientists.
Original Story at www.southernfriedscience.com