
Deloitte Australia agreed to partially refund the Australian government after AI-generated errors and fabricated citations were discovered in a departmental report prepared using Azure OpenAI. Image Source: ChatGPT-5
Deloitte to Refund Australia After AI Errors Found in Government Report
Key Takeaways — Deloitte’s AI-Linked Report Controversy
Deloitte Australia will refund part of the AU$440,000 (about US$290,000) payment for a report containing apparent AI-generated errors.
The report included a fabricated court quote and nonexistent academic sources.
Generative AI tool Azure OpenAI was used in preparing the original report.
Researcher Chris Rudge identified up to 20 fabricated or incorrect references.
Critics, including Senator Barbara Pocock, have called for a full refund and greater accountability for AI use in public sector reports.
Deloitte to Refund Australia After AI Errors Discovered in Report
Deloitte Australia will partially refund the Australian government after a 237-page departmental report was found to contain numerous AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic papers.
The report, commissioned by the Department of Employment and Workplace Relations, cost AU$440,000 (about US$290,000) and was first published on the department's website in July. A revised version appeared last Friday after Chris Rudge, a University of Sydney researcher specializing in health and welfare law, alerted media outlets to the fabricated material.
Following an internal review, the department confirmed that “some footnotes and references were incorrect,” adding that Deloitte had agreed to repay the final instalment under its contract. The precise refund amount will be disclosed once the payment is completed.
AI Disclosure and Revised Report
The revised report now includes a disclosure acknowledging that a generative AI system, Azure OpenAI, was used in its preparation. It also removes several fabricated quotes and fictitious academic sources, including a wrongly attributed statement from a federal court judge.
Despite these corrections, the department maintained that the substance and recommendations of the report remained intact.
When asked for comment, Deloitte stated the “matter has been resolved directly with the client” but declined to confirm whether AI was responsible for the errors.
Researcher Identifies Up to 20 Fabrications
According to Rudge, the initial version of the report contained up to 20 errors, including false citations to works by Australian academics. One glaring example incorrectly credited Professor Lisa Burton Crawford, a scholar of constitutional law at the University of Sydney, with authoring a nonexistent book.
“I instantaneously knew it was either hallucinated by AI or the world’s best kept secret because I’d never heard of the book and it sounded preposterous,” Rudge said.
He added that the authors appeared to have used “tokens of legitimacy” — citing academics without reading their work — and warned that misquoting a judge represented a serious legal inaccuracy.
“They’ve totally misquoted a court case then made up a quotation from a judge and I thought, well hang on: that’s actually a bit bigger than academics’ egos. That’s about misstating the law to the Australian government in a report that they rely on. So I thought it was important to stand up for diligence,” Rudge said.
Political Response and Calls for Accountability
Senator Barbara Pocock, the Australian Greens’ spokesperson for the public sector, criticized Deloitte’s handling of the report and called for a full refund of the government’s payment.
“Deloitte misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent. I mean, the kinds of things that a first-year university student would be in deep trouble for,” Pocock told the Australian Broadcasting Corporation (ABC).
Q&A: Key Questions About Deloitte’s AI Report Issue
Q1: Why is Deloitte refunding part of the payment?
A: The Department of Employment and Workplace Relations confirmed the report contained incorrect references and fabricated material, leading Deloitte to refund the final instalment of its AU$440,000 contract.
Q2: What kind of errors were found?
A: The report included invented academic papers, a fabricated quote from a federal court judgment, and misattributed references — all typical examples of AI hallucinations.
Q3: Was AI confirmed to have generated the errors?
A: Deloitte did not confirm the source of the inaccuracies, but the revised version of the report disclosed that a generative AI tool, Azure OpenAI, was used in its preparation.
Q4: Did the errors affect the report’s conclusions?
A: The department stated that the substance and recommendations of the report remained unchanged despite the identified inaccuracies.
Q5: What are the political implications of this case?
A: The controversy has fueled calls for greater transparency and regulation in the use of AI by government contractors, with critics arguing that AI-generated material should be subject to stricter review and disclosure standards.
What This Means: Accountability and Oversight in AI-Generated Government Work
The incident involving Deloitte Australia marks a turning point in how generative AI is being integrated into official documentation and consulting work. While AI tools like Azure OpenAI can streamline research and drafting, this case demonstrates that unchecked automation can lead to serious factual and legal errors — particularly in reports that inform government policy.
The partial refund signals both a financial and reputational setback for Deloitte, which is already under scrutiny for its consulting practices in the public sector. It also raises broader questions about accountability when AI tools contribute to work that carries legal or policy weight.
For the Australian government, this event may prompt a reassessment of AI usage standards across departments and contractors. Future procurement policies could include mandatory AI disclosures, human verification protocols, and independent audits to ensure accuracy and compliance.
The controversy also underscores the importance of human oversight in AI-assisted work. The apparent absence of a “human in the loop” — a core principle of responsible AI governance — suggests that even major firms like Deloitte may not yet have fully embedded AI quality controls into their workflows. For a report intended to inform government decision-making, the lack of final human verification represents a striking lapse in diligence and accountability.
Globally, the case reinforces the need for ethical frameworks around AI-assisted authorship, echoing concerns voiced by academics, policymakers, and corporate leaders about the integrity of automated content. As organizations continue to adopt generative systems, ensuring transparency, traceability, and human oversight will be essential to maintaining public trust.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.