According to Fortune, Deloitte just got caught using fabricated and potentially AI-generated research in a $1.6 million report for the Canadian government. The 526-page health report, published in May for Newfoundland and Labrador’s Department of Health and Community Services, contained citations to made-up academic papers and attributed work to real researchers who never collaborated. The consulting firm admitted to making “a small number of citation corrections” but denied AI wrote the report, claiming it was only “selectively used to support a small number of research citations.” This marks the second time in months that Deloitte has faced such issues, following a $290,000 Australian government report that also contained AI hallucinations and fabricated references. As of Monday, the problematic report remains on the Canadian government’s website despite the revelations.
Deloitte’s AI Problem
Here’s the thing: when you’re paying $1.6 million for expert analysis, you expect actual expertise, not fabricated citations. The Independent’s investigation found Deloitte citing papers from journals that don’t exist and pairing researchers who never worked together. One professor listed as a co-author on a non-existent paper said she’d only worked with three of the six other named authors. That’s not just sloppy research; that’s fundamentally broken quality control.
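And here’s how low the bar is: checking whether a cited paper even exists is something a few lines of code can do against Crossref’s public metadata API. The sketch below is purely illustrative, not how Deloitte or any auditor actually works; the title-overlap threshold and the example title are assumptions for demonstration only.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"  # Crossref's public REST endpoint

def citation_exists(title: str, threshold: float = 0.8) -> bool:
    """Return True if Crossref indexes a paper whose title closely matches `title`.

    The token-overlap threshold is an illustrative assumption; a real
    checker would also verify authors, journal, year, and DOI before
    accepting a citation as genuine.
    """
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    # Crossref returns titles as a list of strings; compare crude token overlap.
    best = " ".join(items[0].get("title", [""])).lower()
    wanted = set(title.lower().split())
    overlap = len(wanted & set(best.split())) / max(len(wanted), 1)
    return overlap >= threshold

# Hypothetical title, not one of the citations at issue in the report.
print(citation_exists("Rural Health Workforce Retention in Atlantic Canada"))
```

If a $1.6 million report can’t survive that kind of spot check, the problem isn’t a lack of tooling.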
And Deloitte’s response? They’re making “a small number of citation corrections” that supposedly don’t impact the findings. But if your evidence is fabricated, how can anyone trust your conclusions? It’s like building a skyscraper on imaginary foundations and claiming the height measurement is still accurate.
Pattern Emerging
This isn’t a one-off mistake. Just last month, Deloitte had to partially refund the Australian government for a $290,000 welfare report that contained similar AI hallucinations. The firm quietly admitted to using Azure OpenAI in that case after being caught. Now we’re seeing the same pattern in Canada: fabricated papers, fake quotes, non-existent research.
So what’s really going on here? Are consulting firms cutting corners with AI to save on research costs while charging governments millions? The timing suggests this might be becoming standard practice rather than an isolated slip-up. When you’re dealing with industrial-scale consulting work, the temptation to automate research must be enormous, but the consequences for public policy could be disastrous.
Trust Crisis
Look, governments hire firms like Deloitte specifically because they lack internal expertise. They’re paying for trusted analysis to inform critical policy decisions about healthcare staffing during shortages. When that trust gets broken with fabricated research, the entire consulting model starts to look questionable.
And let’s talk about the money. $1.6 million for a report with fake citations? The Canadian government paid in eight installments for work that apparently couldn’t pass basic academic verification. Meanwhile, Deloitte’s Australian operation had to provide a partial refund—will Canada demand the same?
What’s Next
Basically, we’re watching the consulting industry’s AI reckoning unfold in real time. The question isn’t whether firms will use AI—they clearly are—but how they’ll maintain quality control. When you’re dealing with critical infrastructure and public policy, getting the facts right matters more than ever.
For any organization that depends on accurate analysis, this should serve as a warning. Whether you’re sourcing hardware or commissioning million-dollar reports, verification matters: trust is built on delivering what you promise, not what an AI hallucinates.
Will governments start demanding AI disclosure clauses in consulting contracts? Should there be independent verification of research citations? One thing’s clear: the era of taking consulting reports at face value might be ending faster than anyone expected.
