Preventing AI Hallucinations: Lessons from the $1.6M Health HR Plan
- Evangel Oputa
Introduction
A 526-page Health Human Resources Plan in Newfoundland and Labrador was meant to guide 10 years of staffing decisions for doctors, nurses, and other health workers.
Cost: about $1.6 million.
Problem: parts of the report relied on citations to research that does not exist, with AI involved in generating those references.
For governments, health systems, and large organizations starting to use AI, this is not just “someone else’s scandal.” It is a warning about what happens when AI is used in evidence-heavy work without proper governance.
In this post, we will break down:
What actually went wrong in the Health HR Plan
Why this is a governance failure, not a tech glitch
A practical “AI-Assisted, Human-Accountable” framework
How Begine Fusion helps organizations build the guardrails so this does not happen to you

The Problem: When AI-Generated “Evidence” Enters Real Policy
Newfoundland and Labrador hired Deloitte to create a comprehensive Health Human Resources Plan:
526 pages of analysis and recommendations
A 10-year roadmap for recruitment, retention, and staffing
Budget decisions and workforce plans expected to flow from it
Local media and academics later discovered that some of the report’s citations pointed to papers that simply do not exist:
Articles attributed to real researchers who never wrote them
References to journals and studies that could not be found in any database
These fake citations were used to support key claims, including the cost-effectiveness of certain incentives
Deloitte has acknowledged incorrect citations and said AI was “selectively used” to support a small number of references, while still standing by the report’s overall conclusions.
This is exactly how AI hallucinations become real-world risk:
AI proposes a “plausible” citation
No one verifies it properly
It ends up in a report that drives public policy and spending
Key Insight: The real failure is process, not technology
The problem is not that AI can hallucinate. The problem is that there was no enforced process to stop hallucinated outputs from reaching ministers, unions, and the public.
The Solution: AI-Assisted, Human-Accountable Governance
AI is not going away in consulting, policy, or internal reporting. Used well, it helps teams:
Scan large bodies of research faster
Draft options and scenarios
Reduce manual effort in early-stage analysis
But in high-stakes work (healthcare, justice, finance, public policy), you cannot let AI outputs flow straight into decision-making.
At Begine Fusion, our position is simple:
AI can assist the work. Humans must own the evidence and the decisions.
We call this approach AI-Assisted, Human-Accountable. It rests on a few non-negotiable rules:
Every AI-touched fact must be verified with a real source. If AI suggests a citation or summary, a human must find and read the actual paper, report, or dataset.
No AI-generated citation enters a report without a traceable reference. That means a URL, DOI, or document ID that a client can independently verify.
Every major report includes an AI Use Declaration: where AI was used, how it was checked, and what was kept strictly human.
Random spot-checks are mandatory. A second reviewer re-pulls sources in critical sections to validate they exist and support the claims.
Clients receive an Evidence Appendix. A structured list of key sources linked to specific recommendations, so internal and external stakeholders can test the work.
This is the kind of governance layer that would have blocked hallucinated citations from ever making it into a $1.6M health plan.
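To make these rules concrete, here is a minimal sketch of how a reviewer could pre-screen DOI-based citations before the manual read. It assumes the references carry DOIs and uses Crossref's public REST API; the DOIs in the example are placeholders, and the check only confirms that a DOI resolves to a real record.

```python
# Minimal sketch: pre-screen DOI-based citations against Crossref's public API.
# A 404 means the DOI does not resolve to any registered work and must be
# flagged for manual investigation. This check does NOT replace reading the source.
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def check_doi(doi: str) -> dict | None:
    """Return Crossref metadata for a DOI, or None if no such work is registered."""
    resp = requests.get(CROSSREF_WORKS + doi, timeout=10)
    if resp.status_code == 404:
        return None  # possible hallucinated citation
    resp.raise_for_status()
    return resp.json()["message"]

def screen_references(dois: list[str]) -> list[str]:
    """Return the DOIs that could not be verified and need human follow-up."""
    flagged = []
    for doi in dois:
        record = check_doi(doi)
        if record is None:
            flagged.append(doi)
        else:
            # Surface the title so the reviewer can locate and read the paper.
            title = (record.get("title") or ["<no title>"])[0]
            print(f"FOUND  {doi}: {title}")
    return flagged

if __name__ == "__main__":
    # Placeholder DOIs standing in for a draft's reference list.
    needs_review = screen_references(["10.1000/xyz123", "10.1000/182"])
    print("Needs manual verification:", needs_review)
```

Even when a DOI resolves, that only proves the paper exists. A human still has to read it and confirm it supports the claim it is attached to, which is why the first rule above is non-negotiable.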
Key Insight: AI Needs Guardrails
AI governance is not a buzzword. It is a concrete set of rules and workflows that decide what can and cannot reach the final document.
The Process: How Begine Fusion Builds AI Governance Into Your Reporting
Here is how we would approach this with a government department, health authority, or large organization that wants to use AI without repeating the Deloitte problem.
Step 1: Map Where AI Touches Your Work
We start by mapping your current and planned use of AI across:
Research and literature reviews
Internal reporting and strategy decks
Policy papers, white papers, and business cases
Data analysis and modelling
Goal: a clear view of where AI is already in the pipeline and where it is likely to appear next.
Step 2: Classify Risk Levels
Not all AI usage carries the same risk.
We work with you to classify:
Low risk: AI for formatting, drafting internal memos, brainstorming
Medium risk: AI summarization of documents that humans can easily re-check
High risk: AI generating or suggesting evidence, citations, legal reasoning, or numeric assumptions that feed budget or policy
High-risk usage is where we apply the strictest guardrails.
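As an illustration (the tiers and controls below are placeholders to adapt, not a fixed standard), the classification can be captured in a simple machine-readable form so checklists and templates can reference it consistently:

```python
# Illustrative only: example risk tiers and the controls attached to each.
# Use cases and control names are placeholders to be tailored per organization.
RISK_TIERS = {
    "low": {
        "examples": ["formatting", "internal memo drafts", "brainstorming"],
        "controls": ["author review before sending"],
    },
    "medium": {
        "examples": ["summaries of documents humans can easily re-check"],
        "controls": ["author review", "spot-check against the source documents"],
    },
    "high": {
        "examples": ["suggested citations", "legal reasoning",
                     "numeric assumptions feeding budget or policy"],
        "controls": ["independent source verification", "second-reviewer sign-off",
                     "AI Use Declaration entry", "Evidence Appendix entry"],
    },
}

def required_controls(tier: str) -> list[str]:
    """Look up the checks a deliverable must pass for a given risk tier."""
    return RISK_TIERS[tier]["controls"]
```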
Step 3: Design Your AI Governance Rules
Next, we define specific rules and policies, tailored to your context, such as:
Where AI is allowed vs. not allowed
What must be manually verified before anything goes external
Approval workflows for reports that include AI-assisted work
Documentation requirements (AI Use Declarations, Evidence Appendices)
Everything is written in clear, operational language so non-technical teams can actually follow it.
Step 4: Implement Workflows and Templates
We embed these rules into your daily work, not just a PDF policy:
Updated report templates with sections for Sources and AI Use
Checklists for analysts and consultants before sending a draft
Simple forms or fields in your tools (e.g., Zoho, internal portals) to capture:
Where AI was used
Which sources were checked
Who did the final verification
This is where our digital adoption and AI systems expertise comes in: we do not just design the rules, we help you operationalize them.
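As a sketch of the record such a form or field might capture for each AI-assisted section (the field names are illustrative, not a specific Zoho or portal schema):

```python
# Sketch of one capture record per AI-assisted section of a report.
# Field names are illustrative, not a specific product schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseRecord:
    section: str                    # which part of the report AI touched
    purpose: str                    # e.g. "summarization", "drafting", "candidate sources"
    sources_checked: list[str] = field(default_factory=list)  # URLs, DOIs, or document IDs verified
    verified_by: str = ""           # named person who did the final verification
    verified_on: date | None = None

# Example entry that would feed the report's AI Use Declaration and Evidence Appendix.
record = AIUseRecord(
    section="Retention incentives: cost-effectiveness analysis",
    purpose="candidate sources",
    sources_checked=["https://example.org/workforce-study.pdf"],
    verified_by="J. Analyst",
    verified_on=date(2025, 1, 15),
)
```

One record like this per section is usually enough to assemble the AI Use Declaration at the end of the engagement.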
Step 5: Train, Test, and Audit
Finally, we support your team with:
Short, focused training sessions on using AI responsibly
Test runs on real or historical reports to stress-test the process
Periodic audits of live work to ensure the rules are being followed
If issues show up (e.g., a missed verification step), we adjust the workflow and clarify responsibilities.
Key Insight: Governance Lives in the Workflow, Not in a Policy PDF
Your organization is safe when AI rules are built into everyday tools, templates, and approvals, not just written in a document no one reads.

The Outcome: What Strong AI Governance Actually Delivers
When organizations implement AI-Assisted, Human-Accountable governance, a few things change quickly:
Fewer reputation risks. You massively reduce the chances of fake citations, invented case law, or phantom data ending up in public reports.
More credible decisions. Executives and boards can see exactly what evidence underpins each recommendation and where AI played a role.
Clear accountability. There is always a named person responsible for the final output, not a vague “the system” or “the consultant.”
Faster, safer AI adoption. Teams become more confident using AI because they know the boundaries and the checks in place.
For governments and health systems, this translates into:
Stronger trust with unions, professional bodies, and the public
More resilience when reports are scrutinized by media or opposition parties
Better alignment between “innovation” and actual duty of care
Takeaways & Next Steps for Leaders
If you are responsible for strategy, policy, or large consulting engagements, here are the key moves:
Stop asking “Are you using AI?” and start asking “How do you govern AI?”
Treat evidence as critical infrastructure. Anything AI touches must be traceable and verifiable.
Require AI Use Declarations in major reports. Build it into your RFPs and contracts.
Give your teams and vendors a clear rulebook. If they do not have one, they are improvising.
Run an AI governance audit now, before a scandal forces you to.
Common Mistakes When Using AI in Research, Reports, and Policy
Mistake 1: Letting AI generate citations directly in final drafts
Why it fails: Models can invent plausible-sounding references that do not exist.
Better approach: Use AI to surface candidate sources, but require humans to locate and verify the real documents.
Mistake 2: Having an AI policy that is purely theoretical
Why it fails: A policy PDF no one reads does not change behaviour.
Better approach: Embed rules into templates, workflows, approval steps, and training.
Mistake 3: Not distinguishing between low-risk and high-risk AI usage
Why it fails: Treating everything the same leads to either over-restriction or chaos.
Better approach: Classify use cases by risk and apply stricter controls where evidence or law is involved.
Mistake 4: Assuming vendors “know what they’re doing” with AI
Why it fails: Even top firms can cut corners under time and cost pressure.
Better approach: Add specific AI governance questions and requirements into procurement and vendor reviews.
Mistake 5: Hiding AI usage from stakeholders
Why it fails: When something goes wrong, it looks like a cover-up and destroys trust.
Better approach: Be upfront. Declare where AI was used and how it was checked.
Mistake 6: No clear owner for AI-assisted outputs
Why it fails: If “the tool” is blamed, no one feels responsible for quality.
Better approach: Assign human sign-off for each deliverable and document it.
FAQ: AI, Hallucinations, and Governance
What is an AI hallucination in this context? It is when a model confidently generates text, citations, or “facts” that are not grounded in real data, like inventing a research paper or court case that never existed.
Is it safe to use AI for research at all? Yes, if you treat AI as a research assistant, not a source of truth. It can help you discover leads, summarize documents, and suggest angles, but humans must verify all critical evidence.
How do I know if my current reports are at risk? Look at where AI is already used. If it has touched citations, legal reasoning, or numeric assumptions, and there is no clear verification workflow, you should assume there is risk and run a targeted audit.
What is an AI Use Declaration? It is a short section in a report that explains where AI was used (e.g., drafting, summarizing, generating options), how outputs were checked, and who approved them.
Do I need a separate AI governance framework if I already have data governance? Yes. Data governance focuses on how data is collected, stored, and shared. AI governance covers how models are used, where they are allowed to influence decisions, and what checks are required.
What sectors need this the most? Any sector where reports influence real people’s lives or large budgets: healthcare, public sector, justice, finance, education, and regulated industries.
How does Begine Fusion support this in practice? We help design and implement AI governance frameworks, workflows, and templates; run audits on existing AI-assisted outputs; and train teams so they can use AI confidently without putting reputation and trust at risk.
Glossary (Quick Reference)
AI Hallucination: When an AI model generates plausible but false information, such as invented citations, quotes, or facts.
AI Governance: The policies, processes, and controls that define how AI can be used in an organization, including who is accountable and what checks are required.
Generative AI: AI models that create new content (text, images, code) based on patterns learned from training data.
AI Use Declaration: A section in a deliverable that discloses where AI was used, how its outputs were verified, and who approved the final content.
Evidence Appendix: A structured section listing all key sources, with links or identifiers, that support the recommendations and analysis in a report.
High-Risk AI Usage: Use of AI in areas where errors have serious consequences—such as legal reasoning, financial forecasts, health policy, and regulatory submissions.
Book a strategy session to review your current AI usage and identify your biggest governance gaps.



