Policy · 14 January 2026

Our approach to AI safety in government correspondence

How we ensure AI-generated drafts are grounded in verified sources and never hallucinate policy positions.

By Public Pulse

When a constituent writes to their representative about a policy issue, the response carries the weight of that office. An inaccurate statement, a misrepresented policy position, or a fabricated statistic can damage trust and create real political consequences. This is why AI safety in government correspondence is not just a technical challenge but a democratic one.

Grounded generation

Every AI draft in Pulse is grounded in your office's knowledge base: approved talking points, previous correspondence, Hansard transcripts, and policy documents. The model cannot invent facts or policy positions. When Pulse generates a draft, it cites its sources with inline references that staff can verify before approval.
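To illustrate the idea of grounding with verifiable inline references, here is a minimal sketch. The names (`Source`, `ground_draft`, the keyword retrieval) are illustrative assumptions, not Pulse's actual API: the point is that the draft is assembled only from retrieved knowledge-base passages, each tagged with a reference staff can check, and that the system declines rather than inventing a position when nothing relevant is found.

```python
from dataclasses import dataclass

@dataclass
class Source:
    ref: str   # e.g. "Hansard 2026-01-08, col. 210" -- what staff verify
    text: str  # approved passage from the office knowledge base

def _terms(text: str) -> set[str]:
    """Crude tokenisation: lowercase words, punctuation stripped, stopwords dropped."""
    return {w.strip(".,?!").lower() for w in text.split() if len(w) > 3}

def retrieve(kb: list[Source], query: str) -> list[Source]:
    """Naive keyword retrieval over the knowledge base (a real system
    would use semantic search; this is a sketch)."""
    q = _terms(query)
    return [s for s in kb if q & _terms(s.text)]

def ground_draft(kb: list[Source], query: str) -> tuple[str, list[str]]:
    """Return a draft built only from retrieved sources, plus the
    inline references a staff member must verify before approval."""
    hits = retrieve(kb, query)
    if not hits:
        # No grounding available: refuse rather than invent a position.
        return ("We cannot draft a response on this topic "
                "from approved sources.", [])
    body = " ".join(f"{s.text} [{s.ref}]" for s in hits)
    return body, [s.ref for s in hits]

kb = [
    Source("Talking points v7, item 3",
           "The office supports the rural broadband bill."),
    Source("Hansard 2026-01-08, col. 210",
           "The member voted against the amendment."),
]
draft, refs = ground_draft(kb, "What is your position on the broadband bill?")
```

Here the drafted text carries its citation inline (`[Talking points v7, item 3]`), so the claim and its provenance travel together into the review step.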

Human in the loop

AI drafts are never sent automatically. Every response passes through the office's approval workflow, where designated approvers review the content, verify citations, and make any necessary changes before the response is finalised. The AI accelerates drafting; judgment and accountability remain with the office.
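The approval gate described above can be sketched as a small state machine. The states and method names here are assumptions for illustration, not Pulse's real workflow API; what it demonstrates is the invariant that a response cannot reach the "sent" state without an explicit, recorded human approval.

```python
from enum import Enum

class State(Enum):
    DRAFTED = "drafted"    # AI output, not yet reviewed
    APPROVED = "approved"  # signed off by a designated approver
    SENT = "sent"

class Response:
    def __init__(self, text: str, citations: list[str]):
        self.text = text
        self.citations = citations
        self.state = State.DRAFTED
        self.approver: str | None = None

    def approve(self, approver: str) -> None:
        """A named approver signs off; uncited drafts cannot be approved."""
        if not self.citations:
            raise ValueError("draft has no verifiable citations")
        self.state = State.APPROVED
        self.approver = approver

    def send(self) -> None:
        """Sending is only possible after explicit human approval."""
        if self.state is not State.APPROVED:
            raise RuntimeError("cannot send an unapproved draft")
        self.state = State.SENT
```

Modelling approval as a hard precondition on `send`, rather than a checkbox, means there is no code path from AI output to a constituent's inbox that bypasses a human.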