Building Trustworthy AI in Government: Enablers, Guardrails, and Engagement

Governments are starting to use AI in areas such as public services, tax administration, and disaster response. When it works well, AI can help people get answers faster, spot problems earlier, and support better decisions. As a result, AI can improve productivity, responsiveness, and accountability in government.

However, many public AI projects never move beyond small pilots. This happens because governments often lack the skills, good data, modern digital systems, and clear ways to measure impact. These gaps can also increase risk aversion, so teams avoid innovation even when the potential benefits are high.

The OECD proposes a simple way to understand “trustworthy AI in government”: a framework with three connected pillars. In the figure, this goal sits at the centre. Around it, the three pillars describe what governments must build and do in order to reach the public value goals shown on the outer ring (productivity, responsiveness and accountability).

Enablers are the foundations. They include strong governance, quality data, and digital infrastructure, as well as skills and talent in the civil service. They also require purposeful investment, smart public procurement, and partnerships with non-government actors, so that AI systems can be built and used reliably.

Guardrails are the safety systems that guide AI use. They include ethics and risk management, transparency duties, and monitoring and oversight bodies that can check results over time. They can take the form of non-binding guidance or binding laws and policies, backed by enforcement measures. Tools such as impact assessments and audits help keep these guardrails practical. Still, guardrails should be proportionate: not every rule fits every use case, and overly heavy rules can stall progress.

Engagement means involving the people who are affected. This includes working across levels of government, across policy areas, and with the broader ecosystem (civil society, businesses and researchers). It also includes citizens and civil servants, and sometimes collaboration across borders. Engagement pushes governments to design user-centred systems, listen to concerns, and make necessary adjustments.

The main message is that trust is “unlocked” by the right mix. If enablers are weak, AI cannot scale. If guardrails are missing, harms grow. If engagement is shallow, solutions may look efficient but feel unfair, and trust can fall.

(Adapted from oecd.org on February 22, 2026)
The expression “risk aversion” can be correctly understood as: