The governance timing problem
Across the world, artificial intelligence governance is often discussed when systems are already operational. Ethics reviews, impact assessments, audit requirements, and monitoring mechanisms are introduced after deployment pathways have been established.
This sequence produces a predictable outcome. Governance struggles under operational pressure.
Institutions adopt principles, convene committees, and build review frameworks. However, once systems become embedded in service delivery, in areas such as tax administration, welfare eligibility, licensing, policing, or judicial processes, operational continuity overrides advisory controls.
The issue is therefore not the absence of governance. It is governance that arrives too late to influence system behaviour.
AI governance faces a timing problem rather than a purely technical problem.
From principles to intervention capacity
Global discussions often assume governance fails because organisations lack ethical standards or regulatory clarity.
In practice, governance fails when institutions lack the capacity to intervene in a live system.
The real test of governance is not whether risks were documented but whether an authorised actor can pause a system, require an explanation, compel vendor cooperation, override automated outputs, trace decision logic, and investigate harm.
Without intervention capacity, governance frameworks remain descriptive rather than operational.
Governance becomes meaningful only when authority survives operational pressure.
The hidden entry point: procurement
Most public sector AI systems do not begin as AI projects.
They enter institutions through routine administrative processes such as digital transformation programmes, enterprise software procurement, outsourced platforms, and decision support tools.
AI therefore enters the state through procurement.
By the time a system is deployed, decisive governance conditions are already fixed. These include disclosure obligations, audit access, contestability of outcomes, traceability of decisions, and contractual suspension rights.
At that stage, regulators are not governing technology. They are inheriting contractual reality.
AI accountability is largely determined before the system is built.
Procurement as governance infrastructure
Digital procurement systems are commonly framed as transparency tools. Their deeper function is institutional.
They define the boundary between administrative authority and technological autonomy.
Where procurement rules require explainability, audit access, incident reporting, and intervention rights, governance becomes enforceable.
Where they do not, oversight bodies depend on voluntary cooperation after harm occurs.
Procurement is therefore not a preliminary administrative step. It is the architecture within which later regulation operates.
Governance does not begin at monitoring. It begins at acquisition.
Why enforcement discussions miss the point
Policy debates frequently focus on monitoring AI systems after deployment through audits, certification, penalties, and compliance reviews.
These mechanisms are necessary but structurally limited.
Once a system is embedded in revenue collection, public safety, or welfare delivery, stopping it becomes politically and operationally costly. Oversight shifts from prevention to damage control.
The enforcement challenge is therefore institutional rather than purely regulatory.
If intervention authority is not designed into adoption pathways, regulators end up supervising systems they cannot realistically halt.
The institutional view of AI governance
Effective public sector AI governance requires a shift in perspective.
Governance is not a document. It is an institutional capability.
This capability depends on three preconditions: authority, the legal power to intervene; visibility, the ability to observe system behaviour; and leverage, contractual power over vendors.
All three are established before deployment decisions are finalised.
AI governance therefore starts earlier than commonly assumed, not at regulation but at acquisition.
Conclusion
The effectiveness of AI governance will not be determined by the sophistication of ethical principles nor by the severity of penalties.
It will be determined by whether institutions can exercise authority over systems during moments of operational pressure.
That capacity is created long before deployment.
Public procurement, often treated as administrative procedure, is in reality the first layer of technological accountability.
AI governance does not begin when a system goes live. It begins when an institution decides the terms under which technology may enter.
If you cite or share this article, please attribute it to Danai Hazel Kudya.