Public Authority in the Age of AI
Governing Decisions and Actions After Adoption
By Danai Hazel Kudya · February 2026
Across the world, governments are adopting national artificial intelligence strategies to support economic development, service delivery, and administrative efficiency. These strategies mark an important shift. Artificial intelligence is no longer treated as a distant technological frontier. It is becoming part of everyday public administration.
When adoption begins, the most immediate changes are practical. Agencies pilot systems. Procurement processes expand. Vendors propose solutions. Digital platforms evolve from record-keeping tools into decision-support environments. Gradually, certain administrative judgments come to be influenced by automated analysis.
At this point a subtle transformation occurs. Government is no longer only using technology to manage information. It begins using systems that participate in shaping decisions.
The question that follows is not technical. It is administrative. When a system contributes to a decision affecting a citizen, how is public authority being exercised?
The Administrative Nature of AI
Public institutions exercise authority through identifiable processes. An official acts within a mandate. A decision is recorded. A reason can be given. A citizen can challenge the outcome. Responsibility is traceable.
Artificial intelligence does not remove these requirements. It changes the environment in which they must operate.
A risk scoring model may influence eligibility. A triage system may prioritise inspections. An allocation tool may rank applicants. A predictive system may shape enforcement attention.
None of these systems replaces the State. But each alters how judgment is formed before a formal decision is issued.
The administrative system therefore evolves from human judgment supported by records to human authority supported by analytical systems. This distinction is important because administrative law was designed for human reasoning; AI introduces mediated reasoning.
The State still acts. But it acts through an additional layer.
The Emerging Responsibility Questions
Once automated analysis participates in decision formation, familiar administrative principles must be interpreted in operational terms.
Attribution: Which official is considered the decision-maker when analysis originates from a system?
Justification: How does an institution explain the basis of a decision influenced by computational reasoning?
Contestability: What pathway allows a citizen to challenge the outcome in a meaningful way?
Record: What constitutes the official administrative record, the final output alone or the process that produced it?
Authority: Who is authorised to rely on the system and under what conditions?
These questions do not arise because AI is problematic. They arise because public authority must remain intelligible.
A Shift in Administrative Practice
Historically, government modernisation digitised forms and workflows. Artificial intelligence changes something deeper: it shapes how conclusions are reached before they are formalised.
This does not weaken public administration. Properly handled, it strengthens consistency, improves efficiency, and supports evidence-informed decisions.
However, as analytical capability increases, administrative clarity must increase alongside it. The legitimacy of the State rests not only on correct outcomes, but on understandable decisions. Citizens comply with decisions they can recognise as decisions of government.
Readiness Beyond Adoption
A national strategy enables institutions to explore and deploy new tools. It does not automatically determine how responsibility is structured once those tools influence outcomes.
The maturity of adoption is therefore not measured only by capability or usage. It is measured by whether authority remains clear when systems participate in reasoning.
Before disputes arise, institutions benefit from answering simple operational questions:
Who stands behind the decision? What explanation can be provided? How can the outcome be reviewed? Where is the official record? Under whose mandate was the system relied upon?
These questions are administrative, not technical. They ensure that innovation strengthens governance rather than obscuring it.
Continuous Administrative Action
Public administration has traditionally been organised around identifiable acts. An official receives information. A decision is made. A record is produced. Responsibility can therefore be traced to a moment in time and to a designated office.
Emerging AI systems introduce a different administrative posture. Some systems no longer merely assist a single decision. They initiate, sequence, and execute actions across an ongoing process. They monitor inputs, adapt behaviour, and continue operating within parameters set earlier by officials.
The question therefore changes. The issue is no longer only whether a specific decision can be explained. The issue becomes whether the institution retains meaningful control over a continuing chain of actions carried out in its name.
In such environments, responsibility cannot depend solely on identifying the final outcome. Governance must instead be anchored in the act of authorisation.
An institution must be able to show what the system was permitted to do, where its authority stopped, when human intervention was required, and who remained accountable throughout the operation of the process.
Without this clarity, administration shifts from delegated authority to diffused authority. Actions still occur, but responsibility becomes difficult to locate.
This does not require rejecting advanced systems. It requires recognising that continuous administrative action must be governed differently from discrete administrative decisions.
The rule of law is preserved not by limiting innovation, but by ensuring that institutional responsibility remains visible while systems operate over time.
The Measure of Readiness
Artificial intelligence marks a new phase of public sector modernisation. The task ahead is neither to slow adoption nor to treat technology as exceptional. It is to ensure that the structure of responsibility evolves alongside analytical capability.
Government authority ultimately rests on recognisable accountability. Even when supported by advanced systems, decisions must remain decisions of the State, and ongoing actions must remain actions carried out under identifiable mandate.
A strategy enables institutions to begin using new tools. Readiness is demonstrated when responsibility remains clear once those tools influence outcomes and processes.
In the coming years the practical test will be straightforward. Not whether public institutions use artificial intelligence, but whether authority, explanation, review, and accountability remain visible when they do.
Where that clarity exists, innovation strengthens legitimacy. Where it does not, efficiency arrives faster than trust.
The work of governing AI therefore begins after adoption, inside everyday administration, where public authority must continue to be understood by the citizens it serves.
Suggested citation
Kudya, Danai Hazel (2026). Public Authority in the Age of AI: Governing Decisions and Actions After Adoption. AGCIH Articles.