AGCIH
Africa Governance & Civic Innovation Hub  ·  Working Paper Series
No. 005  ·  April 2026

When Assistance Becomes Authority

AI in Courts, Non-Delegable Judgment, and the Rule of Law

Danai Hazel Kudya
Founder & Executive Director, Africa Governance & Civic Innovation Hub (AGCIH)
April 2026  ·  agcih.africa
ISSN Pending  ·  © 2026 AGCIH  ·  All AGCIH Doctrines are the Intellectual Property of the Africa Governance & Civic Innovation Hub

Intellectual Property & Citation Notice

© 2026 Africa Governance & Civic Innovation Hub (AGCIH). All rights reserved. This publication may not be reproduced, distributed, or transmitted in any form without the prior written permission of AGCIH, except for brief quotations in academic or professional works with full attribution.

The doctrines of Continuous Administration, Administrative Hosting Capacity, Relocation of Judgment, Functional Drift, Institutional Friction, Governance Readiness, Administrative Drift, and Non-Delegable Judicial Authority — as defined and applied in this publication and in AGCIH Working Paper 004 — are original intellectual contributions of AGCIH. Any use in academic, policy, legal, or institutional work must carry explicit AGCIH attribution.

This paper is the doctrinal companion to AGCIH Working Paper 004 (March 2026). Working Paper 004 asks whether institutions are ready to govern AI. This paper asks whether authority has already moved.

Kudya, D.H. (2026). When Assistance Becomes Authority: AI in Courts, Non-Delegable Judgment, and the Rule of Law. AGCIH Working Paper 005. Harare: Africa Governance & Civic Innovation Hub. agcih.africa

Permissions and collaboration: contact@agcih.africa

Executive Summary

Artificial intelligence is no longer a peripheral issue for justice systems. It is entering courts, tribunals, registries, legal research processes, drafting workflows, and case administration. The governance question — addressed here as a constitutional and administrative law matter — is whether courts can remain fully responsible for the authority they continue to exercise as their operational processes become increasingly AI-mediated.

This paper argues that the central danger posed by AI in courts is not exhausted by bias, inaccuracy, or efficiency loss. The deeper danger lies in the gradual migration of practical adjudicative authority into socio-technical systems that shape judicial action without themselves bearing legal responsibility. Courts may remain formally in charge while becoming operationally dependent on systems they do not meaningfully govern.

This paper advances a harder claim than institutional readiness alone: the most serious rule-of-law threat in judicial AI is Administrative Drift — the quiet migration of adjudicative authority through ordinary operational adoption, before any legal threshold is visibly crossed. To analyse this, the paper develops the AGCIH doctrine of Administrative Drift alongside Non-Delegable Judicial Authority, and applies AGCIH's companion doctrines of Continuous Administration, Administrative Hosting Capacity, and Institutional Friction.

The paper's primary original contribution is a set of seven rule-of-law tests — sequenced as two threshold tests (Non-Delegability and Attribution) and five operational tests — forming a diagnostic instrument for judicial institutions to determine whether AI use has crossed from permissible assistance into unconstitutional authority migration.

This paper is the doctrinal companion to AGCIH Working Paper 004. Working Paper 004 asks whether institutions are ready to govern AI. This paper asks the harder question: when AI is already present, has assistance already become authority?

1. The Governance Moment Inside the Courtroom

The governance moment has shifted. Earlier public-sector AI debates were framed around innovation, strategy, and ethical aspiration. Those debates served an important purpose. But they are no longer sufficient for judicial institutions already operating with AI inside their processes.

AI now enters justice settings through transcription tools, translation tools, case management environments, legal search systems, document processing, drafting support, data analysis, and increasingly sophisticated decision-support tools. The practice is documented, growing, and, in most judicial contexts, running ahead of any institutional governance framework designed to contain it.

Courts are not ordinary administrative sites. They are institutions through which public authority is exercised in its most sensitive form: the authoritative interpretation of law, the determination of rights, the supervision of procedure, the attribution of responsibility, and the production of reasons capable of review. Once AI enters this environment, governance cannot be treated as a soft afterthought. It becomes integral to legality itself.

The governance question is not whether AI is present in courts. It is whether the conditions under which judicial authority is exercised have been quietly reorganised by systems outside the traditional architecture of law.

2. From Assistance to Authority-Sensitive AI

Much current discussion remains trapped in an inadequate binary: AI either merely assists judges or fully replaces them. That framing is too crude to capture the real governance problem.

The relevant threshold is not replacement. It is the point at which assistance becomes authority-sensitive — when systems begin to shape what is visible, actionable, prioritised, or administratively thinkable inside the judicial process.

AGCIH Working Paper 004 introduced the doctrine of Functional Drift to describe how assistive tools progressively reorganise institutional behaviour around their outputs without formal delegation. This paper addresses a related but structurally distinct mechanism: Administrative Drift.

AGCIH Doctrine
Administrative Drift
The quiet migration of practical adjudicative authority through ordinary operational adoption of AI systems — occurring not at a single identifiable point of delegation, but accumulating across workflow, procurement, vendor dependency, and institutional normalisation, such that practical control over the conditions of judgment has relocated before any legal threshold is visibly crossed. Administrative Drift is distinguished from Functional Drift (AGCIH Working Paper 004) in that it is structural and systemic rather than behavioural: it describes where authority has gone, not how institutional behaviour has changed around it.

3. Diffusion of Agency and the Accountability Gap

The rule of law requires identifiable authority, attributable responsibility, reasoned action, contestability, and institutional answerability. AI complicates these conditions because it fragments the chain of action across judges, clerks, platform providers, model developers, case management systems, datasets, and procurement contracts that quietly define what is technically possible.

Diffusion of agency in courts produces a specific and serious accountability gap. Judicial legitimacy depends not only on substantive outcomes but on the integrity and accountability of the path by which those outcomes are reached. Where agency diffuses too far, accountability no longer travels with authority. The institution remains named as responsible while the architecture of responsibility has been distributed across actors and systems that bear no legal obligation for the result.

This asymmetry of visibility is a rule-of-law problem independent of any specific error: the litigant or accused whose case was shaped by systems they cannot interrogate is the party least well positioned to identify, articulate, or contest what has occurred.

4. The False Comfort of Human in the Loop

Judicial AI discourse often turns quickly to the phrase "human in the loop." The phrase is reassuring. It is also, in many judicial contexts, analytically insufficient.

A human being somewhere in the chain does not mean authority remains meaningfully humanly exercised. The issue is whether the human actor retains three distinct conditions: substantive control (genuine engagement with the material, not a system-ranked summary); informed control (knowledge of what the system contributed and omitted); and independent control (judgment not structurally channelled by defaults or institutional norms of reliance that make machine outputs presumptively authoritative).

A judge who signs a text largely shaped by machine-generated structure, ranking, emphasis, or omission is not exercising full judicial authority merely because the final act bears a human name. A clerk relying on AI-generated summaries under workflow pressure may be facilitating procedural dependence without intending to. A registrar following system prompts may be preserving process formally while surrendering discretion functionally.

"Human in the loop" is not a governance standard. In judicial contexts, it risks becoming a formal mechanism through which Administrative Drift is legitimised rather than prevented.

Because judicial legitimacy depends on the actual exercise of independent judgment — not only on its appearance — the gap between formal human presence and substantive human control is a rule-of-law gap, not merely an administrative concern.

5. Non-Delegable Functions of Adjudication

Some functions within the administration of justice are constitutive of adjudication itself. These include the weighing of reasons, the evaluation of procedural fairness, the attribution of legal responsibility, the exercise of legally bounded discretion, the issuing of determinations, and the production of reasons sufficient for review. They may be informed by technology. They cannot be transferred to opaque systems without altering the character of judicial authority.

This is grounded in the administrative law principle — recognised across common law jurisdictions and in the jurisprudence of the African Court on Human and Peoples' Rights under Article 7 of the African Charter — that a body vested with discretionary authority must exercise that authority itself. What AI introduces is not a new principle but a new modality of pressure on an established one: the challenge is whether existing doctrine is applied with sufficient specificity to capture authority migration through Administrative Drift, not only through formal delegation.

AGCIH Doctrine
Non-Delegable Judicial Authority
The AGCIH principle, grounded in administrative law doctrine and international human rights jurisprudence, that certain core functions of adjudication cannot be transferred to automated systems without altering the legal character of adjudicative authority itself. Non-delegability is violated not only by formal delegation but by any operational arrangement that causes these functions to be substantively performed by systems the institution does not meaningfully control.

6. The Migration of Practical Authority

The most consequential shift in judicial AI does not occur at the moment of final judgment. It occurs much earlier — when technical systems begin to structure the environment within which judicial actors work.

Practical authority migrates most consequentially at three specific points. First, at the point of information assembly: what materials reach the judicial actor, in what order, summarised by whom, and with what omissions. Second, at the point of procedural initiation: what triggers action, what moves a file forward, what defaults apply when no active decision is made. Third, at the point of drafting: what structural choices, language patterns, and emphases are embedded in generated outputs before any human editorial judgment is applied.

These are not peripheral workflow functions. They are the conditions under which judgment is formed. Administrative Drift occurs when these conditions are set by systems the institution did not deliberately configure, cannot meaningfully interrogate, and has come to rely upon as a structural feature of its operations.

Authority may move without an official act of transfer. It moves through the accumulation of operational decisions that, individually, seem unremarkable — and collectively, reorganise the practical architecture of judgment.

7. Continuous Administration and Judicial Process

AGCIH's doctrine of Continuous Administration (Working Paper 004) holds that justice is sustained through a continuous chain — filing, registration, scheduling, disclosure, record management, hearing preparation, case tracking, evidence handling, drafting, appeal processing — not only at moments of decision.

AI enters that chain as a layer within continuous administration. A system introduced at one node may, over time, reshape behaviour across many others. Administrative Drift is most difficult to detect in this continuous environment: any single point, examined in isolation, may appear adequately supervised. The governance failure is cumulative — authority has migrated not from any one identifiable function, but across the administrative lifespan of judicial process as a whole.

8. Administrative Hosting Capacity and Reviewability

AGCIH's doctrine of Administrative Hosting Capacity (Working Paper 004) applies directly to the reviewability obligation. A court that cannot host an AI system institutionally — meaning it cannot define, supervise, suspend, investigate, and explain the system's role — cannot satisfy the reviewability requirement that rule-of-law governance demands.

Reviewability is a core rule-of-law threshold. A judicial process is not adequately governed because it produces an output. It must also produce a pathway that can be reviewed — by parties, appellate structures, and supervisory bodies.

A tension must be acknowledged: courts handle sensitive material whose confidentiality is a legal requirement. The reviewability obligation must therefore function within these constraints through internal mechanisms — audit trails accessible to supervisory bodies within appropriate confidentiality frameworks. The absence of public transparency does not justify the absence of institutional accountability.

Without a reviewable trace, contestation becomes performative. And where contestation becomes performative, the rule of law does not disappear — it becomes a ceremony.
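The internal reviewability mechanism described above can be made concrete. The sketch below is a minimal, hypothetical illustration — not an AGCIH specification or any court's actual standard — of the kind of audit-trail record a judicial institution might retain for supervisory review: which system acted, in what function, what materials it surfaced or omitted, and which identifiable human actor owned the step. All field names are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit-trail record for internal (non-public) supervisory review.
# Field names are illustrative assumptions, not an AGCIH or court standard.
@dataclass
class AIAuditRecord:
    case_ref: str                  # internal case identifier (kept confidential)
    system_id: str                 # which AI system acted
    function: str                  # e.g. "summarisation", "search ranking", "draft generation"
    materials_surfaced: list[str]  # documents the system presented to the judicial actor
    materials_omitted: list[str]   # documents available but not surfaced, where known
    responsible_actor: str         # the identifiable human who owned the step
    overridden: bool               # whether the human actor departed from the system output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def supports_reconstruction(record: AIAuditRecord) -> bool:
    """A record supports review only if it names the system's role and
    an identifiable responsible actor — the minimum needed to reconstruct
    the procedural pathway described in Section 8."""
    return bool(record.system_id and record.function and record.responsible_actor)
```

The design choice worth noting is that the record captures omissions as well as outputs: a trace listing only what the system produced, and not what it left out, cannot support the reconstruction that reviewability demands.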

9. Institutional Friction in AI-Enabled Justice

AI does not enter a neutral field. The collision points between AI systems and the legal, procedural, and normative architecture of courts are sites of Institutional Friction — an AGCIH doctrine developed in Working Paper 004.

In the judicial context, Institutional Friction takes forms specific to the constitutional function of courts: generated content may fabricate citations, compress nuance, omit relevant material, or quietly privilege administratively convenient pathways over procedurally fair ones. These are documented realities in African legal practice, as the South African cases of Parker v Forsyth (2023) and Mavundla (2025) demonstrate.

Institutional Friction also operates at a deeper level when AI shapes the informational environment of judgment itself — through ranked search, automated summarisation, or draft generation — introducing friction between the system's model of legal relevance and the institution's constitutional understanding of what is fair and legally adequate. Courts must also develop awareness of AI-mediated risks entering through the surrounding professional environment, not only through internal systems.

10. The African Judicial Context

The risks analysed in this paper are most consequential where AI adoption is outpacing institutional governance capacity and where structural conditions for Administrative Drift are most present: vendor-led procurement, externally funded digitisation, constrained institutional resources, and limited regulatory frameworks. This describes a significant proportion of African judicial institutions.

The specific contribution of this paper to that landscape is diagnostic. The seven rule-of-law tests in Section 11 are designed for use at any stage of AI adoption — including where AI is already operational — to determine whether administrative authority has already migrated beyond adequate institutional control. For institutions in early adoption stages, they are pre-deployment governance conditions. Where AI is already present, they are a readiness audit.

The most serious risk in the African judicial context is not that courts will formally decide to delegate adjudicative authority to automated systems. It is that the conditions of Administrative Drift will allow practical authority to migrate without any institutional deliberation about whether that is permissible.

African judicial institutions that now establish governance doctrine grounded in non-delegability, hosting capacity, and reviewability are not merely managing risk. They are setting constitutional standards for judicial AI governance on their own terms, drawing on their own legal frameworks, rather than inheriting standards developed in different constitutional traditions.

11. Seven Rule-of-Law Tests for AI in Courts

The governance framework developed in this paper generates seven rule-of-law tests, sequenced deliberately. Tests 1 and 2 are threshold tests: failure at either presents a fundamental rule-of-law problem that operational governance measures cannot resolve alone. Tests 3 through 7 are operational tests: they assess whether governance architecture around a permissible use is adequate. The threshold tests must be applied first.

Test 1  ·  Threshold
Non-Delegability
Has the AI system entered a function that is constitutively judicial — the weighing of reasons, attribution of responsibility, evaluation of fairness, exercise of bounded discretion, or production of reviewable reasons?
Failure signal: The system is performing or structurally shaping a non-delegable function. Deployment in this form is institutionally unsafe under a rule-of-law standard regardless of accuracy or efficiency.
Test 2  ·  Threshold
Attribution
Can a legally identifiable judicial actor genuinely own — with substantive, informed, and independent control — both the reasoning and the result? Formal signature is not sufficient.
Failure signal: Human presence is functioning as legitimising cover for a process the human actor does not substantively control. This is Administrative Drift in its most advanced form.
Test 3  ·  Operational
Reviewability
Is there a trace — accessible to parties, appellate bodies, and supervisory institutions within appropriate confidentiality constraints — sufficient to reconstruct the system's role, the materials it surfaced or omitted, and its influence on the procedural pathway?
Failure signal: Contestation is formally available but practically impossible. The rule of law is reduced to ceremony.
Test 4  ·  Operational
Interrogability
Can the institution meaningfully question, limit, suspend, or contradict the system's outputs? Does it have the access rights, technical capability, contractual authority, and institutional culture needed to override those outputs without operational penalty?
Failure signal: The institution has acquired a system it cannot host. Administrative Hosting Capacity (AGCIH Working Paper 004) is absent.
Test 5  ·  Operational
Institutional Independence
Has practical authority over information assembly, procedural initiation, or draft production migrated to a vendor, platform, or opaque model environment that the institution cannot scrutinise or replace without operational collapse?
Failure signal: Vendor dependency has become structural. The institution is no longer fully governing on its own terms.
Test 6  ·  Operational
Procedural Integrity
Has the system altered timing, prioritisation, evidence handling, reasons-giving, or the field of considered options in ways that affect the fairness of the process — including in ways neither institution nor parties can currently see?
Failure signal: Institutional Friction is accumulating silently. Injustice may be occurring without any single identifiable act of wrongdoing.
Test 7  ·  Operational
Remedy
Can the person affected by the AI-shaped process challenge it, understand what the system did, and seek effective remedy in a way that is real and practically usable — not merely formally available?
Failure signal: Rights exist in law but not in practice. The rule-of-law obligation to provide effective remedy is not being met.

These seven tests are not a compliance checklist. They are a diagnostic instrument. Failure at any test does not require the removal of the AI system — it requires a governance decision: whether the current use can continue at all, and if so, under what restructured conditions.
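The sequencing rule stated above — threshold tests first, operational tests only once the thresholds are passed — can be sketched as a simple diagnostic routine. This is an illustrative reading of the paper's instrument, not an official AGCIH tool; the test names follow Section 11, and the pass/fail inputs are assumed to come from an institutional self-assessment.

```python
# Illustrative sketch of Section 11's sequencing: Tests 1-2 are thresholds,
# and failure at either is reported before operational results are weighed.
THRESHOLD_TESTS = ["non_delegability", "attribution"]
OPERATIONAL_TESTS = [
    "reviewability",
    "interrogability",
    "institutional_independence",
    "procedural_integrity",
    "remedy",
]

def diagnose(results: dict[str, bool]) -> dict[str, list[str]]:
    """results maps each test name to True (passed) or False (failed).
    Returns failures grouped by severity. A threshold failure is a
    fundamental rule-of-law problem that operational governance measures
    cannot resolve alone, so operational findings are deferred until
    the threshold problem is addressed."""
    threshold_failures = [t for t in THRESHOLD_TESTS if not results.get(t, False)]
    if threshold_failures:
        return {"threshold": threshold_failures, "operational": []}
    return {
        "threshold": [],
        "operational": [t for t in OPERATIONAL_TESTS if not results.get(t, False)],
    }
```

Consistent with the paper's framing, the routine returns failures rather than a pass/fail verdict: each failure marks a governance decision to be made, not an automatic instruction to remove the system.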

Conclusion: Whether Assistance Has Become Authority

The defining danger of AI in courts is not that machines will suddenly replace judges. The deeper danger is quieter: adjudicative authority may be slowly reorganised through ordinary administrative adoption while law continues to speak in the vocabulary of human judgment.

Administrative Drift describes precisely this — the migration of authority that occurs without any official act of transfer, through the accumulation of operational decisions that individually seem unremarkable and collectively reorganise the practical architecture of judgment.

The seven rule-of-law tests in Section 11 exist to answer the question this paper poses. Applied to any AI system already operating within a judicial context, they reveal whether the institution has retained — across attribution, non-delegability, reviewability, interrogability, institutional independence, procedural integrity, and remedy — the genuine governance conditions under which judicial authority remains lawful, accountable, and constitutionally legitimate.

Courts may modernise. They may use AI to improve access, reduce backlogs, and serve the public more effectively. But they cannot surrender attributable judgment, reviewable procedure, or institutional control to systems they do not meaningfully govern — without ceasing, in any constitutionally meaningful sense, to be courts.

The most important judicial governance question in the age of AI is not whether assistance is available. It is whether, in the specific context of this institution, with this system, performing this function — assistance has already become authority. Where it has, the tests show what must change. Where it has not, they show what must be protected.

References and Sources

Kudya, D.H. (March 2026). AI in Courts and the Rule of Law. AGCIH Working Paper 004. Harare: AGCIH.

Kudya, D.H. (January 2026). Rule of Law in the Age of Agentic AI. AGCIH Knowledge Paper 001. Harare: AGCIH.

Kudya, D.H. (March 2026). Procurement as the Gateway of Digital State Power. AGCIH Working Paper 003. Harare: AGCIH.

UNESCO. (2026). AI Essentials for Judges. Paris: UNESCO.

UNESCO. (2025). Guidelines for the Use of AI Systems in Courts and Tribunals. Paris: UNESCO.

UNESCO. (March 2026). African Network of Judicial Trainers Workshop on AI and the Rule of Law. Maputo.

African Court on Human and Peoples' Rights. African Charter on Human and Peoples' Rights. Article 7 (Right to Fair Trial).

Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal (KZP) (unreported, 8-1-2025) (Bezuidenhout J).

Parker v Forsyth NO and Others (Johannesburg Regional Court) (unreported, 29-6-2023) (Magistrate Chaitram).

Rouvroy, A. & Berns, T. (2013). Algorithmic Governmentality. Réseaux, 177, 233–262.

Citron, D.K. (2008). Technological Due Process. Washington University Law Review, 85(6), 1249–1313.

Pasquale, F. (2015). The Black Box Society. Harvard University Press.

Dworkin, R. (1986). Law's Empire. Harvard University Press.