Executive Summary
Artificial intelligence is no longer a peripheral issue for justice systems. It is entering courts, tribunals, registries, legal research processes, drafting workflows, and case administration. The governance question — addressed here as a constitutional and administrative law matter — is whether courts can remain fully responsible for the authority they continue to exercise as their operational processes become increasingly AI-mediated.
This paper argues that the central danger posed by AI in courts is not exhausted by bias, inaccuracy, or efficiency loss. The deeper danger lies in the gradual migration of practical adjudicative authority into socio-technical systems that shape judicial action without themselves bearing legal responsibility. Courts may remain formally in charge while becoming operationally dependent on systems they do not meaningfully govern.
This paper advances a claim that reaches beyond institutional readiness: the most serious rule-of-law threat in judicial AI is Administrative Drift — the quiet migration of adjudicative authority through ordinary operational adoption, before any legal threshold is visibly crossed. To analyse this, the paper develops the AGCIH doctrine of Administrative Drift alongside Non-Delegable Judicial Authority, and applies AGCIH's companion doctrines of Continuous Administration, Administrative Hosting Capacity, and Institutional Friction.
The paper's primary original contribution is a set of seven rule-of-law tests — sequenced as two threshold tests (Non-Delegability and Attribution) and five operational tests — forming a diagnostic instrument for judicial institutions to determine whether AI use has crossed from permissible assistance into unconstitutional authority migration.
This paper is the doctrinal companion to AGCIH Working Paper 004. Working Paper 004 asks whether institutions are ready to govern AI. This paper asks the harder question: when AI is already present, has assistance already become authority?
1. The Governance Moment Inside the Courtroom
The governance moment has shifted. Earlier public-sector AI debates were framed around innovation, strategy, and ethical aspiration. Those debates served an important purpose. But they are no longer sufficient for judicial institutions already operating with AI inside their processes.
AI now enters justice settings through transcription tools, translation tools, case management environments, legal search systems, document processing, drafting support, data analysis, and increasingly sophisticated decision-support tools. The practice is documented, growing, and, in the majority of judicial contexts, ahead of any institutional governance framework designed to contain it.
Courts are not ordinary administrative sites. They are institutions through which public authority is exercised in its most sensitive form: the authoritative interpretation of law, the determination of rights, the supervision of procedure, the attribution of responsibility, and the production of reasons capable of review. Once AI enters this environment, governance cannot be treated as a soft afterthought. It becomes integral to legality itself.
2. From Assistance to Authority-Sensitive AI
Much current discussion remains trapped in an inadequate binary: AI either merely assists judges or fully replaces them. That framing is too crude to capture the real governance problem.
The relevant threshold is not replacement. It is the point at which assistance becomes authority-sensitive — when systems begin to shape what is visible, actionable, prioritised, or administratively thinkable inside the judicial process.
AGCIH Working Paper 004 introduced the doctrine of Functional Drift to describe how assistive tools progressively reorganise institutional behaviour around their outputs without formal delegation. This paper addresses a related but structurally distinct mechanism: Administrative Drift.
3. Diffusion of Agency and the Accountability Gap
The rule of law requires identifiable authority, attributable responsibility, reasoned action, contestability, and institutional answerability. AI complicates these conditions because it fragments the chain of action across judges, clerks, platform providers, model developers, case management systems, datasets, and procurement contracts that quietly define what is technically possible.
Diffusion of agency in courts produces a specific and serious accountability gap. Judicial legitimacy depends not only on substantive outcomes but on the integrity and accountability of the path by which those outcomes are reached. Where agency diffuses too far, accountability no longer travels with authority. The institution remains named as responsible while the architecture of responsibility has been distributed across actors and systems that bear no legal obligation for the result.
The resulting asymmetry of visibility is a rule-of-law problem independent of any specific error: the litigant or accused whose case has been shaped by systems they cannot interrogate is the party least well placed to identify, articulate, or contest what has occurred.
4. The False Comfort of Human in the Loop
Judicial AI discourse often turns quickly to the phrase "human in the loop." The phrase is reassuring. It is also, in many judicial contexts, analytically insufficient.
A human being somewhere in the chain does not mean authority remains meaningfully humanly exercised. The issue is whether the human actor retains three distinct conditions: substantive control (genuine engagement with the material, not a system-ranked summary); informed control (knowledge of what the system contributed and omitted); and independent control (judgment not structurally channelled by defaults or institutional norms of reliance that make machine outputs presumptively authoritative).
A judge who signs a text largely shaped by machine-generated structure, ranking, emphasis, or omission is not exercising full judicial authority merely because the final act bears a human name. A clerk relying on AI-generated summaries under workflow pressure may be facilitating procedural dependence without intending to. A registrar following system prompts may be preserving process formally while surrendering discretion functionally.
Because judicial legitimacy depends on the actual exercise of independent judgment — not only on its appearance — the gap between formal human presence and substantive human control is a rule-of-law gap, not merely an administrative concern.
5. Non-Delegable Functions of Adjudication
Some functions within the administration of justice are constitutive of adjudication itself. These include the weighing of reasons, the evaluation of procedural fairness, the attribution of legal responsibility, the exercise of legally bounded discretion, the issuing of determinations, and the production of reasons sufficient for review. They may be informed by technology. They cannot be transferred to opaque systems without altering the character of judicial authority.
This is grounded in the administrative law principle — recognised across common law jurisdictions and in the jurisprudence of the African Court on Human and Peoples' Rights under Article 7 of the African Charter — that a body vested with discretionary authority must exercise that authority itself. What AI introduces is not a new principle but a new modality of pressure on an established one: the challenge is whether existing doctrine is applied with sufficient specificity to capture authority migration through Administrative Drift, not only through formal delegation.
6. The Migration of Practical Authority
The most consequential shift in judicial AI does not occur at the moment of final judgment. It occurs much earlier — when technical systems begin to structure the environment within which judicial actors work.
Practical authority migrates most consequentially at three specific points. First, at the point of information assembly: what materials reach the judicial actor, in what order, summarised by whom, and with what omissions. Second, at the point of procedural initiation: what triggers action, what moves a file forward, what defaults apply when no active decision is made. Third, at the point of drafting: what structural choices, language patterns, and emphases are embedded in generated outputs before any human editorial judgment is applied.
These are not peripheral workflow functions. They are the conditions under which judgment is formed. Administrative Drift occurs when these conditions are set by systems the institution did not deliberately configure, cannot meaningfully interrogate, and has come to rely upon as a structural feature of its operations.
7. Continuous Administration and Judicial Process
AGCIH's doctrine of Continuous Administration (Working Paper 004) holds that justice is sustained through a continuous chain — filing, registration, scheduling, disclosure, record management, hearing preparation, case tracking, evidence handling, drafting, appeal processing — not only at moments of decision.
AI enters that chain as a layer within continuous administration. A system introduced at one node may, over time, reshape behaviour across many others. Administrative Drift is most difficult to detect in this continuous environment: any single point, examined in isolation, may appear adequately supervised. The governance failure is cumulative — authority has migrated not from any one identifiable function, but across the administrative lifespan of judicial process as a whole.
8. Administrative Hosting Capacity and Reviewability
Reviewability is a core rule-of-law threshold. A judicial process is not adequately governed merely because it produces an output. It must also produce a pathway that can be reviewed — by parties, appellate structures, and supervisory bodies.
AGCIH's doctrine of Administrative Hosting Capacity (Working Paper 004) applies directly to this obligation. A court that cannot host an AI system institutionally — meaning it cannot define, supervise, suspend, investigate, or explain the system's role — cannot satisfy the reviewability requirement that rule-of-law governance demands.
A tension must be acknowledged: courts handle sensitive material where confidentiality is a legal requirement. The reviewability obligation must function within these constraints through internal reviewability mechanisms — audit trails accessible to supervisory bodies within appropriate confidentiality frameworks. The absence of public transparency does not justify the absence of institutional accountability.
9. Institutional Friction in AI-Enabled Justice
AI does not enter a neutral field. The collision points between AI systems and the legal, procedural, and normative architecture of courts are sites of Institutional Friction — an AGCIH doctrine developed in Working Paper 004.
In the judicial context, Institutional Friction takes forms specific to the constitutional function of courts: generated content may fabricate citations, compress nuance, omit relevant material, or quietly privilege administratively convenient pathways over procedurally fair ones. These are documented realities in African legal practice, as the South African cases of Parker v Forsyth (2023) and Mavundla (2025) demonstrate.
Institutional Friction also operates at a deeper level when AI shapes the informational environment of judgment itself — through ranked search, automated summarisation, or draft generation — introducing friction between the system's model of legal relevance and the institution's constitutional understanding of what is fair and legally adequate. Courts must also develop awareness of AI-mediated risks entering through the surrounding professional environment, not only through internal systems.
10. The African Judicial Context
The risks analysed in this paper are most consequential where AI adoption is outpacing institutional governance capacity and where structural conditions for Administrative Drift are most present: vendor-led procurement, externally funded digitisation, constrained institutional resources, and limited regulatory frameworks. This describes a significant proportion of African judicial institutions.
The specific contribution of this paper to that landscape is diagnostic. The seven rule-of-law tests in Section 11 are designed for use at any stage of AI adoption, to determine whether administrative authority has already migrated beyond adequate institutional control. For institutions in early adoption stages, they function as pre-deployment governance conditions; where AI is already operational, they function as a readiness audit.
The most serious risk in the African judicial context is not that courts will formally decide to delegate adjudicative authority to automated systems. It is that the conditions of Administrative Drift will allow practical authority to migrate without any institutional deliberation about whether that is permissible.
African judicial institutions that now establish governance doctrine grounded in non-delegability, hosting capacity, and reviewability are not merely managing risk. They are setting constitutional standards for judicial AI governance on their own terms, drawing on their own legal frameworks, rather than inheriting standards developed in different constitutional traditions.
11. Seven Rule-of-Law Tests for AI in Courts
The governance framework developed in this paper generates seven rule-of-law tests, sequenced deliberately. Tests 1 and 2 are threshold tests: failure at either presents a fundamental rule-of-law problem that operational governance measures cannot resolve alone. Tests 3 through 7 are operational tests: they assess whether governance architecture around a permissible use is adequate. The threshold tests must be applied first.
These seven tests are not a compliance checklist. They are a diagnostic instrument. Failure at any test does not require the removal of the AI system — it requires a governance decision: whether the current use can continue at all, and if so, under what restructured conditions.
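The sequencing described above can be sketched schematically. This is a purely illustrative simplification — the test identifiers and the pass/fail representation below are hypothetical, since the paper defines the tests doctrinally rather than computationally — but it makes the gating logic concrete: threshold tests are applied first, and a threshold failure is reported as a category of problem that operational assessment cannot resolve.

```python
# Illustrative schematic of the seven-test sequencing (Section 11).
# Test identifiers are hypothetical labels drawn from the paper's prose;
# real application involves doctrinal judgment, not boolean inputs.

THRESHOLD_TESTS = ["non_delegability", "attribution"]
OPERATIONAL_TESTS = [
    "reviewability",
    "interrogability",
    "institutional_independence",
    "procedural_integrity",
    "remedy",
]


def assess(results: dict) -> str:
    """Return a governance finding from per-test pass/fail results.

    Threshold tests are applied first: failure at either signals a
    fundamental rule-of-law problem that operational governance
    measures cannot resolve alone.
    """
    failed_threshold = [t for t in THRESHOLD_TESTS if not results[t]]
    if failed_threshold:
        return "threshold failure: " + ", ".join(failed_threshold)

    failed_operational = [t for t in OPERATIONAL_TESTS if not results[t]]
    if failed_operational:
        return "operational governance gap: " + ", ".join(failed_operational)

    return "no authority migration detected under the seven tests"
```

Note the design point the sketch encodes: an operational finding is never reached while a threshold test stands failed, mirroring the paper's instruction that Tests 1 and 2 must be applied first.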
Conclusion: Whether Assistance Has Become Authority
The defining danger of AI in courts is not that machines will suddenly replace judges. The deeper danger is quieter: adjudicative authority may be slowly reorganised through ordinary administrative adoption while law continues to speak in the vocabulary of human judgment.
Administrative Drift describes precisely this — the migration of authority that occurs without any official act of transfer, through the accumulation of operational decisions that individually seem unremarkable and collectively reorganise the practical architecture of judgment.
The seven rule-of-law tests in Section 11 exist to answer the question this paper poses. Applied to any AI system already operating within a judicial context, they reveal whether the institution has retained — across attribution, non-delegability, reviewability, interrogability, institutional independence, procedural integrity, and remedy — the genuine governance conditions under which judicial authority remains lawful, accountable, and constitutionally legitimate.
Courts may modernise. They may use AI to improve access, reduce backlogs, and serve the public more effectively. But they cannot surrender attributable judgment, reviewable procedure, or institutional control to systems they do not meaningfully govern — without ceasing, in any constitutionally meaningful sense, to be courts.
The most important judicial governance question in the age of AI is not whether assistance is available. It is whether, in the specific context of this institution, with this system, performing this function — assistance has already become authority. Where it has, the tests show what must change. Where it has not, they show what must be protected.
References and Sources
Kudya, D.H. (March 2026). AI in Courts and the Rule of Law. AGCIH Working Paper 004. Harare: AGCIH.
Kudya, D.H. (January 2026). Rule of Law in the Age of Agentic AI. AGCIH Knowledge Paper 001. Harare: AGCIH.
Kudya, D.H. (March 2026). Procurement as the Gateway of Digital State Power. AGCIH Working Paper 003. Harare: AGCIH.
UNESCO. (2026). AI Essentials for Judges. Paris: UNESCO.
UNESCO. (2025). Guidelines for the Use of AI Systems in Courts and Tribunals. Paris: UNESCO.
UNESCO. (March 2026). African Network of Judicial Trainers Workshop on AI and the Rule of Law. Maputo.
African Charter on Human and Peoples' Rights (adopted 1981), Article 7 (Right to Fair Trial).
Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal (KZP) (unreported, 8-1-2025) (Bezuidenhout J).
Parker v Forsyth NO and Others (Johannesburg Regional Court) (unreported, 29-6-2023) (Magistrate Chaitram).
Rouvroy, A. & Berns, T. (2013). Algorithmic Governmentality. Réseaux, 177, 233–262.
Citron, D.K. (2008). Technological Due Process. Washington University Law Review, 85(6), 1249–1313.
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Dworkin, R. (1986). Law's Empire. Harvard University Press.