Africa Governance & Civic Innovation Hub
Knowledge Paper Series  ·  No. 002  ·  March 2026

AI in Courts and the Rule of Law

Why Judicial Readiness Is a Governance Question Before It Is a Technology Question

Danai Hazel Kudya
Founder & Executive Director, Africa Governance & Civic Innovation Hub (AGCIH)
March 2026  ·  agcih.africa
ISSN Pending  ·  © 2026 AGCIH  ·  All AGCIH Doctrines are the Intellectual Property of the Africa Governance & Civic Innovation Hub

Intellectual Property & Citation Notice

© 2026 Africa Governance & Civic Innovation Hub (AGCIH). All rights reserved. This publication may not be reproduced, distributed, or transmitted in any form without the prior written permission of AGCIH, except for brief quotations in academic or professional works with full attribution.

The doctrines of Continuous Administration, Administrative Hosting Capacity, Relocation of Judgment, Functional Drift, Institutional Friction, and Governance Readiness — as defined in this publication — are original intellectual contributions of the Africa Governance & Civic Innovation Hub (AGCIH). These doctrines are protected as the proprietary analytical framework of AGCIH. Any use, citation, or application in academic, policy, legal, or institutional work must carry explicit attribution to AGCIH and reference this publication.

Suggested citation: Kudya, D.H. (2026). AI in Courts and the Rule of Law: Why Judicial Readiness Is a Governance Question Before It Is a Technology Question. AGCIH Knowledge Paper 002. Harare: Africa Governance & Civic Innovation Hub. agcih.africa

For permissions, licensing, institutional use enquiries, or collaboration: contact@agcih.africa

Executive Summary

Artificial intelligence is no longer approaching justice systems as a distant possibility. It is already entering courts, tribunals, judicial training institutions, registries, legal research environments, case management platforms, and administrative workflows. Recent UNESCO guidance for the judiciary reflects a significant global acknowledgement: the central question is no longer whether judicial institutions will encounter AI, but whether they are institutionally prepared to govern its use without weakening judicial independence, due process, accountability, and public trust.

This paper argues that AI in justice must not be treated primarily as a technical modernisation issue. It is, before all else, a rule-of-law issue. The core question is not simply whether AI can support efficiency, research, or workflow. The deeper and more consequential question is whether judicial institutions can preserve lawful authority, attributable judgment, procedural fairness, and reviewable decision-making in environments increasingly shaped by automated systems.

As AI tools become more capable — and as some move from assistance toward recommendation, prioritisation, drafting, triage, and action-shaping functions — the governance problem sharpens. Responsibility can become diffused across software vendors, court administrators, model providers, data infrastructures, judicial officers, registry staff, and institutional users. This diffusion of agency creates a direct rule-of-law risk: when authority becomes operationally dependent on systems that institutions do not meaningfully govern, accountability becomes difficult to locate, explain, contest, or repair.

This paper advances a governance-centred framework drawing on six original AGCIH doctrines — Functional Drift, Continuous Administration, Administrative Hosting Capacity, Relocation of Judgment, Institutional Friction, and Governance Readiness — each defined at its first point of application and constituting the proprietary analytical framework of the Africa Governance & Civic Innovation Hub. Courts do not merely adopt AI. They must govern it. And they must do so without surrendering the legal and institutional conditions that make justice legitimate.

1. The Judicial Governance Moment

Global conversations on artificial intelligence in the public sector have evolved quickly. Early debates centred on innovation strategies, ethical principles, and digital transformation ambitions. Those discussions were necessary. But the governance moment has now shifted.

AI is increasingly entering public institutions not as a distant policy concept, but as operational infrastructure. In the justice sector, this includes legal research support, transcription, translation, case sorting, workflow management, drafting assistance, and other decision-support functions. UNESCO's rule-of-law work has identified growing use of AI tools by judicial actors alongside limited institutional training and guidance — a gap its recent judicial materials directly seek to address.

This is an institutional warning, not merely a capacity observation. AI is already present in justice settings, but the governance architecture around its use remains uneven, incomplete, and in many contexts underdeveloped. Judicial AI is not a future-facing capacity issue. It is a present constitutional, administrative, and rule-of-law issue.

The question is not whether courts will encounter AI. It is whether they will encounter it on rule-of-law terms — with authority intact, accountability traceable, and institutional control genuinely held.

The distinction matters precisely because justice is not a technical domain. Courts are constitutional institutions. Their legitimacy derives not only from the correctness of their outcomes, but from the integrity of the processes through which those outcomes are reached. Any framework for AI in courts that begins and ends with efficiency is answering the wrong question.

2. From Assistance to Authority-Sensitive AI

Not every use of AI in courts raises the same level of institutional concern. Some functions are clearly assistive: summarising large records, translating documents, transcribing hearings, improving search, or supporting internal administrative workflow. Such uses do not necessarily displace adjudicative authority.

The governance threshold rises when AI begins to shape the pathway of institutional action — when systems influence which matters are prioritised, what legal materials are surfaced or omitted, how draft outputs are framed, how procedural steps are triggered, or how officials rely on machine-generated reasoning as a practical substitute for independent judgment. The threshold is defined not by a tool's label but by its functional relationship to the exercise of public authority.

This distinction carries constitutional significance. A justice institution may permissibly use AI to support work. It cannot allow support systems to become hidden substitutes for judicial reasoning, procedural discretion, or attributable decision-making. The danger is rarely formal delegation — it is what AGCIH terms Functional Drift.

AGCIH Doctrine
Functional Drift
The process by which an AI system introduced as an assistive tool progressively reorganises institutional behaviour around its outputs — without any formal delegation of authority — until the institution continues to appear in charge while practical decision-making authority has quietly migrated elsewhere. Functional Drift leaves no clear point of delegation, no formal record of transfer, and no accountable official who authorised the shift. In a justice context, Functional Drift is not merely administrative risk — it is constitutional risk.

Functional Drift is distinguishable from explicit automation precisely because it leaves no clear point of delegation to examine. The institution continues to act in its own name while the architecture of its reasoning is restructured from within. The visible form of judicial independence is preserved while its substantive content is eroded.

3. The Core Rule-of-Law Problem: Diffusion of Agency

The rule of law requires identifiable authority, explainable procedures, contestable decisions, and institutions capable of giving reasons for the exercise of public power. These are not procedural formalities. They are the structural conditions under which public authority can be held accountable by citizens, oversight bodies, and courts of review.

AI complicates this structure because it fragments agency across multiple actors and systems. In an AI-enabled judicial environment, the practical chain behind an outcome may involve a court, a software vendor, a model provider, a case management platform, a data pipeline, internal staff, external consultants, and a judicial user interacting with a machine-generated output.

The resulting governance problem — diffusion of agency — is distinct from ordinary institutional complexity. What AI introduces is not merely distribution but opacity — accountability chains that are not only dispersed but actively difficult to reconstruct after the fact. It is this opacity, not distribution alone, that weakens accountability in the rule-of-law sense. The remedy is not to concentrate accountability in a single official, but to ensure that accountability across the chain is legible, traceable, and enforceable — conditions that AI-mediated processes do not automatically satisfy.

Where accountability chains are both dispersed and opaque, the rule of law becomes ceremonial — formally present but functionally unenforceable.

4. Judicial Independence and Non-Delegable Authority

The non-delegability of core judicial functions is not a novel proposition introduced by AI. It is a principle with deep roots across administrative law traditions, constitutional doctrine, and the rule-of-law jurisprudence of international human rights bodies. Courts in common law jurisdictions have long held that a judicial body must reach its own independent conclusions and cannot merely ratify the determinations of another body. The African Charter on Human and Peoples' Rights and the jurisprudence of the African Court on Human and Peoples' Rights establish the right to a fair trial and an independent judiciary as foundational guarantees — requiring that the authority exercised in the name of judicial determination is genuinely exercised by the judicial institution itself.

What AI introduces is not a doctrinal novelty but a new form of pressure on a long-established principle. In most African jurisdictions, the legal architecture to enforce non-delegability against AI-mediated erosion remains underdeveloped. That is the governance gap this paper addresses.

There are functions within justice systems that are, in substance, non-delegable: the weighing of evidence and reasons, the evaluation of procedural fairness, the attribution of legal responsibility, the issuing of determinations, and the preservation of reviewable procedural integrity. These functions may be informed by technology. They cannot be transferred to opaque systems without altering the character of adjudicative authority itself.

Judicial independence is not a static condition. It must be actively maintained against the structural pressures that AI systems introduce — pressures that operate through convenience, dependency, and drift, not through confrontation.

5. Continuous Administration in the Justice Sector

Justice systems are not episodic forums that operate only when a hearing is held or an order issued. They are continuous institutions. Their legitimacy depends on sustained, lawful administrative presence across filing, scheduling, registry functions, document management, evidence handling, record continuity, review pathways, and procedural follow-through.

AGCIH Doctrine
Continuous Administration
The principle, developed by AGCIH, that public institutions operate not as episodic decision-makers but as ongoing administrative presences. AI enters this continuous environment and must therefore be governed across the full span of institutional activity — not only at discrete decision points — because its effects accumulate across time, workflow, and institutional culture.

AI enters the continuous judicial environment not as an isolated intervention but as a layer within ongoing institutional administration. That is precisely why governance cannot be applied only at the moment of a specific decision. A tool that appears useful at one point in the workflow may, over time, alter documentation standards, staff reliance, procedural timing, internal discretion, and record quality across the institution as a whole.

Continuous Administration requires more than service continuity. It requires continuity of lawful control. Justice institutions must remain capable of sustaining accountable, reviewable, rights-respecting administration even as their internal processes become more technologically mediated.

6. Administrative Hosting Capacity for Courts and Tribunals

The AGCIH doctrine of Administrative Hosting Capacity addresses the gap between acquiring a system and governing it. A court should not merely use an AI tool. It should be institutionally capable of hosting its use.

AGCIH Doctrine
Administrative Hosting Capacity
The institutional ability of a public authority to legally anchor, supervise, constrain, recalibrate, explain, and if necessary suspend an automated system operating within its governance environment. Hosting is distinct from procurement or deployment: an institution may have acquired a system it cannot meaningfully host, and in that gap accountability fails. First articulated by AGCIH in the context of AI and public administration governance.

To host a system in governance terms means the institution can define the tool's boundaries, understand its role, supervise its use, constrain its outputs, suspend it when necessary, investigate failures, retain meaningful records, and explain its implications for rights and procedure. Meaningful scrutiny requires access to audit logs and system outputs; contractual rights to investigate and suspend; internal expertise to evaluate whether outputs are consistent with legal requirements; and the institutional authority to override system outputs without operational penalty.

A court that cannot scrutinise, suspend, or override the systems it uses has not modernised. It has outsourced its authority — silently, and without a formal act of delegation.

Administrative Hosting Capacity asks a prior question before adoption: does the institution possess the governance ability to absorb AI without weakening its own authority? Where the answer is no, the appropriate response is to sequence adoption in ways that match hosting ability to the depth of function being automated.

AGCIH Doctrine
Relocation of Judgment
An AGCIH doctrine identifying the temporal displacement of administrative discretion upstream into system design, procurement, and configuration — such that the practical exercise of public authority occurs before deployment, not at the point of institutional decision. The institution that does not govern its AI at the design and procurement stage may find that its most consequential administrative choices have already been made for it, by others, before the system was ever switched on.

7. Institutional Friction in AI-Enabled Justice

AI does not enter a neutral environment. It meets legal procedure, evidentiary standards, professional duties, appellate structures, constitutional obligations, and public expectations of fairness. The collision points between AI systems and this existing architecture are what AGCIH terms Institutional Friction.

AGCIH Doctrine
Institutional Friction
An AGCIH doctrine identifying the structural collision between AI systems and the legal, procedural, and normative architecture of public institutions. Friction may be productive — surfacing governance gaps and forcing institutional deliberation — or destructive, accumulating silently as hidden injustice when institutions lack the capacity to detect, examine, or respond to it.

Institutional Friction in justice settings is no longer hypothetical. In the South African legal profession, two documented episodes have brought AI-generated fabricated citations before courts: first in a Johannesburg Regional Court in 2023 (Parker v Forsyth), and more extensively in Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal (2025), where only two of nine cited authorities were genuine. Both cases involved practitioner conduct — lawyers who submitted AI-generated material without verification. The locus of failure was the legal profession, not the judicial institution. In Mavundla, the court's response was itself a governance act: it identified the problem publicly and referred the matter to the Legal Practice Council.

The cases demonstrate a second dimension of Institutional Friction: professional governance gaps in the legal profession feed risks directly into courts. When lawyers operate without adequate professional guidance on AI, the burden of detection falls on the court — a burden courts can carry only if they have the awareness and training to recognise what they are encountering.
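The detection burden described above can be partially mechanised. The following is a deliberately simplified sketch under strong assumptions: `TRUSTED_INDEX` stands in for a real, authoritative law-report database (which no court should replace with a hard-coded set), and the matching is naive string normalisation rather than proper citation parsing. Its point is the workflow, not the lookup: every cited authority is checked against a trusted source, and anything unmatched is flagged for manual verification rather than silently accepted.

```python
# Hypothetical stand-in for an authoritative case-law index.
TRUSTED_INDEX = {
    "parker v forsyth no and others",
    "mavundla v mec: department of co-operative government "
    "and traditional affairs kwazulu-natal",
}

def normalise(citation: str) -> str:
    """Crude normalisation: lowercase and collapse whitespace."""
    return " ".join(citation.lower().split())

def verify_citations(citations: list[str]) -> dict[str, bool]:
    """Map each cited authority to True (found in the trusted index)
    or False (unverified: must be checked by a human before filing)."""
    return {c: normalise(c) in TRUSTED_INDEX for c in citations}
```

A tool of this kind does not remove the practitioner's duty of verification; it narrows the court's detection burden by surfacing which authorities could not be confirmed.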

8. Governance Readiness: From Doctrine to Institutional Condition

The five doctrines developed in the preceding sections form an integrated diagnostic framework. Each identifies a specific point at which judicial authority is placed under stress by AI adoption, and each points toward the governance architecture required to address that stress.

AGCIH Doctrine
Governance Readiness
An AGCIH doctrine defining the demonstrable institutional condition in which a public authority has established the accountability structures, documentation standards, human oversight protocols, procurement safeguards, and contestation pathways necessary to absorb AI without weakening its own authority. Governance Readiness is not a declaration of intent. It is a verifiable institutional state — one that must be built from within, through deliberate sequencing of adoption to match hosting capacity.

The doctrinal framework generates six governance requirements, each derived directly from the analytical structure above:

Named accountability — responding to Functional Drift: named institutional responsibility for every AI system in judicial use, so that any drift of authority is accompanied by a traceable record rather than a grey zone.

Non-delegable oversight protocols — responding to the non-delegability principle: explicit protocols specifying what may be assisted, what must be reviewed, what may never be delegated, and when human override is mandatory.

Continuous audit trails — responding to Continuous Administration: documentation and record preservation across the full span of AI-mediated activity, not only at formal decision points.

Governance-aware procurement — responding to the Relocation of Judgment: procurement safeguards that reach backward into the design and configuration environment where discretion has already been exercised.

Practical contestation pathways — responding to diffusion of agency: mechanisms enabling individuals to challenge AI-shaped outcomes in ways that are real, not merely formally available.

Trained institutional awareness — responding to Institutional Friction: guidance and training covering AI-mediated risks entering through the surrounding professional environment, not only through internal systems.
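Because Governance Readiness is defined as a verifiable state rather than a declaration, the six requirements above can be expressed as an explicit checklist. The sketch below is illustrative only: the requirement keys and the boolean self-assessment are assumptions of this example, and in practice each requirement would need evidence criteria, not a yes/no flag. It captures the structural point that readiness is the absence of demonstrable gaps.

```python
# The six governance requirements derived in this section.
REQUIREMENTS = [
    "named_accountability",
    "non_delegable_oversight_protocols",
    "continuous_audit_trails",
    "governance_aware_procurement",
    "contestation_pathways",
    "trained_institutional_awareness",
]

def readiness_gaps(institution: dict[str, bool]) -> list[str]:
    """Return the requirements the institution has not demonstrably met.
    Anything absent from the assessment counts as unmet, not unknown."""
    return [r for r in REQUIREMENTS if not institution.get(r, False)]

def is_ready(institution: dict[str, bool]) -> bool:
    """Governance Readiness holds only when no gap remains."""
    return not readiness_gaps(institution)
```

Treating a missing entry as a gap, rather than as an open question, mirrors the paper's sequencing argument: where readiness cannot be demonstrated, adoption at that depth is premature.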

Governance Readiness is not a slogan. It is a demonstrable institutional condition — and where it is absent, adoption is not merely premature. It is institutionally unsafe.

9. AI in African Justice Systems: A Differentiated Landscape

Across Africa, judicial institutions are engaging with AI from vastly different starting points, under different structural conditions, and at different stages of institutional development. The governance implications are similarly differentiated. Any analysis that treats African justice systems as a uniform category will misread the landscape and misinform the institutions within it.

What Is Already Happening

Morocco has initiated AI use for transcribing court rulings, conducting legal research, and retrieving archived texts, with plans to extend automated transcription to live sessions — including adapting systems to Darija and Amazigh linguistic diversity.

Tanzania has deployed Almawave, an AI-driven transcription and translation system trained on Kiswahili dialects and Tanzanian English, initially across 11 of its 169 courtrooms — addressing a specific bottleneck caused by a shortage of court stenographers.

Kenya is developing an Artificial Intelligence Adoption Policy Framework, with Chief Justice Martha Koome committing to address case management, legal research, predictive analytics, and administrative support, while safeguarding judicial independence, data privacy, and due process. Kenya has also shared its integrated case management expertise with the Federal Supreme Court of Ethiopia.

Rwanda has received international recognition for its integrated electronic case management system and has articulated the centrality of public trust — setting privacy, security, and fairness as non-negotiable conditions of AI adoption.

Nigeria's LawPavilion platform serves over 5,000 practitioners, with its 2025 TIMI AI assistant enabling rapid precedent retrieval. South Africa's emerging LegalTech ecosystem includes retrieval-augmented generation tools specifically engineered to address the hallucination risk already encountered in documented court cases.

At the regional level, the African Court on Human and Peoples' Rights and the East African Court of Justice have engaged with UNESCO on AI and rule-of-law issues. The African Network of Judicial Trainers committed at its 2025 Maputo meeting to incorporating UNESCO's judicial AI materials into national training curricula. The AU Continental AI Strategy (2024) provides a reference framework awaiting domestic operationalisation.

The Governance Gap Beneath the Activity

What this landscape reveals is not an absence of activity but a pattern of structural imbalance. Technical adoption is outpacing institutional governance capacity. In Malawi, AI use remains largely confined to individual progressive High Court judges, rarely extending to the magistracy — creating not only access disparity but accountability asymmetry. South Africa, conversely, has advanced significantly beyond early adoption, which is why professional governance gaps have already produced documented court-facing consequences. The governance challenge differs structurally across these contexts.

The Governance Architecture Opportunity

There is a genuine and time-sensitive governance opportunity available to African judicial institutions, but it is conditional, not structural. Those judiciaries now establishing governance frameworks have the possibility of designing accountability architecture before adoption reaches the depth at which institutional dependency becomes structural. The operative word is possibility, not guarantee.

The condition is deliberate sequencing: treating governance architecture as a prerequisite for each stage of adoption depth, not as a retrospective exercise applied after systems are embedded. Where that institutional will exists — as Kenya's developing framework suggests — the architecture opportunity is real. Where it does not, late adoption provides no protection.

Africa does not need to import judicial AI governance doctrine from jurisdictions that adopted technology before they thought through accountability. African courts are positioned to develop that doctrine first — on their own constitutional terms, grounded in their own institutional realities — provided they act now, before the dependency has formed.

Conclusion: Courts That Govern AI Remain Courts

AI will continue to enter justice systems. The relevant institutional question is no longer whether courts will encounter it, but whether they will encounter it on rule-of-law terms: with authority intact, accountability traceable, and institutional control genuinely held.

The future of judicial AI should not be framed as a choice between innovation and caution. It is a question of institutional design. Courts may use AI. They may benefit from it. They may improve access, reduce backlogs, and enhance service through it. But they cannot surrender attributable judgment, procedural fairness, or institutional authority to systems they do not meaningfully govern — where governing means satisfying the six requirements derived from the AGCIH doctrinal framework set out in this paper.

The rule of law is not preserved by optimism about technology, or by ethics statements unaccompanied by enforcement mechanisms. It is preserved by institutional design — by the deliberate construction of governance architecture within which public authority remains accountable, explainable, and contestable, regardless of the operational medium through which it is exercised.

Courts that govern AI in this sense remain courts: institutions whose authority is lawfully held, humanly exercised, and judicially attributable. That condition is not self-sustaining in an AI-mediated environment. It must be designed, maintained, and actively defended. The moment a court ceases to govern the systems through which it exercises authority, it begins to cede the constitutional substance of its position — silently, incrementally, and without any formal act of surrender.

That is the governance risk that no ethics statement can prevent and no modernisation agenda can afford to ignore.

References and Sources

UNESCO. (2026). AI Essentials for Judges. Paris: UNESCO (AI and the Rule of Law Initiative).

UNESCO. (2025). Guidelines for the Use of AI Systems in Courts and Tribunals. Paris: UNESCO.

Lawyers Hub. (2024). Artificial Intelligence and the Future of Judicial Systems in Africa. Nairobi: Lawyers Hub.

UNESCO. (February 2025). Harnessing AI for Justice: Balancing Innovation and Equity in East Africa. 3rd Annual EACJ Judicial Conference, Kigali.

UNESCO. (November 2024). Advancing African Judicial Expertise in AI, Freedom of Expression and the Rule of Law. UNESCO/African Court Workshop, Nairobi.

Judiciary of Kenya. (August 2025). Judiciary to Leverage AI to Enhance Justice. Official statement of Chief Justice Martha Koome.

Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (KZP) (unreported case no 7940/2024P, 8-1-2025) (Bezuidenhout J).

Parker v Forsyth NO and Others (Johannesburg Regional Court) (unreported case no 1585/20, 29-6-2023) (Magistrate Chaitram).

De Rebus. (May 2025). What Have the Courts Said About the Ethical Use of Artificial Intelligence in Legal Practice?

African Union. (2024). Continental Artificial Intelligence Strategy. Addis Ababa: African Union Commission.

Digital Watch Observatory. (March 2026). UNESCO and African Network Advance AI in Justice (ANJT Maputo Workshop).

Rouvroy, A. & Berns, T. (2013). Algorithmic Governmentality and Prospects of Emancipation. Réseaux, 177, 233–262.

Citron, D.K. (2008). Technological Due Process. Washington University Law Review, 85(6), 1249–1313.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

Kudya, D.H. (January 2026). Rule of Law in the Age of Agentic AI. AGCIH Knowledge Paper 001. Harare: AGCIH.

Kudya, D.H. (March 2026). Procurement as the Gateway of Digital State Power. AGCIH Working Paper No. 003. Harare: AGCIH.