AGCIH Commentary Series — Governance Architecture & AI Regulation
Africa Governance & Civic Innovation Hub
Governance First. Africa Sovereign.
Policy Commentary

Kenya Artificial Intelligence Bill, 2026:
Governance Architecture, Administrative Capacity, and the Rule-of-Law Threshold

Danai Hazel Kudya
Founder and Executive Director, Africa Governance and Civic Innovation Hub (AGCIH) — March 2026
Subject Bill: Artificial Intelligence Bill, 2026 (Senate Bills No. 4)
Jurisdiction: Republic of Kenya
AGCIH Series: Governance Architecture & AI Regulation
Executive Summary

A credible framework — with four structural gaps that must be closed before commencement

The Kenya Artificial Intelligence Bill, 2026 is one of the most substantive AI governance proposals currently before an African legislature. It establishes a Commissioner's Office, adopts a risk-based classification framework, introduces regulatory sandboxes, mandates transparency obligations, and creates criminal liability for specified violations. As a foundational legislative framework, it is credible.

This commentary does not dispute the Bill's ambition. It tests the Bill's implementability: whether the institutional architecture it creates can, in practice, sustain the governance functions it mandates on rule-of-law terms.

AGCIH's analysis identifies four structural gaps that, if not addressed before commencement, will produce a governance framework that exists on paper but operates unevenly — or harmfully — in practice.

Gap 01 Administrative Hosting Capacity Deficit
Gap 02 Procurement Entry Failure
Gap 03 Criminal Liability Before Standards
Gap 04 Relocation of Judgment Without Anchoring

The six recommendations in this commentary are each actionable within the Bill's existing structure and do not require a full redraft. They are addressed to the Senate, to the Office of the Attorney-General, and to technical working processes at committee stage.

Assessment

What the Bill Does Well

Before identifying structural gaps, it is important to characterise the Bill accurately. Several of its features are genuinely well designed and, in the African legislative context, represent forward movement.

Risk-Classification Architecture

Part V's four-tier framework — unacceptable, high, limited, and minimal risk — is the right structural approach. The inclusion of public administration as a high-risk sector in Section 25(2)(b) is particularly significant.

Transparency & Disclosure Obligations

Section 28's requirements — disclosure of system nature, limitations, automated decision-making extent, and bias mitigation — are substantive obligations, not aspirational language.

Workforce Impact Obligations

Section 33's requirement for workforce impact assessments and reskilling programmes is an uncommon and commendable feature in AI legislation globally, and particularly apt for the Kenyan labour market.

Advisory Committee Composition

Section 17's consultative nomination processes — through registered organisations rather than ministerial appointment — provide a structural buffer against capture.

Three-Year Review Requirement

Section 37's mandatory parliamentary review every three years builds in a self-correction mechanism that recognises AI regulation cannot be static. This is governance-literate drafting.

Structural Analysis

The Four Structural Gaps

AGCIH's analysis identifies four points in the Bill's architecture where the governance design fails to match the governance ambition. Each gap is identified by its location in the Bill, its precise nature, and its rule-of-law implications.

Gap 01 Administrative Hosting Capacity Deficit

The Office of the Artificial Intelligence Commissioner is granted, under Section 10, one of the broadest regulatory mandates in Kenyan statutory law — encompassing risk assessments, conformity audits, sandbox management, nationwide literacy programmes, and enforcement. The Bill provides for the appointment of staff and for the Office's finances, but these provisions are necessary, not sufficient.

What the Bill does not provide is a phased operationalisation framework: a requirement that the Office demonstrate, by reference to defined minimum governance criteria, that it is institutionally ready to exercise particular functions before those functions are activated.

AGCIH Doctrine — Administrative Hosting Capacity

An institution must be capable of hosting a governance function before it can legitimately exercise it. Hosting means more than formal establishment. It means the institution can define boundaries, understand scope, supervise use, investigate failures, retain records, and explain its actions to affected parties and to the public. A regulator that cannot host its own mandate becomes a governance structure that generates legal exposure without generating legal protection.

The consequence is predictable. The Office will commence with its full mandate legally active before it has staffed, systemised, or operationally grounded the functions that mandate demands. Early enforcement action taken without adequate capacity will be legally fragile. Early inaction in the face of legitimate complaints will erode public credibility.

⚠ Rule of Law Risk: Enforcement without institutional capacity; public credibility erosion

Gap 02 Procurement Entry Failure

Section 34 is the Bill's entire provision governing AI use by public institutions. It reads, in full: "A public entity, including a county government, that uses an artificial intelligence system shall ensure compliance with this Act." One sentence.

The section does not specify who holds responsibility for compliance. It does not require a pre-deployment governance assessment. It does not address procurement — the moment at which public institutions are most exposed. It does not differentiate between a national ministry deploying a high-risk AI system in a criminal justice context and a county government using a scheduling tool.

AGCIH Doctrine — Procurement Entry

AI enters public administration primarily through procurement. The governance risk is highest at the moment of institutional entry — when a public body decides to acquire, integrate, or authorise a system within its administrative workflows. Post-deployment enforcement cannot substitute for governance at the entry point. By the time a regulator investigates a public institution's AI-related harm, the harm is already operational and often structurally embedded in administrative practice.

A governance framework that applies less rigour to the public sector than to private deployers inverts the accountability structure that rule-of-law governance requires. Public institutions exercise public authority. Their use of AI is subject to constitutional obligations of fairness, accountability, and transparency that private actors do not carry in the same form.

⚠ Rule of Law Risk: Government AI ungoverned at point of entry; less rigour than the private sector

Gap 03 Criminal Liability Before Standards

Section 35 (Offences and Penalties) is active upon the Bill's commencement. Penalties include fines of up to five million shillings and imprisonment of up to two years for non-compliance with risk assessment and transparency obligations.

Yet Section 36(2) expressly defers to the Cabinet Secretary the detailed criteria for risk classification, the forms and processes for risk assessments, and the content requirements for transparency disclosures. These regulations have not yet been written and are not required to be written before commencement.

AGCIH Doctrine — Governance Readiness

Governance readiness means that the institutional and regulatory infrastructure necessary to exercise a power, enforce an obligation, or apply a standard must be in place before that power is exercised, that obligation is enforced, or that standard is applied. A Bill that creates criminal liability for non-compliance with standards yet to be defined is institutionally incomplete at commencement.

This creates a direct rule of law problem. The principle of legal certainty, recognised in the Constitution of Kenya under Article 47 (fair administrative action) and Article 10, requires that persons subject to law must be able to know in advance what the law requires of them. A criminal provision that activates before its compliance standards have been defined does not satisfy that requirement.

⚠ Rule of Law Risk: Violation of legal certainty; unenforceable on commencement

Gap 04 Relocation of Judgment Without Anchoring

Section 32 requires that AI systems incorporate human oversight in critical decisions, including review mechanisms that allow a qualified person to intervene or override the system's outputs where decisions may affect human rights, safety, or societal well-being. This is the right principle.

The problem is Section 32(3). The definition of what constitutes a 'critical decision' — and therefore the scope of the entire human oversight obligation — is deferred to Cabinet Secretary regulations. Until those regulations are made, a deployer can argue in good faith that its system's decisions do not fall within the undefined category of 'critical'.

AGCIH Doctrine — Relocation of Judgment

Relocation of Judgment occurs when formal authority is nominally retained by a human institution or officer, while operational reality reorganises decision-making around system outputs. The risk is not only formal delegation, but the condition in which accountability appears assigned but cannot, in practice, be located, exercised, or enforced. A provision that guarantees human oversight without defining when that oversight is required is a provision that permits the relocation of judgment while appearing to prevent it.

The definition of 'critical decisions' is a governance question, not a technical one. That choice belongs in primary legislation, not in subordinate instruments that may be delayed, contested, or quietly circumscribed.

⚠ Rule of Law Risk: Human oversight protection formally present but operationally unenforceable
Legislative Action

Six Recommendations for Amendment

The following recommendations address each structural gap with specificity. Each is actionable within the Bill's existing structure and does not require a full redraft. They are addressed to the Senate, to the Office of the Attorney-General, and to any technical working process at committee stage.

Recommendation 01: A Phased Operationalisation Provision

The Bill should be amended to include a phased operationalisation provision under Part II, requiring the Commissioner to publish a Governance Readiness Plan within six months of appointment. The Plan must demonstrate, for each category of function under Section 10, the staffing, technical systems, and administrative protocols that the Office has established — or has a funded timeline to establish — before exercising that function.

Specifically, the Bill should differentiate between:

  • Functions the Office may exercise from the date of the Commissioner's appointment (receipt of complaints, publication of the high-risk register, advisory functions)
  • Functions requiring demonstrated operational readiness before activation (enforcement action, conformity audits, imposition of administrative fines)

This phasing is not a limitation on the Office's mandate. It is the condition that makes the mandate credible.

Recommendation 02: Replace Section 34 with a Substantive Public-Sector Provision

Section 34 must be replaced with a substantive provision governing AI use in the public sector. At a minimum, the replacement provision should require:

  • That each public entity designate a named Accountable Officer for AI governance before deploying any AI system in its administrative operations
  • That any public entity intending to procure or deploy a high-risk AI system must notify the Commissioner and complete a pre-deployment governance assessment, the minimum content of which is defined in primary legislation
  • That procurement contracts for AI systems include specified governance provisions covering transparency, suspension rights, audit access, and vendor obligations in the event of system failure or harm to citizens
  • That public entities maintain documentation of AI system use sufficient to support internal review, external audit, and the exercise of rights by affected persons
  • That the Commissioner publishes a Public Sector AI Governance Code against which public entity compliance is assessed

Recommendation 03: Defer Part VI Until the Operative Regulations Are in Force

The offences and penalties under Part VI should not come into force until the Cabinet Secretary has gazetted regulations prescribing, at minimum: the detailed criteria for risk classification under Section 25; the forms, processes, and timelines for risk assessments under Section 26; and the content and format requirements for transparency disclosures under Section 28.

This may be achieved by inserting a commencement provision specifying that Part VI comes into force on a date to be notified by the Cabinet Secretary in the Gazette, and that no such notice may be issued before the required regulations are in force.

The principle at stake is straightforward: criminal liability requires prior notice of the conduct it criminalises. That notice cannot be given by a Bill alone when the Bill expressly defers the operative standard to regulations yet to be written.

Recommendation 04: Define 'Critical Decisions' in Primary Legislation

Section 32's human oversight obligation should not depend on regulations for its operational definition. The Bill should define, in primary legislation, a non-exhaustive list of decision categories that constitute 'critical decisions', including decisions affecting:

  • Liberty, welfare entitlements, employment, and housing
  • Health treatment and immigration status
  • Criminal justice outcomes and access to public services

The Cabinet Secretary may, by regulation, extend this list. But the baseline must be in the Act. The protection of human oversight in decisions affecting fundamental rights is not appropriate for subordinate instruments subject to ministerial discretion.

Recommendation 05: A Public Institutional Readiness Assessment

The Bill should require that the Commissioner, within ninety days of taking office, publish a public institutional readiness assessment identifying: the governance functions the Office is prepared to exercise; those requiring further resourcing; and a timeline for full operationalisation.

This assessment should be submitted to the relevant Parliamentary Committee and made publicly available. This is not a burden on the Commissioner — it is transparency about the institution's actual starting condition, and it creates the public accountability the Office will need to function with legitimacy from its earliest days.

Recommendation 06: A Mandatory Regulatory Coordination Protocol

The Bill's mandate extends across healthcare, education, agriculture, finance, security, employment, and public administration — each with an existing regulatory body. The Bill does not specify how the Commissioner's authority relates to the Communications Authority, the Data Protection Commissioner, the Capital Markets Authority, the Kenya Revenue Authority, or sector-specific regulators.

The Bill should include a provision establishing a mandatory coordination protocol between the Commissioner and named sectoral regulators, specifying lead authority, concurrent obligations, and conflict resolution mechanisms. Clarity on regulatory jurisdiction is not a luxury in complex institutional environments. It is the condition that makes enforcement coherent.

AGCIH Doctrinal Framework

Why Architecture Comes First

The analysis in this commentary is grounded in AGCIH's doctrinal framework for AI governance in public administration. Applied to the Kenya AI Bill, these doctrines function as diagnostic tools — identifying specific provisions that must be strengthened and design choices that, if left unaddressed, will produce governance failures that no amount of enforcement capacity can later repair.

AGCIH Doctrines and Their Application to the Kenya AI Bill

Continuous Administration: Public institutions administer continuously. AI systems enter this environment and alter its character over time, not at a single point. Governance cannot be treated as a one-time compliance check.

Administrative Hosting Capacity: A regulator must be able to define boundaries, supervise use, investigate failures, and explain its actions before it operationally authorises any function. Gap 01 is, at bottom, a hosting-capacity deficit in the Commissioner's Office.

Procurement Entry Doctrine: Governance risk for public institutions is highest at the procurement decision. Section 34's failure to govern public-sector AI procurement leaves the most consequential governance moment unaddressed.

Relocation of Judgment: When accountability appears assigned but cannot in practice be located, exercised, or enforced, judgment has been relocated without authorisation. Section 32's undefined 'critical decisions' creates this condition.

Governance Readiness: Readiness is not ambition. It is an institutional condition — defined responsibility, supervision protocols, documentation standards, and contestation pathways — in place before a governance function is exercised.

Institutional Friction: AI does not enter neutral environments. Where governance architecture does not prepare institutions for legal procedures, professional duties, and constitutional obligations, frictions accumulate silently as over-reliance, poor documentation, or system-shaped decisions that appear attributable but are not.

Conclusion

The Kenya Artificial Intelligence Bill, 2026 is a serious legislative effort. It names the right problems, reaches toward the right principles, and creates a structural framework capable of genuine governance — if that framework is completed.

The four gaps identified in this commentary are not peripheral. They go to the centre of whether the Bill, as currently drafted, can produce the governance outcomes it seeks.

"The rule of law is not preserved by ambition. It is preserved by design. And in AI governance, the most important design question is this: can the institutions this legislation creates remain fully responsible for the authority they are given to exercise in public?"

Kenya has an opportunity to enact AI governance legislation that the continent can learn from. That opportunity is best taken by getting the architecture right before commencement — not by discovering its gaps after the Commissioner is in office and the public sector is using systems whose governance framework consists of a single sentence.

AGCIH submits that, with the amendments proposed in this commentary, the answer can be yes.

About AGCIH

The Africa Governance and Civic Innovation Hub (AGCIH) is an independent governance institution based in Harare, Zimbabwe. AGCIH works at the intersection of administrative law, digital governance, and institutional design, with a focus on AI and digital systems in African public administration.

AGCIH does not advocate, regulate, or deploy technology. AGCIH operationalises governance.

Contact

Africa Governance and Civic Innovation Hub (AGCIH)
Harare, Zimbabwe
admin@agcih.africa
www.agcih.africa

Founder and Executive Director:
Danai Hazel Kudya

All AGCIH doctrines referenced in this commentary — including Continuous Administration, Administrative Hosting Capacity, Relocation of Judgment, Procurement Entry Doctrine, Governance Readiness, and Institutional Friction — are the intellectual property of the Africa Governance and Civic Innovation Hub (AGCIH) and must carry AGCIH attribution whenever referenced. © Africa Governance and Civic Innovation Hub (AGCIH), 2026. All rights reserved.