As elsewhere in the world, artificial intelligence (AI) technologies are rapidly transforming industries and societies in Kenya. The potential economic benefits of this technological leap are enormous, especially in sectors such as health, agriculture, financial services, and public administration. However, so are the risks and challenges relating to data privacy, algorithmic bias, transparency, and accountability as AI systems proliferate. As more companies develop or deploy AI in Kenya, they must stay abreast of a rapidly shifting legal and regulatory landscape.
Constitutional Rights and AI
The Constitution of Kenya 2010 guarantees rights and principles that bear directly on AI governance. These include the right to privacy; the right to access information held by the State, including an obligation on State and non-state actors to publish information; freedom of the press, subject to fair regulation; freedom of assembly and association; the right to a clean and healthy environment; the right of the family to be protected; the right of access to basic goods and services, including water, housing, food, and primary healthcare; the right to education; and the right to equality and freedom from discrimination. The Constitution also enshrines human dignity, human rights, and freedom as national values, and provides that every person has inherent dignity and the right to have that dignity respected and protected.
Companies must ensure that their development and use of AI aligns with these constitutional principles.
Data Protection Act 2019
Many AI applications are built on personal data. One of the most important pieces of legislation bearing on AI in Kenya is therefore the Data Protection Act 2019 (which came into operation in 2021). Section 29 of the Act stipulates requirements for the processing of personal data.
Companies deploying AI must ensure their data governance frameworks comply with the Act's requirements.
Kenya Information and Communications Act
Cybersecurity obligations are enforced through the Kenya Information and Communications Act 1998 (as amended), which provides a legal framework for investigating and prosecuting cybercrimes. Several of its provisions are relevant to AI governance.
Companies need robust cybersecurity measures to protect their AI assets and training data.
Draft Guidance and Ongoing Court Case
In 2019, for example, Kenya's digital ministry released the Emerging Digital Technologies for Kenya: Exploration Analysis report, arguing that regulation of AI was 'necessary' for 'safety, privacy, transparency, accountability and fairness.' Proposed measures included algorithmic impact assessments, ethics review boards, and a top-level AI authority.
Meanwhile, work is underway on a draft AI regulatory framework. Once enacted, it is likely to expand the obligations of companies that develop or deploy AI.
Last year, in a case still pending before the courts, Lawyers Hub Kenya and the Kenya Legal and Ethical Issues Network filed a petition to stop an AI-intensive Case Management and Tracking System that the Judiciary was planning to adopt to 'transform and digitize the systems and operations of the judiciary.' The petitioners have argued that using AI in the justice system without first putting in place an adequate regulatory framework violates constitutional rights. If the courts rule against the Judiciary, this could have wide-ranging implications for how AI is used in the public sector.
Conclusion
Although distinct AI-specific regulation in Kenya is still in its infancy, existing laws already provide important guardrails. Companies aiming to leverage AI in Kenya must comply with constitutional principles, data protection and cybersecurity legislation, and applicable sector-specific regulations. They must also monitor emerging regulatory and judicial developments to stay ahead of the curve. By building robust AI governance anchored in existing and emerging Kenyan law, companies can seize the opportunities that AI presents for job creation while mitigating its risks and strengthening public confidence.