Cabinet clears draft AI policy for public comment

Cabinet has approved the publication of South Africa’s draft National Artificial Intelligence Policy for public comment, marking a decisive shift from strategy to implementation in the country’s approach to AI governance.

Minister in the Presidency Khumbudzo Ntshavheni said during a media briefing on 2 April that the policy is designed to ensure that the benefits and risks of AI are distributed equitably across society and across generations.

She said the framework aims to strengthen the government’s ability to regulate and adopt AI responsibly, while also encouraging local innovation, supporting job creation, and expanding access to AI-related skills.

According to legal and technology advisory firm ITLawCo, Cabinet’s approval signals a move “from aspiration to action”, with the policy expected to shape how AI is regulated and deployed across the economy.

Policy vs law

The draft AI policy is not legislation, but a strategic framework that will guide future regulatory development.

PH Attorneys explains that a policy framework is a roadmap of government intentions and is not legally binding, meaning it cannot directly impose penalties. However, courts may use such policies as a secondary source of law for interpretive or persuasive purposes.

Law firm Baker McKenzie similarly notes that policies set out principles that are later translated into enforceable rules, while ITLawCo adds that this approach allows regulators to remain flexible in fast-evolving areas such as AI.

The policy is expected to lay the foundation for future legislation, potentially including a dedicated National AI Act to guide lawmakers on how AI should be governed in South Africa.

In the interim, AI-related activities remain governed by existing legislation, including the Protection of Personal Information Act, the Consumer Protection Act, the Electronic Communications Act, the Electronic Communications and Transactions Act, and the Cybercrimes Act.

The road to AI regulation

South Africa’s path to an AI policy has been deliberate and incremental.

Work on an AI framework began in 2020 following the Presidential Commission on the Fourth Industrial Revolution. This was followed by South Africa’s contribution to an African Union AI blueprint in 2021 and the establishment of AI research hubs at local universities.

The most immediate precursor was the National AI Policy Framework published in August 2024, which benchmarked international approaches, including the EU AI Act and policies from countries such as the Netherlands, Chile, and Norway.

In February this year, the Department of Communications and Digital Technologies (DCDT) confirmed that the draft policy had cleared key administrative processes and secured interdepartmental support, paving the way for Cabinet approval.

DCDT outlines policy direction

Details on the draft policy were shared during a parliamentary briefing by the DCDT on 24 February.

The briefing confirmed that the draft policy had cleared the socio-economic impact assessment system and achieved concurrence across all director-general clusters.

The DCDT emphasised the need to balance the benefits and risks of AI, while highlighting concerns about the concentration of AI capabilities. It framed AI as a tool to drive inclusive economic growth, alongside improved access to skills, infrastructure, and innovation.

The department also outlined how the policy will be implemented. This includes the development of national ethical guidelines and standards, alignment with existing data protection and cybersecurity frameworks, collaboration with industry, academia, and civil society, and phased adoption across priority sectors.

Importantly, it confirmed that the government does not intend to establish a single AI regulator. Instead, oversight will be embedded within existing regulatory structures.

South Africa is therefore expected to adopt a sector-specific, multi-regulator model.

Baker McKenzie notes that this approach leverages existing supervisory frameworks across sectors such as financial services, healthcare, telecommunications, and education, while ITLawCo describes it as a pragmatic way to move quickly – albeit with the risk of fragmented standards.

What the policy intends to do: six core pillars

The draft policy is structured around six core pillars aimed at promoting responsible AI development and deployment:

  • Capacity and talent development – building national AI skills through education and training.
  • AI for inclusive growth and job creation – supporting broad-based economic participation.
  • Responsible governance – introducing safeguards around risks such as data misuse, cybersecurity, and misinformation.
  • Ethical and inclusive AI – addressing bias and ensuring fair outcomes.
  • Cultural preservation and international integration – protecting local languages and knowledge while engaging globally.
  • Human-centred deployment – prioritising accountability and societal impact.

At its core is a single guiding principle: that the benefits and risks of AI must be shared fairly across society and generations.

What this means for stakeholders

Businesses and organisations already deploying AI – in hiring, credit scoring, customer engagement, healthcare, or operational decision-making – should begin structured reviews of their systems.

ITLawCo advises identifying high-impact use cases, mapping data flows, and assessing how AI-driven decisions align with existing legal obligations, including POPIA, the Copyright Act, the Patents Act, and the Competition Act.

Baker McKenzie similarly indicates that governance frameworks will need to evolve, with greater focus on explainability, accountability, and risk management. This includes conducting internal reviews of AI deployments, assessing model transparency, and ensuring alignment with existing governance frameworks and regulatory requirements. The firm notes that regulators are likely to assess AI through existing accountability frameworks rather than treating it as a standalone category.

For legal and compliance professionals, the policy points to more integrated and technically informed oversight. AI is expected to be regulated through existing supervisory frameworks, requiring co-ordination between legal, risk, IT, and data functions, with greater emphasis on documenting governance structures and risk controls.

For civil society and individuals, the policy brings issues such as algorithmic bias, data rights, worker displacement, and digital exclusion into sharper focus.

What happens next – and when?

Although Cabinet has approved the draft policy, it has not yet been formally gazetted.

ITLawCo indicates that publication is expected later in April, triggering a 60-day public comment period likely to run until about June.

The firm notes that the consultation process will be critical, warning that concerns such as algorithmic bias, data rights, worker protection, and digital exclusion may not be adequately reflected in the final framework if the process is dominated by industry voices.

“Individual and community submissions carry weight. The public comment period is an opportunity to demand a policy that reflects the interests of all South Africans, not only technology companies and large institutions,” ITLawCo notes.

Baker McKenzie also emphasises the importance of participation in the 60-day public comment period.

“Early engagement may influence how sector-specific strategies are ultimately framed, particularly in relation to explainability and supervisory oversight,” notes Baker McKenzie.

Finalisation of the policy is anticipated during the 2026/27 financial year, with sector-specific regulations and guidance expected to follow from 2027 onwards.
