As artificial intelligence becomes embedded in workplace decision-making, it is rapidly reshaping professional indemnity and directors' and officers' liability risk. In an environment of stricter data protection and cybersecurity obligations, emerging AI governance frameworks and fragmented regulation, failures in oversight can quickly translate into claims, regulatory action and coverage disputes.
Legal and insurance specialists warn that organisations that treat AI only as a productivity tool, rather than as a governance risk requiring active control and accountability, are increasingly exposed when AI-driven decisions cause harm. The critical risk is not technological failure, but the assumption that responsibility shifts to the system rather than remaining with professionals, management, and boards.
Beyond algorithms and automation, the challenge lies in understanding where legal risk arises across the AI lifecycle, and how privacy, accountability and regulatory duties apply in practice.
In an article published on ENS’s website, Ridwaan Boda, an executive at ENS, and associate Shaaista Tayob say AI is no longer a peripheral workplace tool. By 2026, it is expected to be embedded across core business operations, employee workflows, and organisational decision-making.
A recent analysis published by the IBM Institute for Business Value identifies AI as a defining force shaping how organisations operate, manage talent, and compete over the coming years.
“IBM’s analysis focuses on the strategic and operational consequences of this acceleration, including increased AI adoption across the workforce, changing employee expectations, and growing pressure on organisations to demonstrate trust and accountability in the use of AI,” Boda and Tayob said.
“As AI becomes embedded in the workplace, organisations face a widening set of legal risks that extend beyond traditional compliance concerns and require a more structured and forward-looking approach to reputation management, privacy, accountability, intellectual property, and risk management,” they said.
Omar Ismail, a claims specialist at Santam, warns that liability remains human.
“As the volume of critical decisions and solutions provided (or suggested) by intelligence systems continues to surge, a question regarding legal liability becomes more pressing: What happens when an AI system causes harm, and who is ultimately accountable: developers, employers, or platforms?”
He adds: “An AI system operates without consciousness and therefore is not considered a legal person.”
The AI risk chain
Boda and Tayob highlight that AI-driven workplace risk no longer arises at a single point. Instead, it flows across a connected chain: data sourcing and training, system design and deployment, employee interaction with AI tools, and downstream consequences of AI-influenced decisions.
“This risk chain cuts across technology, people, vendors, and governance structures, and it challenges the assumption that legal exposure can be managed solely by the organisation’s human resources or compliance teams,” they said.
At the top of the chain sits data. AI systems rely on large volumes of information, often including personal employee data or information from which sensitive details can be inferred.
“Decisions made at the data stage have lasting legal consequences. If employee information is collected, reused, or repurposed without a lawful basis or clear purpose limitation, those deficiencies do not disappear once an AI tool is deployed. They follow the system into live use and can undermine the legality of every decision that relies on it,” Boda and Tayob explained.
Risk then shifts to how AI systems are designed and deployed. Choices around automation, human oversight, explainability, and integration into decision-making directly affect an organisation’s ability to explain outcomes, defend them when challenged, and demonstrate procedural fairness. “While IBM emphasises trust as a business imperative, from a legal standpoint trust is inseparable from the ability to evidence control and accountability,” they said.
AI-assisted decisions
Employees increasingly rely on AI-assisted outputs, yet organisations remain accountable for the resulting outcomes, even where decisions are influenced by complex tools.
“This reliance introduces another layer of risk. Where employees are expected to act on AI recommendations, questions arise around training, delegation of authority, and whether individuals understand the limitations of the systems they are using,” Boda and Tayob said.
Legal exposure escalates when AI-driven decisions affect people. Hiring, performance management, remuneration, disciplinary action, and termination carry heightened risk. “Employers must be able to explain how decisions were made, identify who was responsible and demonstrate that legal and regulatory obligations were met. AI does not displace accountability; it merely reshapes how accountability must be managed,” they said.
Privacy and intellectual property
Privacy remains one of the most significant risk drivers, especially as workplace AI becomes more sophisticated and processes more sensitive employee data.
“Privacy risk is not limited to regulatory enforcement. It affects employee trust, labour relations and reputational standing. Treating privacy as a downstream compliance exercise is increasingly untenable where AI systems operate at scale and influence core employment outcomes,” Boda and Tayob said.
They also highlight intellectual property as a critical risk management blind spot, particularly around company and third-party IP used in AI systems, and questions over ownership rights.
Legal accountability in practice
Ismail explains that South Africa does not yet have laws specifically governing AI. Instead, general legislation such as the Protection of Personal Information Act (POPIA) applies. Section 71(1) of POPIA protects data subjects from automated decision-making that has legal consequences for them. For example, a credit applicant profiled by AI could face a discriminatory outcome; such a decision is prohibited where it is based solely on an AI-generated profile.
“It creates a risk for firms if appropriate procedures to prevent sole reliance on automated decision-making are not implemented, which could lead to claims for breach. It could also expose directors for failing in their duty to the company to ensure implementation,” he said.
Courts continue to uphold traditional legal principles. In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, a law firm faced criticism and financial penalties for using AI in place of legal research and citing cases that did not exist. The Court said professionals remain responsible for their outputs regardless of the tools used, calling the failure to validate work “irresponsible and downright unprofessional”.
Similarly, in Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others, the High Court reinforced a zero-tolerance approach to fictitious case citations generated by AI, even where junior counsel attempted to mitigate the situation. Ismail says the judicial intent is clear: professional duties remain attached to the person, irrespective of AI use.
“Regardless of intention or mitigating circumstances, our courts have been clear with their approach that a fundamental breach of professional duty would arise when a legal practitioner refers the court to fictitious case law, and the use of AI is not an acceptable excuse. It is a reasonable inference that this principle would be transposed onto other professions and would yield the same conclusion,” Ismail said.
He adds that professionals must ensure the authenticity of AI-generated outputs. “The professional would be held liable for failure of an AI-generated response and not the intelligence system itself.”
Growing regulatory scrutiny
Boda and Tayob point to increasing regulatory and societal scrutiny. Organisations in 2026 face fragmented requirements across data protection, cybersecurity, employment law, and emerging AI governance frameworks. In this environment, reactive compliance is insufficient.
“Organisations must embed legal risk management into every stage of procuring, governing and using AI,” they said.
Ismail cautions that as AI technology advances, its integration into professional decision-making could expose professionals to claims if incorrect outputs cause harm. “It is essential that professionals consider the risks associated therewith and discuss with their intermediary any risk transfer mechanisms (such as professional indemnity cover), subject to policy terms and underwriting criteria,” he said.




