The Financial Sector Conduct Authority (FSCA) and the Prudential Authority (PA) have published a report that provides the first overview of the adoption of artificial intelligence within South Africa’s financial institutions.
The FSCA and the PA said they will develop a discussion paper building on the report and engage stakeholders on the key regulatory and supervisory questions.
The report’s findings are based primarily on a survey conducted across South Africa’s financial sector between October and December 2024. The voluntary survey received about 2 100 responses, covering segments such as banking, insurance, investments, payments, pensions, fintechs, and lending.
Across all segments, 220 respondents (10.6%) reported using AI.
The report’s analysis focuses on the banking, insurance, and investment segments, as these are the financial sector’s largest segments by assets.
The study examined several specific areas:
- How AI is being adopted across the financial sector.
- The current and planned level of investment in AI.
- The specific applications of machine learning (ML) and the use cases for traditional AI and generative AI (GenAI).
- The benefits and risks associated with AI adoption.
- The regulatory and non-regulatory constraints to AI usage.
- Key ethical issues, such as privacy, fairness, accuracy, and transparency.
- How AI affects consumer protection, including transparency and disclosure requirements.
- The prudential risks associated with AI.
AI adoption and intended investment in the financial sector
The research shows that banks lead the financial sector in adopting AI, with 52% of respondents in the banking category indicating they have deployed some form of AI. Payment providers follow closely with 50% adoption. Retirement funds reported 14% adoption, while insurers and lenders showed the lowest levels of adoption, at 8% each.
AI investment intentions differ notably between segments. Banks plan to invest at far higher levels than any other sub-sector. Among institutions that currently use AI, 45% of banks intended to invest more than R30 million in 2024. In contrast, 62% of investment providers and 41% of insurers planned to invest less than R1 million in AI.
Most institutions outside banking fall into the lowest investment category, indicating a generally cautious or exploratory approach.
Current and planned uses of traditional AI
Current use cases of traditional AI vary by segment but concentrate on operational efficiency and risk mitigation. Across the sector, traditional AI is widely used for internal process optimisation, including document analysis, workflow automation, and operational decision-support. Fraud detection is another significant application, particularly in banking, payments, and lending. Traditional AI is also used for credit scoring and underwriting, enabling more refined assessments of creditworthiness and pricing.
Planned future use cases show a continuation and deepening of these trends. Institutions indicate that they expect to expand traditional AI into areas such as real-time fraud detection, cybersecurity analytics, and enhanced monitoring to detect money laundering and terrorism financing, reflecting expectations of increased regulatory pressure and rising financial-crime risks.
Insurers intend to extend AI use in underwriting and claims management. Retirement funds and investment providers foresee expanding traditional AI into portfolio optimisation and risk modelling, although these remain early-stage intentions within their segments.
Generative AI: current and planned use cases
Generative AI adoption is at a much earlier stage but is expected to grow rapidly. Current GenAI use cases focus primarily on internal productivity tools, including document drafting, summarising reports, and generating presentations. Some institutions use GenAI to support marketing and customer communications, particularly in the creation of promotional material or client-facing content.
Planned use cases indicate a significant expansion of GenAI across multiple functions. Respondents note the potential for GenAI in customer-facing chatbots and automated service channels, allowing institutions to provide more efficient, round-the-clock service. GenAI is also envisioned for risk scoring, report generation, investment-related communications, and the automation of internal workflows.
Risks associated with traditional AI and GenAI
Despite strong interest in AI’s potential, the research identifies substantial risks. The most frequently cited risks are data privacy and protection concerns, reflecting the stringent requirements of the Protection of Personal Information Act (POPIA) and the growing sensitivity of consumer data. Institutions also highlight cybersecurity risks, including the possibility that AI systems could introduce new vulnerabilities or be exploited for malicious purposes.
Other risks include model risk, such as incorrect predictions, bias, or poor-quality datasets. Respondents also express concern about the potential for consumer harm if AI systems generate inaccurate advice or make opaque decisions. GenAI introduces additional risks, including the possibility of inaccurate or fabricated outputs, intellectual-property concerns, and the challenge of ensuring responsible use of large language models.
Constraints to the adoption of traditional AI and GenAI
Institutions identify several constraints limiting their use of AI. Data privacy and protection regulations are viewed as the most significant regulatory constraint on both traditional AI and GenAI adoption. POPIA’s requirements – particularly around data minimisation, processing purpose, and automated decision-making – were repeatedly highlighted.
In addition to regulatory issues, institutions point to a shortage of AI skills and the difficulty of attracting or developing specialised talent.
Many respondents note challenges relating to transparency and explainability, particularly for complex models. These issues are particularly relevant for high-risk applications in lending, insurance, and investment management. Resource constraints, including limited budgets, legacy systems, and competing priorities, also restrict adoption, especially among smaller firms.
Governance frameworks for AI
The report finds that governance frameworks for AI across the sector are still developing and vary widely in maturity. Many institutions rely on existing risk-management structures and do not yet have dedicated AI governance arrangements.
Respondents emphasise the need for frameworks that ensure accountability, human oversight, model validation, and ongoing monitoring of AI systems. Transparency and explainability tools – such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) – are referenced as mechanisms to support internal governance.
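The core idea behind attribution tools such as SHAP can be illustrated with a toy example: a model’s prediction is decomposed into per-feature contributions relative to a baseline. The sketch below uses a hypothetical linear credit model (all feature names, weights, and baseline values are illustrative, not from the report); for linear models this decomposition coincides with the exact Shapley attribution that SHAP computes for arbitrary models.

```python
# Toy illustration of SHAP-style feature attribution for model governance.
# For a linear model, the contribution of feature i to a prediction is
# weight_i * (x_i - baseline_i). All values below are hypothetical.
weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
baseline = {"income": 0.4, "debt_ratio": 0.5, "years_employed": 0.2}

def attribute(x: dict) -> dict:
    """Per-feature contribution of x's deviation from the baseline."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

applicant = {"income": 0.8, "debt_ratio": 0.9, "years_employed": 0.1}
contributions = attribute(applicant)
# Here income raises the score (+0.20) while debt_ratio lowers it (-0.32),
# giving a reviewer a plain-language account of the model's reasoning.
```

In practice, libraries such as `shap` generalise this decomposition to non-linear models; the governance value lies in turning an opaque score into reviewable, per-feature explanations.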
Institutions also emphasised the importance of alignment with POPIA, particularly around automated decision-making. Some respondents note the need for clearer regulatory guidance to ensure consistent standards for AI use, including consumer transparency and disclosure when automated tools are used in customer interactions.
SA’s approach to regulating AI-enabled automated advice
The report notes that South Africa’s current regulatory framework for AI-enabled automated advice is shaped primarily by existing data-protection and financial-sector legislation, not AI-specific rules.
The country relies on POPIA and the Financial Advisory and Intermediary Services Act (FAIS Act) to govern situations where algorithms or automated systems generate decisions or advice that may have legal or significant effects on consumers. This approach positions South Africa within a global trend where financial authorities adapt existing principles-based frameworks to new technologies, while recognising the need for transparency, accountability, and appropriate oversight.
Automated decision-making under POPIA
The report outlines POPIA’s approach to automated decisions that have a legal or substantial effect on an individual. POPIA’s provisions are described as broadly aligned with Article 22 of the European Union’s General Data Protection Regulation (GDPR), which protects individuals from decisions based solely on automated processing that produce legal or significant effects – unless specific conditions are met, such as contractual necessity, legal authorisation, or explicit consent.
The report notes that although POPIA does not explicitly define “automated processing” or “profiling”, its intent aligns closely with the GDPR’s protection of individuals from harmful or opaque automated decisions.
Section 71 of POPIA allows for automated decision-making under certain circumstances, such as when it is necessary for a contract and the data subject’s request has been met, or when appropriate measures are in place to protect their legitimate interests. These protective measures include the right for individuals to make representations regarding the automated decision and receive sufficient information about the underlying logic of the automated process.
The report provides a practical example from the financial sector: while automated loan approvals may be permissible, automated rejections require additional safeguards such as human review or an appeal mechanism to ensure fairness, transparency, and compliance with POPIA’s intent.
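The asymmetry in that example – automation for favourable outcomes, human involvement for adverse ones – can be sketched as simple routing logic. The code below is a hypothetical illustration of the safeguard, not an implementation from the report; the threshold and all names are invented for the example.

```python
# Hypothetical sketch of the safeguard described in the report: an automated
# credit model may approve applications on its own, but any prospective
# rejection is routed to a human reviewer, consistent with POPIA section 71's
# protections around solely automated decisions. Threshold is illustrative.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.7  # illustrative cut-off for the model's score

@dataclass
class Decision:
    outcome: str        # "approved" or "referred_for_human_review"
    model_score: float
    explanation: str    # plain-language summary of the underlying logic

def decide_loan(model_score: float) -> Decision:
    """Approve automatically above the threshold; otherwise refer to a human.

    Rejections are never issued solely by the model, so the applicant
    retains the right to make representations before an adverse outcome.
    """
    if model_score >= APPROVAL_THRESHOLD:
        return Decision("approved", model_score,
                        f"Score {model_score:.2f} met the approval threshold.")
    return Decision("referred_for_human_review", model_score,
                    f"Score {model_score:.2f} was below the threshold; "
                    "a human reviewer will assess the application.")

print(decide_loan(0.85).outcome)  # approved
print(decide_loan(0.40).outcome)  # referred_for_human_review
```

Recording a plain-language explanation alongside each decision also supports POPIA’s requirement that data subjects receive sufficient information about the logic of the automated process.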
Fit and Proper Requirements and the Code of Conduct
The Determination of Fit and Proper Requirements, issued under the FAIS Act, recognises and regulates “automated advice”.
Financial services providers offering such advice must adhere to certain standards, which include ensuring adequate human resources and competence, oversight of algorithms (including monitoring, reviewing, and testing), robust internal controls, sound system governance, and adequate technological resources and infrastructure.
In addition, the General Code of Conduct, issued under the FAIS Act, contains general principles-based requirements that promote consumer protection and addresses themes such as conflicts of interest, advertising, disclosure, and risk management. These requirements are technology-neutral, meaning they apply equally to robo-advisers, for example.
The Conduct of Financial Institutions Bill will further regulate digital innovations in the financial sector based on outcomes, reinforcing consumer protection.
Please click here to download the report, Artificial Intelligence in the South African Financial Sector.





