Insurance leaders urged to chart ethical course in the age of generative AI

For all the hype around, and pressure to deploy, generative AI (GenAI) in the insurance industry right now, senior leaders must take the time to establish sufficiently robust governance models and policies to ensure the responsible and ethical use of AI.

This is according to the 2024 edition of the annual EY Global Insurance Outlook released recently. The annual report explores the evolving industry landscape to inform the perspectives of senior leaders. Preparing for the transformative impact of AI is one of three issues focussed on in the latest edition.

According to Frank Schmid, chief technology officer (CTO) of General Reinsurance Corporation (Gen Re), given the technology’s power, the biggest AI risk for insurers and reinsurers is failing to incorporate it into their operations and business models.

“AI will have a profound impact on the industry along two dimensions: near-term operational improvements and long-term strategic impacts, such as expanding access to insurance,” said Schmid.

However, the EY Outlook report states that while GenAI promises to revolutionise risk assessment, claims processing, marketing, sales and service, and other essential aspects of the business, identifying the full range of risks and designing the right framework for managing them are the first priorities.

Will AI be a force for good?

The rapid adoption of ChatGPT and other applications has forced businesses to act quickly in identifying the right use cases for immediate-term performance gains and charting a course to longer-term transformation.

According to the EY CEO Outlook Pulse global survey (July 2023), 59% of insurance CEOs think that jobs impacted by AI will be counterbalanced by new roles, 58% say AI is a force for good and 52% plan significant investments in AI in the next year.

The EY Future Consumer Index (2023) states that 60% of consumers are optimistic about using AI for routine or repetitive tasks and for data analysis, while just under 60% say they feel comfortable with AI being used for community safety, crime detection and workplace efficiencies.

In the EY Tech Horizon global survey (2022), 43% of global insurers plan to use AI and data science to predict trends and demand to better meet customer needs and optimise operations while 37% plan to use the technology to develop self-service tools to improve customer and employee experience.

A total of 9% plan to use it to automate processes to reduce labour costs, redeploy talent and improve efficiency. Another 9% aim to use it to drive product innovation through new offerings and personalisation. Only 1% believe that data science and AI will not play a significant role, while another 1% say they don’t know how they plan to use the technology.

AI use cases in the insurance market

Democratised access to a hugely powerful technology has launched countless creative applications, with many more on the horizon.

According to the EY Global Insurance Outlook, the most high-impact AI use cases in the insurance market are likely to include:

  • Actuarial and underwriting: streamlining the ingestion and integration of data to free underwriters to focus on high-value work that leads to stronger risk selection and more profitable pricing; enhancing product benchmarking.
  • Claims: automating first-notification-of-loss processes and enhancing fraud detection efforts.
  • Information technology: strengthening cybersecurity by analysing operations data for attempted fraud, monitoring for external attacks and documenting such attacks for regulatory reporting; generating code across languages (for example, to update COBOL applications) and documenting infrastructure and software upgrades.
  • Marketing and customer service: capturing customer feedback, analysing behavioural patterns and conducting sentiment analysis (a minimal sketch follows this list); tailoring interactions with virtual sales and service representatives; strengthening chatbots’ credibility and ability to resolve complex issues.
  • Finance, accounting and risk: preserving organisational knowledge; enabling real-time analysis and summarisation of documents; monitoring market and investment trends; producing more granular insights into financial and operational performance; creating educational content and interactive training for compliance and risk-management teams to keep current on the latest regulations.
  • Human resources: enriching workforce training and development curricula and materials; streamlining performance management and generating internal ratings; strengthening knowledge management and policy search.
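
To make the sentiment-analysis item above concrete, the following minimal sketch tallies the sentiment of customer feedback by product line. It assumes a Python environment with the open-source Hugging Face transformers library; the feedback snippets and product labels are invented for illustration and are not drawn from the EY report or any insurer’s data.

```python
from collections import Counter

from transformers import pipeline  # Hugging Face transformers

# Toy feedback snippets, invented for illustration.
feedback = [
    ("motor", "The claims process was fast and the adjuster kept me informed."),
    ("motor", "Premium went up again with no explanation. Very frustrating."),
    ("home", "Easy renewal online, no complaints."),
]

# Off-the-shelf sentiment classifier; downloads a default model on first use.
classifier = pipeline("sentiment-analysis")

tallies = {}
for product, text in feedback:
    label = classifier(text)[0]["label"]  # e.g. "POSITIVE" or "NEGATIVE"
    tallies.setdefault(product, Counter())[label] += 1

for product, counts in tallies.items():
    print(product, dict(counts))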

EY’s Outlook states that early adopters are also exploring “human-in-the-loop” applications. For instance, GenAI-based copilots or co-bots can enhance the productivity and value of knowledge workers across the business.

In the future, EY Insurance says, more ambitious applications will help shrink the protection gap.

“For instance, data flows from satellites and other sensors will create detailed models of key infrastructure and digital twins for communities to run ongoing simulations for stronger and more precise protections,” the report reads.

Understanding AI’s risks

AI holds the potential for significant advantages; however, the associated risks, whether financial or otherwise, are just as substantial and interconnected. Take GenAI: it can extensively personalise products and customer communications, but that same customisation brings higher risks of privacy breaches, suitability issues and discrimination violations.

According to the EY Outlook, to maximise the return on investment (ROI) from AI, it’s crucial to have a thorough understanding of these risks, especially those that are unique to specific businesses or parts of the organisation.

Risks shared in the report include:

  • Sensitive data: the potential misuse or mishandling of sensitive data, including personally identifiable information (for example, to fine-tune large language models, or LLMs), can lead to breaches of privacy.
  • Transparency issues: the black-box nature of some AI models makes it difficult to explain or understand their decision-making processes, raising concerns about accountability.
  • Biased and false outcomes: AI models, when trained on biased data, can spread or even worsen existing prejudices, leading to unfair policy terms and pricing or claims denials (a simple disparity check is sketched after this list); hallucinations, where AI applications present false information or fabricate outputs from LLMs, are another concern.
  • Balanced human-AI collaboration: knowing when to apply human judgment versus following AI-generated recommendations can be challenging.
  • Privacy concerns: continuous monitoring (for example, through telematics and wearable devices) may be seen as invasive by consumers worried about constant surveillance.
  • Reliability and replicability: if not properly maintained or updated as conditions change, AI systems could produce inaccurate or outdated results that affect policy decisions and claim outcomes. Further, outcomes may begin to vary as inputs and LLMs change and the use of AI tools within workflows is adjusted.
  • Cyber: adversarial prompt engineering, manipulation of inputs and other attacks can lead to unintended fraudulent activities and the loss of training data or even a trained LLM. Because LLMs are built on third-party data streams, insurers may also be affected by external data breaches.
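
To illustrate the “biased and false outcomes” risk above, the following is a minimal sketch of the kind of disparity check a model-risk team might run on claim decisions. The groups, decisions and the 0.8 threshold (the common “four-fifths” rule of thumb) are assumptions made for this sketch, not an EY recommendation or a regulatory standard.

```python
from collections import defaultdict

# (group, approved) pairs: toy claim decisions, invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    totals[group][0] += int(approved)
    totals[group][1] += 1

rates = {group: approved / total for group, (approved, total) in totals.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule of thumb: flag if the lowest approval rate falls below
# 80% of the highest. The threshold is an assumption for this sketch.
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Warning: approval-rate disparity - route the model for review.")
```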

The EY Outlook cautions that legal liabilities and regulatory exposures are also significant, ranging from potential copyright and IP infringement to data-use infractions to compliance with the General Data Protection Regulation and other rules.

“Widespread uncertainty about what is allowed and what companies will be required to report is a major concern,” the report states.

Spotlight on ethical and responsible use

According to the EY European Financial Services AI Survey (2023), the top three concerns among European insurance executives around the ethics of GenAI are privacy (31%), discrimination, bias, and fairness (26%), and transparency and explainability (21%).

EY Insurance states that, to a large degree, future consumer confidence will depend on the ethical deployment of AI and the delivery of unbiased results.

“Transparency in AI-driven processes – particularly when sensitive customer data is involved – is essential to building trust among customers, partners and regulators.”

The EY Outlook sets out these leading industry practices:

  • Establishment of an AI ethics committee with experienced professionals to set policies for ethical use and adjudicate sensitive disputes.
  • Ensuring the organisation has a comprehensive risk management framework in line with new requirements, with clearly defined accountability for the entire AI lifecycle.
  • Educating and training the workforce regarding the benefits and risks of AI in different functions and processes.
  • Maintaining a comprehensive database of AI applications across the business, including ones provided by external suppliers, that fully documents the data and processes used to train models (a sketch of one such inventory record follows this list).
  • Vigilance regarding “shadow AI”, or applications that have been deployed without proper vetting and sign-off.
  • Close monitoring of regulatory developments and anticipation of their impact and the requirements for timely compliance across jurisdictions.
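
As a concrete illustration of the inventory practice above, the following sketch shows what one record in such a database might look like. The schema and field values are assumptions made for illustration; a real inventory would follow the insurer’s own governance and documentation standards.

```python
from dataclasses import dataclass, field


@dataclass
class AIApplicationRecord:
    """One entry in a hypothetical AI application inventory."""
    name: str
    owner: str                       # accountable business owner
    supplier: str                    # "internal" or the external vendor
    lifecycle_stage: str             # e.g. "pilot", "production", "retired"
    training_data_sources: list[str] = field(default_factory=list)
    approved_by_ethics_committee: bool = False
    last_review: str = ""            # ISO date of the last governance review


# Hypothetical record; names and values are invented for illustration.
record = AIApplicationRecord(
    name="claims-fnol-triage",
    owner="Head of Claims Operations",
    supplier="internal",
    lifecycle_stage="pilot",
    training_data_sources=["historical FNOL transcripts (anonymised)"],
    approved_by_ethics_committee=True,
    last_review="2024-01-15",
)
print(record)
```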

Most carriers are still in the early phases of defining their governance models and control environments to address these risks.

“As early adopters, senior leaders recognise that effective risk management is critical to realising the full business value of GenAI, not just for avoiding regulatory penalties and negative brand impacts,” says EY Insurance.