The Chartered Insurance Institute (CII) has said institutions and individuals must be held accountable for decisions made using artificial intelligence (AI).
The CII made the call recently in recommendations submitted to the Treasury Select Committee (TSC).
It said institutions should be held responsible for decisions made by the algorithms that they use, “even if it is not feasible to explain in detail how the algorithms produce their results”.
It added that this accountability should be backed up with validation and testing, especially for “discriminatory harm” and that institutions should make the results of these tests public.
The 120,000-strong membership body also said professionals should be held to account for “the outcomes created by AI through design or monitoring”.
It said all professionals should be educated on the potential harm that can come from mismanaging AI.
Earlier this month the Treasury Select Committee called for evidence as part of its inquiry into AI in UK financial services.
The inquiry will explore how the sector could take advantage of the opportunities in AI while mitigating any threats to financial stability and safeguarding financial consumers, particularly vulnerable consumers.
The CII said that a “proportionate, regulatory approach” to the use of AI in financial services would include implementing a sector-wide skills strategy, in which all employees of firms receive education on the potential and risks of AI, to strike the right balance between optimising its use and protecting consumers.
It added that this will allow healthy debate within firms and the wider profession about the most effective use of AI, increasing trust over the long term.
The CII said that its submission draws on consumer research carried out over several years as part of its Public Trust Index and highlights the potential for AI to support key areas that consumers and SMEs are seen to value in insurance, including ‘cost’, ‘protection’, ‘ease of use’, and ‘confidence’.
In advocating for a focus on governance of AI within firms, the CII points to its Digital Companion to the Code of Ethics and Addressing Gender Bias in Artificial Intelligence, which set out practical steps that individuals and firms can take to use AI in a responsible way.
Dr Matthew Connell, director of policy and public affairs at the CII, said: “While AI has been employed within insurance for many years, it is important that we continuously assess how it can be optimised for both professionals and consumers.
“We welcome the opportunity to offer recommendations to the Treasury Select Committee and utilise the extensive consumer research carried out by the CII to inform this work on AI in financial services.”