Probability & Partners: AI governance for financial institutions, meeting the requirements of high-risk systems

Risk Management Technology
Photo: Maurits van den Oever and Fabiana Liu (archive Probability & Partners)

By Maurits van den Oever and Fabiana Liu, Quantitative Consultant and Junior Risk Management Consultant, respectively, at Probability & Partners

As AI systems become increasingly integrated across sectors, both public and private, they raise challenges on multiple fronts, including technical, legal, and ethical considerations.

The European Commission took its initial step towards a comprehensive regulatory framework for products and services with the introduction of the New Legislative Framework (NLF) in 2008. The NLF aimed to standardize rules for placing products on the EU internal market. Subsequently, in May 2018, the General Data Protection Regulation (GDPR) came into effect to strengthen the protection of personal data and its free flow within the single market.

However, neither the NLF nor the GDPR adequately addresses the unique risks and challenges posed by AI systems, such as novel threats to fundamental rights (for example, discrimination against certain groups), unpredictable behavior, adversarial attacks, data poisoning, and a lack of transparency.

To address these gaps, the European Commission proposed the first draft of the European AI Act (AIA) in April 2021. Although it may overlap with previous regulations, the AIA aims to establish an ex-ante governance framework to ensure the safe and trustworthy use of data-driven systems. Additionally, it seeks to contribute to a global consensus on AI trustworthiness.

Under the European AI Act, primary supervisory responsibilities will rest with national competent authorities and market surveillance bodies of the member states. These entities will play a frontline role in ensuring proper implementation of the regulation. Additionally, the EU AI Office and the European AI Board will provide guidance and facilitate cross-border enforcement efforts.

Definition and classification of AI systems

According to the European Commission, the definition of an AI system in the AIA is formulated to be future-proof: because new methods are developed quickly, the methodologies covered by the definition are kept very general. An AI system is defined as a machine-based system that infers from the input it receives how to generate output for explicit or implicit objectives that influence physical or virtual environments. This output includes predictions, generated content, and decisions.

The EU AI Act follows a proportionate, risk-based approach, in which each system is classified into one of four categories based on its level of risk: minimal or low risk, limited risk, high risk, or unacceptable (prohibited) risk. Each level corresponds to a set of specific requirements tailored to mitigate the associated risks.

Key dimensions for the OECD's risk classification of AI systems are People & Planet, Economic Context, Data & Input, AI Model, and Task & Output. This framework allows for a nuanced approach to classification, considering different aspects of a system's implementation. However, the thresholds between risk categories are not yet precisely defined, which makes it difficult to determine whether a given system falls into a particular category. The Act does give examples of high-risk systems, such as the operation of critical physical or digital infrastructure and the supply of energy and water. Examples more relevant to the financial sector include systems used for recruitment, vocational training, and credit risk applications.

Low-risk systems, such as AI-enabled video games or spam filters, are exempt from specific requirements other than drawing up a code of conduct. Limited-risk systems are those that interact with humans or generate or manipulate content, such as chatbots and deepfakes. These are subject to transparency obligations, meaning the provider must inform users that they are interacting with an AI system. High-risk systems are applications that significantly impact the user's life and are subject to the most extensive compliance requirements. Systems classified as posing 'unacceptable risk', such as systems for social scoring, certain forms of biometric identification, and manipulative techniques, are prohibited from being developed, deployed, or used.
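
To make the tiered structure concrete, the sketch below maps each risk category to the headline obligation described above. It is purely illustrative: the tier names and obligation summaries are our own shorthand, not text from the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative shorthand for the four risk tiers of the EU AI Act."""
    MINIMAL = "minimal or low risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"


# Simplified mapping from tier to headline obligation; the actual legal
# requirements are far more detailed and depend on the specific use case.
OBLIGATIONS = {
    RiskTier.MINIMAL: "voluntary code of conduct",
    RiskTier.LIMITED: "transparency: inform users they are interacting with AI",
    RiskTier.HIGH: "full risk management, documentation, and conformity assessment",
    RiskTier.UNACCEPTABLE: "prohibited: may not be developed, deployed, or used",
}


def headline_obligation(tier: RiskTier) -> str:
    """Return the headline obligation for a tier that has already been determined."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(headline_obligation(RiskTier.HIGH))
```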

Financial institutions must comply with additional requirements when they employ AI systems that fall into the high-risk category. These requirements primarily focus on building a robust risk management framework around the development and implementation of such systems. They cover all stages of the development process: high-quality datasets, technical documentation of the model, model robustness and accuracy, and logging of activities for result traceability, much of which is already integrated into most companies' risk frameworks.

While the Act builds on the foundation of the NLF, it goes further by introducing stricter measures, especially for high-risk systems. These measures include a conformity assessment process, registration, and a post-market monitoring phase.

Conformity assessment is mandatory before a high-risk AI system can enter the market. It can be performed either internally or externally. Self-certification is permitted for stand-alone high-risk AI systems, focusing on quality management and compliance with technical documentation. External assessment is required when internal criteria are not met or for AI systems intended for real-time biometric identification.

Considering the growing focus on refining compliance criteria, ISO has introduced ISO/IEC 42001, a standard for establishing an AI management system (AIMS) within an organization. Although this standard is voluntary and not legally binding, compliance with it can increase the likelihood of meeting the conformity assessment requirements mandated by the Act. With ongoing developments and anticipated changes, organizations have the chance to help shape these standards. It is therefore advisable to closely monitor the activities of CEN and CENELEC and to stay informed about the publication of harmonized standards in the Official Journal of the European Union.

Non-compliance with the obligations for high-risk systems can result in hefty administrative fines of up to €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year, whichever amount is higher.
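
As a quick worked illustration of how that cap works, the snippet below computes the maximum fine as the higher of the fixed amount and the turnover percentage; the turnover figure used is hypothetical.

```python
def max_fine(annual_turnover_eur: float,
             fixed_cap_eur: float = 15_000_000,
             turnover_share: float = 0.03) -> float:
    """Upper bound of the administrative fine: the higher of the two caps."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)


# Hypothetical institution with EUR 2 billion global annual turnover:
# 3% of turnover (EUR 60 million) exceeds the fixed EUR 15 million cap.
print(max_fine(2_000_000_000))  # 60000000.0
```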

While financial institutions with robust internal frameworks already have a good foundation for compliance, they may still be affected by the new AI regulation. In particular, they will need to carry out a pre-market conformity assessment; update their internal governance, risk management, and validation procedures to align with the specific requirements outlined in the Act; register the system in a new EU database, which still needs to be established by the European Commission; and establish a post-market monitoring process to ensure an ethical application of the system.

Implications for financial institutions

The purpose of the AI Act is ultimately to ensure that organizations employing AI systems have a robust risk management framework in place. This becomes especially apparent when assessing the requirements for AI systems that fall into the high-risk category.

As mentioned previously, demonstrability is a major theme in these requirements. It starts on the data side, with ensuring that data is unbiased and of high quality. It continues into model development itself, with the logging of activities to ensure that results are traceable and reproducible. After deployment, information should also be provided to the user for transparency. Around the full model development and deployment cycle, adequate risk assessment and mitigation systems should be in place, and the organization should produce full documentation on the system and its purpose.
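
As a rough illustration of what such traceability logging could look like in practice, the sketch below records each model run together with a hash of the exact input and the model version, so a result can later be traced back to the data that produced it. The field names and structure are our own assumptions, not anything prescribed by the Act.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit_trail")


def log_model_run(model_name: str, model_version: str,
                  input_payload: dict, output_payload: dict) -> None:
    """Write a reproducibility-oriented audit record for a single model run."""
    # Hash the exact input so the output can be tied back to its data.
    input_hash = hashlib.sha256(
        json.dumps(input_payload, sort_keys=True).encode()
    ).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "input_sha256": input_hash,
        "output": output_payload,
    }
    logger.info(json.dumps(record))


# Hypothetical credit-risk scoring call:
log_model_run("credit_risk_scorer", "1.4.2",
              {"applicant_id": "A-123", "income": 52000},
              {"pd_estimate": 0.031, "decision": "approve"})
```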

Demonstrability is also a key theme in the conformity assessment of high-risk AI systems. Internal conformity assessments have not been fully defined, but the AIA states that the organization should verify that the quality management system is compliant. The conformity assessment places a strong emphasis on technical documentation: the organization should examine whether the technical documentation meets the requirements for high-risk systems and ensure that the design and development process and the post-market monitoring of the system are consistent with it.
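
One lightweight way to keep that technical documentation consistent and reviewable is to maintain it as structured data. The sketch below uses a simple dataclass whose fields reflect our own reading of the documentation themes mentioned above; it is not an official template.

```python
from dataclasses import dataclass, field


@dataclass
class TechnicalDocumentation:
    """Minimal, illustrative record of documentation themes for a high-risk
    system; field names are our own shorthand, not an official structure."""
    system_name: str
    intended_purpose: str
    data_sources: list[str]
    model_description: str
    accuracy_and_robustness_tests: list[str]
    post_market_monitoring_plan: str
    open_issues: list[str] = field(default_factory=list)

    def missing_fields(self) -> list[str]:
        """Flag empty sections before an internal conformity review."""
        return [name for name, value in vars(self).items() if not value]


# Hypothetical example for a credit-risk model:
doc = TechnicalDocumentation(
    system_name="credit_risk_scorer",
    intended_purpose="Estimate probability of default for retail loans",
    data_sources=["internal loan book", "credit bureau extract"],
    model_description="Gradient-boosted trees, monthly retraining",
    accuracy_and_robustness_tests=[],
    post_market_monitoring_plan="Quarterly drift and fairness review",
)
print(doc.missing_fields())  # ['accuracy_and_robustness_tests', 'open_issues']
```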

For institutions with a strong risk control framework, these requirements are largely covered already. If an organization has controls in place for data quality, produces documentation on the model and its development activities, and maintains a framework for risk management, employing a high-risk AI system will not introduce much extra work.

That said, there is still some uncertainty surrounding the AIA. The accompanying harmonized standards have not yet been published, which means the exact classification of risk levels and the exact requirements corresponding to these levels are not yet concrete.

The Act was officially approved by the European Parliament earlier this year, and starting in 2026, compliance with the requirements for the high-risk category will be mandatory. Given the rapid evolution of the AI-driven environment, further changes and, in particular, further guidance are expected to formalize this first-ever legally binding framework for trustworthy AI.

To summarize, for financial institutions with strong model risk frameworks, the AIA is unlikely to impose a substantial additional burden. However, it is recommended to watch for the forthcoming harmonized standards and publications in the Official Journal of the European Union.