The European AI Act: a risk-based approach for AI systems, AI developers and governance

The long-awaited EU AI Regulation is now final (published in the Official Journal in July 2024). What a challenge for the EU! Finding the right balance between fostering technological innovation on the one hand and safeguarding the fundamental rights, safety and health of human beings on the other is a delicate exercise. Two key drivers underpin the new provisions: a product safety process and a risk-based approach. Some AI systems cause no harm; others require more transparency. The core concept is that of high-risk AI systems, which generate several obligations for both the provider and the deployer in terms of governance, quality, human oversight and product safety process (post-market monitoring, incident reporting…). Additional documentation must be delivered by the provider of a GPAI model to be integrated into an AI system, with due regard for IP rights. GenAI is a revolution in terms of data browsing and content creation capability. Two years is not that long to implement all these requirements, especially across multiple industry sectors. Sanctions follow the well-known regime: a fixed fine or a percentage of global turnover, whichever is higher. The level II measures and, especially, the harmonised standards that will operationalise the requirements of the AI Act are awaited with great interest; they will, hopefully, answer the questions that remain open at this stage and clarify some concepts and interpretations.

1. INTRODUCTION

Artificial intelligence is not new. On 11 May 1997, Garry Kasparov, the Russian world chess champion, lost a match against “Deep Blue”, an AI-based machine built by IBM which calculated 200 million chess moves per second. Over the following years, AI continued to evolve relatively silently. IBM came back into the picture in 2011 with “Watson”. The next step was achieved with Alexa and voice recognition in 2016. In 2022, ChatGPT (OpenAI), followed by Copilot (Microsoft), penetrated the market with the creation of text, images, videos and music. The disruptive revolution of Generative AI became obvious, and it has developed exponentially in recent years. Meanwhile, several incidents linked to the risks of algorithms were reported in the press.

2. LEGISLATIVE PROCESS

In February 2020, the European Commission published a White Paper on AI and proposed to set up a European regulatory framework for trustworthy AI. In April 2021, the Commission tabled a proposal for a new AI Act. The European Parliament voted to adopt the Regulation in March 2024 and the Council approved it on 21 May 2024. The text was published in the Official Journal of the EU on 12 July 2024 and entered into force 20 days later, on 1 August 2024. Regulatory Technical Standards (RTS) and Implementing Technical Standards (ITS) are awaited as level II measures, together with the European harmonised standards which should operationalise the AI Act requirements relating to high-risk AI systems.

3. APPLICABILITY

The AI Act being an EU Regulation, it applies directly, without transposition by the Member States into their national legislation. Within six months of entry into force, i.e., by 2 February 2025, prohibited AI systems must be phased out. Twelve months later, from 2 August 2025, the obligations related to General Purpose AI governance must be respected. On 2 August 2026, the bulk of the AI Act, including the obligations for high-risk systems listed in Annex III, becomes applicable. These obligations will also apply to existing high-risk systems if, after that date, the systems undergo significant or substantial changes in their design or intended purpose. On 2 August 2027, the obligations for high-risk systems defined in Annex I (Annex II in the former version: the list of Union harmonisation legislation covering a diversified range of industrial sectors) will apply. Finally, high-risk systems already used by public authorities must be brought into compliance by 2 August 2030.

4. EXTRA-TERRITORIAL APPLICATION

Regulations with extra-territorial impact are not issued very often. We are familiar with foreign legislation having effect in the EU, like the UK Bribery Act, FATCA, OFAC… The AI Act applies to providers located both in and outside the EU when the system is placed on the EU market, as well as to providers and deployers established outside the EU where the output of the system is used in the EU. The question is whether the AI Act applies to a US bank which performs an AI-based creditworthiness analysis for an EU resident. The response is most probably affirmative. However, for the time being, it could be assumed that the AI Act might not apply in the case of reverse solicitation.

5. RATIO LEGIS AND APPROACH OF THE EU LAWMAKER

AI should serve as a tool for people, with the ultimate aim of increasing human welfare through the development of innovation. The idea is to bring the human back to the centre of the picture (human oversight) and to use algorithms for the welfare of society while avoiding their misuse and breaches of fundamental rights.

The approach is risk-based and product-safety driven. The features and qualities of a given product are defined by a set of rules, market surveillance is organised, a conformity assessment is foreseen in certain cases and a CE marking is required. The scope is broad. An algorithm is, however, not a product like any other. Hence, there is sometimes little option other than affixing the CE marking to the accompanying documentation.

The AI Act defines the systems in scope. It is the first horizontal, cross-sector instrument, complementing the laws on product safety and liability. In parallel, the Directive on extra-contractual liability for defective products is under reform and a specific directive on AI liability is on its way. These will entail, in certain cases, rebuttable presumptions in the event of damage. The AI Act aims to identify risks and to address them in a targeted and proportionate way, with the intention of achieving a level playing field. The result should be trustworthy, human-centric AI ensuring a high level of protection of the safety, health and fundamental rights of persons living in the EU.

6. SCOPE 

  • Definition of AI system

    An AI system is a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The system is based either on machine learning, on logic- and knowledge-based approaches, or on statistical approaches. It is thus a digital system for which a degree of independence from human involvement is necessary, though the level may vary. Self-learning is in principle captured, and the system can change while in use. It is, however, clearly not a simple traditional software application based on rules defined by individuals to automatically execute operations, such as end-user computing tools (EUCs); a minimal sketch of this dividing line follows below. Needless to say, the quality of the data ingested in such a configuration is essential.
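    To make the boundary concrete, here is a minimal, hypothetical sketch (not taken from the Act) contrasting human-authored rules, which fall outside the definition, with a system that infers its decision logic from data, which falls within it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_decision(income: float, debt: float) -> bool:
    # Rules fixed in advance by a human: no inference from data takes place,
    # so this resembles traditional software (e.g., an EUC), not an AI system.
    return income > 30_000 and debt / income < 0.4

# Hypothetical historical data: [income, debt] rows with past outcomes.
X = np.array([[45_000, 5_000], [20_000, 15_000], [60_000, 10_000], [25_000, 20_000]])
y = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)        # the system learns its own "rules"
print(rule_based_decision(40_000, 10_000))    # deterministic, human-authored logic
print(model.predict([[40_000, 10_000]]))      # output inferred from the input it receives
```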

  • Main actors

    The provider and the deployer are the key stakeholders. The former develops the AI system and places it on the market, or puts the system into service by supplying it to a deployer or using it himself. The deployer uses the AI system for professional purposes, with the output used in the EU. The distinction between the two is not always clear-cut. A deployer may become a provider if he modifies the intended purpose of the system, including a General Purpose one, in which case the original provider is no longer considered the provider. This happens, for example, when a distributor or importer puts his own brand on a high-risk system.

7. RISK-BASED APPROACH

The AI Act distinguishes between several categories of systems. Some are considered to present an unacceptable risk because they violate fundamental rights and values. By definition, they infringe the rights of natural persons and harm them. They are prohibited.

The heart of the risk-based approach lies in the high-risk category. These systems impact health, safety or fundamental rights and trigger a number of obligations on both the provider and the deployer sides (risk management systems, conformity assessment, post-market monitoring, etc.).

There are also systems which entail or increase the risks of impersonation, manipulation and deception. This class covers, for instance, chatbots and deepfakes, which are subject to an enhanced transparency duty to ensure that users know they are either talking to an AI robot or confronted with AI-manipulated content. The question is whether this will be sufficient when fraudsters set out to exploit the vulnerability of users who are not the most digitally literate. The Commission will most probably produce Codes of Practice for these regulated non-high-risk systems.

Most of the remaining systems, which are also the vast majority, are considered common AI systems entailing only minimal risks, such as spam filters, to which no specific requirements apply.

The AI Act contains a whole chapter on General Purpose AI (GPAI) models intended to be integrated into AI systems, to which transparency requirements apply, including additional risk assessments and mitigating measures where the models present systemic risk.

8. PROHIBITED AI SYSTEMS

AI systems are prohibited where they use subliminal techniques to exploit vulnerabilities in a way that causes significant harm, or to materially distort behaviour by impairing a person's ability to make an informed decision, such that the decision taken would never have been taken otherwise (e.g., manipulating minors into taking a loan). This consideration somehow echoes the provisions on unfair market practices.

Subject to some exceptions, for instance where a magistrate's authorisation is granted in a specific context where the public interest is at stake, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited because of the limitation it places on fundamental freedoms. Similarly, predicting the risk that a person will commit a criminal offence, based on the profiling or assessment of her personality, is also forbidden, except for fraud prediction.

The right to personal integrity is also recognised by prohibiting the creation of facial recognition databases through untargeted scraping of facial images from the internet.

Biometric categorisation to infer race, political opinions or sexual orientation, and social scoring leading to detrimental or unjustified treatment, would infringe the non-discrimination principles and are therefore not allowed.

The right to work is also protected by prohibiting, in principle, the inference of emotions in the workplace or in educational institutions.

9. HIGH RISK SYSTEMS

A distinction has to be made between Annex I and Annex III. Annex I lists products covered by EU safety laws for which a conformity assessment is required and where the AI system may constitute the product itself or be a component of a product to which harmonised legislation applies. A broad range of products is concerned (machinery, toy safety, watercraft, lifts, medical devices…).

Annex III is certainly the most relevant one for banks and insurance companies. Based on the intended purpose, the current list covers several processes already in use in the financial sector.

It is worth noting that, for example, remote biometric identification is considered high-risk, as opposed to identity verification, for which AI tools are already in use in several Financial Institutions owing to the growth of digital finance and the number of remote onboardings.

Unsurprisingly, recruitment, promotions, performance monitoring and termination of working relationships are included in the high-risk category, given the impact these decisions may have on the future career of an individual.

More important still is the credit scoring or creditworthiness assessment of natural persons (often based, among other things, on the probability of default), and the pricing of life and health insurance. The high-risk impact appraisal may very much depend on the criteria used to make the evaluation, which should not, for example, lead to the systematic financial exclusion of certain categories of individuals; a sketch of a simple check follows below.
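As an illustration of the kind of control a provider or deployer might run, the hypothetical sketch below compares a scoring model's approval rates across groups to spot systematic exclusion; the data, group labels and 0.8 threshold (a commonly cited disparate-impact heuristic, not an AI Act requirement) are all illustrative:

```python
import pandas as pd

# Hypothetical scored applications: the model's decision plus a group attribute.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

rates = df.groupby("group")["approved"].mean()   # approval rate per group
ratio = rates.min() / rates.max()                # disparate-impact style ratio
print(rates)
print(f"approval-rate ratio: {ratio:.2f}" + (" (review needed)" if ratio < 0.8 else ""))
```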

According to the Recitals, calculating capital requirements for a credit institution is not considered high-risk. Nor are analyses of information under AML law performed through AI systems by tax authorities or FIUs. The situation is, however, different for AML enforcement authorities, or even for judges assisted in their interpretation of the law: AI can support decision-making but cannot replace it. Financial Institutions are not enforcement authorities; they are not entrusted with the exercise of public authority. Hence, AI systems based on AML models do not appear to be high-risk as long as they are not sold to or used by law enforcement authorities.

One should also emphasise that the list in Annex III does not constitute an additional legal ground to process data. In this respect, the GDPR and related privacy provisions remain applicable, subject to a conditional exception (see infra 10.B (art. 10(5) of the Act)). It is worth mentioning that the EU Commission is entitled to regularly review and update this list.

Moving on to possible exceptions, a high-risk system might not be qualified as such by the provider if the system only covers narrow procedural tasks (e.g., a system classifying documents or avoiding duplicates) or if it merely improves the result of a human activity (e.g., it aims to improve the language used). The same holds if the system detects patterns in existing decisions, or if it relates to a preparatory task (file handling with indexes, search functions, translation features). A provider who considers that his system is not high-risk must document this assertion, but a registration in the Commission's EU Database must nevertheless take place. The exception can never apply if the AI system performs profiling of natural persons. Guidance from the Commission is awaited, as well as Delegated Acts.

10. OBLIGATIONS OF PROVIDERS OF AI HIGH-RISK SYSTEMS

Obligations are grouped under four main headings: governance, quality of the AI system, human oversight and product safety process.

  • Governance

    The organisation of the provider must ensure that the AI system is constantly monitored during its whole lifecycle through a robust risk management system (policies, instructions, quality controls, data management…). Management and staff are accountable for the obligations of the provider. Except for post-market monitoring and incident reporting, which are new and specific to AI, the aforementioned duties are usually already included in the obligations of financial institutions subject to the Single Supervisory Mechanism and are in principle deemed to be complied with; the European harmonised standards which will operationalise the AI Act will, however, need to be considered. When an AI system does not conform with the requirements, the provider needs to take immediate remedial action or withdraw the system from the market.

    The provider must also identify and mitigate the “reasonably foreseeable risks” linked to the use of the system by the deployer, and test the system before placing it on the market so that the best mitigating measures are determined.

    Record keeping (logging) of events must be foreseen, with logs kept for at least six months (or longer where financial legislation so requires). Traceability of AI systems must be ensured, including for post-market monitoring; a minimal logging sketch follows below.
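    The hypothetical sketch below shows one possible shape for such event logging, so that each output remains traceable for post-market monitoring; the field names and retention mechanism are assumptions, not wording from the Act:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only event log; retention (six months or more) is enforced by the log store.
logging.basicConfig(filename="ai_events.log", level=logging.INFO, format="%(message)s")

def log_inference(model_version: str, input_summary: dict, output: str) -> str:
    """Record one traceable inference event and return its identifier."""
    event_id = str(uuid.uuid4())
    logging.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,   # summarise rather than log raw personal data
        "output": output,
    }))
    return event_id

log_inference("scoring-v1.2", {"features_hash": "abc123"}, "approved")
```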

  • Quality of the AI system

    The quality of the data used for training, validation and testing is vital. Under certain conditions (no re-use, deletion, necessity), even special categories of personal data under the GDPR can be used to detect biases and correct them. The technical documentation (Annex IV) must be kept up to date and describe the characteristics, output, architecture, human oversight, labelling, hardware resources, maintenance measures… that the system requires. Documentation must be kept for 10 years.

    Full transparency is necessary, with instructions from providers to deployers on how to use the AI system and interpret its output. These instructions must describe the system, its purpose, its foreseeable misuse, the specifications of the input data, the type of human oversight and the interpretation of the results, among other things.

    Regarding accuracy, robustness and cybersecurity measures, the EU Commission will produce benchmarks. This goes much further than the current GDPR provisions in this respect. Resilience against errors, faults, inconsistencies and third-party attacks is expected; a simple robustness check is sketched below.
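    Pending those benchmarks, the hypothetical sketch below illustrates one elementary robustness check: measuring how much a model's accuracy degrades under small input perturbations. The synthetic data, noise level and implicit pass criterion are all assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                      # synthetic ground truth
model = LogisticRegression().fit(X, y)

clean_acc = model.score(X, y)                                # accuracy on clean inputs
noisy_acc = model.score(X + rng.normal(0, 0.1, X.shape), y)  # accuracy under noise
print(f"clean={clean_acc:.2f} noisy={noisy_acc:.2f} drop={clean_acc - noisy_acc:.2f}")
```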

  • Human Oversight

    Humans need to be able to oversee AI systems; human-machine interface tools must therefore be delivered. The oversight must be proportionate to the level of autonomy and to the circumstances, so that a person can address anomalies, dysfunctions and automation bias, correctly interpret the output, and even override the AI system with a “kill switch” button, as sketched below. Serious incidents must be reported within 15 days to the authorities of the Member State where the incident occurred.
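    The hypothetical wrapper below sketches what such oversight hooks could look like in practice: low-confidence outputs are escalated to a human, and an operator-controlled kill switch halts automated output entirely. The class name, threshold and model interface are illustrative assumptions, not prescribed by the Act:

```python
class OverseenModel:
    """Wrap a model callable that returns a (label, confidence) pair."""

    def __init__(self, model, confidence_threshold: float = 0.9):
        self.model = model
        self.threshold = confidence_threshold
        self.halted = False                     # the operator-controlled kill switch

    def kill_switch(self) -> None:
        self.halted = True                      # human intervention stops the system

    def decide(self, features):
        if self.halted:
            raise RuntimeError("System halted by human operator")
        label, confidence = self.model(features)
        if confidence < self.threshold:
            return ("ESCALATED_TO_HUMAN", confidence)   # human review, not automation
        return (label, confidence)

wrapped = OverseenModel(lambda features: ("approve", 0.75))
print(wrapped.decide([40_000, 10_000]))         # low confidence -> escalated to a human
```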

  • Product Safety Process

    A conformity assessment must be carried out against harmonised standards which are not yet available. The assessment can be based on internal controls or performed with a conformity assessment body, i.e., an independent third-party assessment body notified to the authorities. An EU declaration of conformity, the content of which is set out in Annex V, must be produced per high-risk AI system and kept for 10 years. As mentioned above, the CE marking of conformity must be affixed to the AI system or its accompanying documentation. The high-risk AI system must also be registered in the Commission's EU Database, in plain language and in a machine-readable way, including when the AI system benefits from the exemption from high-risk qualification.

11. OBLIGATIONS OF DEPLOYERS OF AI HIGH-RISK SYSTEMS

The same aspects as those stated for providers apply: governance, quality of the AI system and human oversight within a product safety process.

Next to monitoring the operation of the high-risk systems, there is a key new requirement: informing providers about post-market evolutions and incidents. The input data must be sufficiently representative, human oversight must be ensured, and the instructions of the provider must be respected.

By complying with the rules on financial services, Financial Institutions usually already perform such monitoring. The question is whether it will be sufficiently tailored to AI and how far it might need to be adjusted. Should Financial Institutions deploy high-risk systems, they must also carry out a fundamental rights impact assessment before putting the system into use (art. 27). This is a key requirement.

12. GENERAL PURPOSE AI SYSTEM 

The Regulation defines General Purpose AI (GPAI) models as models that are trained on a large amount of data using self-supervision at scale, that display ‘significant generality’, are ‘capable of competently performing a wide range of distinct tasks’ and ‘can be integrated into a variety of downstream systems or applications’. The AI Act defines GPAI systems as systems based on a GPAI model which have the capability to serve a variety of purposes, both for direct use and for integration in other AI systems (art. 3(66) of the AI Act). A GPAI model is qualified as presenting a “systemic risk” if it has high-impact capabilities (based on the floating-point operations (FLOPs) used for training, i.e., compute capacity), supported by factual evidence and presumptions, subject to a decision of the EU Commission; a worked example of this compute criterion follows below. Codes of Practice will be set up. A presumption of conformity will derive from the respect of these upcoming Codes, which are, however, not mandatory.
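The Act presumes high-impact capabilities where cumulative training compute exceeds 10^25 FLOPs. The back-of-the-envelope sketch below applies the common "6 x parameters x tokens" rule of thumb for dense models, an estimation heuristic from the machine-learning literature, not a method prescribed by the Regulation; the model size and token count are hypothetical:

```python
THRESHOLD_FLOPS = 1e25   # cumulative training compute above which the presumption applies

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Widely used approximation for dense transformer models: ~6 FLOPs per
    # parameter per training token (forward and backward passes combined).
    return 6 * n_params * n_tokens

flops = estimated_training_flops(n_params=70e9, n_tokens=2e12)  # hypothetical 70B model
print(f"{flops:.1e} FLOPs -> systemic-risk presumption met: {flops > THRESHOLD_FLOPS}")
```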

Generative AI is not defined as such but appears to be presented as a sub-category of GPAI, or one of its best examples: a flexible generation of content that can accommodate a large range of tasks.

13. OBLIGATIONS FOR PROVIDERS OF GENERAL-PURPOSE AI MODELS

Technical documentation has to be prepared (training and testing process, results of evaluations, and the information listed in Annex XI) and provided to the AI Office and the NCA. This does not apply to models released under a free and open licence, except where they present a systemic risk.

In addition, documentation must be provided to providers of AI systems who intend to integrate the model into their systems, with due regard for IP rights and confidential information. Providers must have a good understanding of the model and must draft a copyright policy, including compliance with reservations of rights (art. 4(3) of EU Directive 2019/790). The same exception as mentioned in the above paragraph applies, subject to the reservation for systemic-risk models.

A sufficiently detailed summary of the content used in the pre-training of the GPAI model must be made public, so that copyright holders may enforce their rights. Large generative models require a vast amount of data to be trained, and text and data mining is used on content which might be protected by copyright. These aspects will certainly require additional guidelines.

A systemic risk exists where the model has a high impact on the market due to its reach, with foreseeable negative effects on health, safety, security or fundamental rights propagated at scale across the value chain. In that case, an evaluation of the model should be performed on the basis of standardised protocols in order to identify the risks, which need all the more to be mitigated should they materialise at Union level. A report on serious incidents and corrective measures should be submitted to the AI Office and to the NCA. An adequate level of cybersecurity is also required. Here again, Codes of Practice with updated information, measures to mitigate the risks, objectives and KPIs to be monitored by the AI Office should be in place.

14. REGULATORY SANDBOX

A regulatory sandbox must be established at national level in each Member State, amongst others to prioritise access for SMEs and start-ups and to foster innovation in the pre-marketing phase.

In order to train and test AI systems before placing them on the market, the AI Act provides for a regulatory sandbox to facilitate development in a controlled environment. That said, it is also possible for the provider to test the system in a proprietary environment under real-world conditions outside the sandbox. In that case, data usage must be based on consent within the meaning of the GDPR.

The testing plan must be submitted to the Market Surveillance Authority and registered in the EU Database.

15. MAIN USES OF AI BY THE FINANCIAL SECTOR IN THE US [1]

Most large Financial Institutions in the US still seem to be in the experimentation phase rather than the development phase and, a fortiori, the industrialisation phase. Human oversight and balancing risks against advantages are strongly emphasised.

A few examples of added value in terms of efficiency gains can be cited as illustration: using AI models to analyse central bank statements and predict monetary policy, to optimise portfolios, to pre-prepare financial statements, to screen CVs, to improve AML transaction monitoring, to increase voice assistants' capabilities, to perform fraud analysis, to automate reporting, and to perform gap analyses when legislation changes. As we can see, most of these examples do not involve high-risk systems. Prompt improvement and Large Language Models are widely used: the more contextual and precise the questions, the better the results obtained. Retrieval Augmented Generation (RAG), a more specific method which takes into consideration, among other things, the specificities of financial products, is also a focus area; a simplified sketch follows below.
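For readers unfamiliar with RAG, the deliberately simplified sketch below shows the idea: retrieve the most relevant in-house documents for a question, then condition the language model's prompt on them. The `embed` and `generate` placeholders stand in for whatever embedding model and LLM an institution actually uses; they are assumptions, not a specific vendor API:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    scores = [float(np.dot(q, embed(d))) for d in documents]   # similarity scores
    top = np.argsort(scores)[::-1][:k]                         # k best matches
    return [documents[i] for i in top]

def generate(prompt: str) -> str:
    return f"[LLM answer conditioned on]\n{prompt}"            # placeholder for the model call

docs = ["Term sheet for product X", "AML procedure v3", "Pricing grid 2024"]
context = "\n".join(retrieve("How is product X priced?", docs))
print(generate(f"Context:\n{context}\n\nQuestion: How is product X priced?"))
```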

16. AUTHORITIES: GOVERNANCE

  • At EU level

    The AI Act grants an important future role to the AI Office (a delegated body of the Commission) to develop standards, oversee GPAI models and help with the implementation of the new provisions in the Member States. The AI Board will support the implementation of the AI Act, including through Codes of Conduct for GPAI. A panel of independent experts will support the AI Office, while an Advisory Forum composed of representatives of both industry and civil society will provide technical expertise to the AI Board.

  • At national level

    One notifying authority and one market surveillance authority will be designated per Member State. The competent authority will be charged with oversight and enforcement tasks, such as monitoring the notified bodies performing conformity assessments, post-market monitoring and incident reporting, establishing the sandbox and applying sanctions.

    Regarding Annex I, there is a specific dedicated authority per sector. As far as Annex III is concerned, if the user or the provider is a Financial Institution, subject to a national exception (art. 74(6)), it seems legitimate to assume at this stage that the National Bank of Belgium would be the Market Surveillance Authority, which in turn would report to the ECB for credit institutions falling under the SSM.

    Should high-risk systems be used by authorities for enforcement purposes, it is envisaged that the Data Protection Authorities will be appointed.

    For General Purpose AI models, the Commission will in principle be in charge of enforcement, acting through the AI Office.

17. SANCTIONS

The use of prohibited systems is sanctioned by a fine of up to EUR 35 million or 7% of global turnover, whichever is higher.

The violation of obligations linked to high-risk AI systems, as well as to General Purpose AI models, is sanctioned by a fine of up to EUR 15 million or 3% of global turnover.

Supplying incorrect information when required would lead to a fine of up to EUR 7.5 million or 1.5% of global turnover; a short illustration of the fine mechanics follows below.
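For a sense of scale, the short sketch below applies the "whichever is higher" mechanics for a non-SME undertaking; the turnover figure is purely hypothetical:

```python
def max_fine(fixed_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    # For non-SMEs the applicable ceiling is the higher of the two amounts.
    return max(fixed_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2 bn global annual turnover
print(max_fine(35_000_000, 0.07, turnover))   # prohibited practices: EUR 140 mn ceiling
print(max_fine(15_000_000, 0.03, turnover))   # high-risk / GPAI obligations: EUR 60 mn ceiling
```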

18. CONCLUSIONS

The developments we observe in GenAI are worth exploring. Above all, a trustworthy AI framework is recommended. In a moving environment, a robust governance process is highly advisable, both inside and outside the existing product-related processes Financial Institutions are familiar with. Training and communication on the available tools are part of change management. Proper risk assessments, including on increased data-breach risks and cybersecurity, in particular in the light of the digital (data) framework but also in the Open Finance context and its future developments, would make it possible to anticipate the appropriate actions and be ready for the next steps. NIS and DORA, presumably, must not be forgotten. Copyright issues might need to be looked at on a broader scale. At this stage, many questions remain open or are premature. Smaller non-financial entities, not currently subject to heavily regulated risk and security management frameworks, will have to deploy significant efforts to comply with the minimum requirements. All stakeholders, whether providers or deployers, are looking forward to the European harmonised standards, the Regulatory Technical and Implementing Standards, the Guidelines and the Codes of Practice in order to fine-tune their approach.

[1] Source: OCBF webinar, Patrick Bucket & Mathilde Testard, Cap Gemini, “Generative AI for Financial Institutions”, 3 April 2024.

Author


Marie-France De Pover

General Manager, KBC Group Compliance