The upcoming EU AI Act aims to regulate AI systems used within the European Union across sectors. As a critical infrastructure sector, the financial services industry will be significantly affected by these new regulations.
Some key aspects of the Act relevant for financial services include:
High-risk Systems
AI systems used for creditworthiness assessments and credit scoring of
individuals will be classified as high-risk. Financial services firms developing
or using such systems will face greater obligations around risk management,
accuracy, human oversight, transparency and other requirements. Exemptions
apply for providers offering such systems solely for their own use, where the
scale of impact is limited.
Detection of Fraud
Notably, AI systems used to detect financial fraud, as provided for under
Union law, will not be considered high-risk under the AI Act. This carve-out
enables ongoing innovation by financial institutions in leveraging AI for fraud
prevention without the added compliance burden placed on high-risk systems.
Essential Services
The Act covers AI systems that determine access to essential services such as
housing, electricity and internet. Because financial services firms provide the
capital that enables access to such services, their AI systems again warrant
prudent governance.
Housing finance lenders, fintech platforms or even systems evaluating
eligibility for basic bank accounts could fall under stringent oversight.
Consumer Creditworthiness
In particular, systems evaluating consumer creditworthiness for lending
decisions can perpetuate discrimination based on ethnicity, disability or
other protected attributes. High-risk classification requires that financial
institutions carefully audit these systems for unfair bias, enhance
transparency and institute human oversight procedures. Service providers
in areas like credit reporting will also need to ensure their models account
for fairness and consumer protection concerns.
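The Act does not prescribe a specific fairness test, but a bias audit typically begins with simple disparity checks on model decisions. The Python sketch below computes an illustrative approval-rate ratio across applicant groups; the column names and data are hypothetical placeholders, not a mandated methodology.

# Illustrative only: the EU AI Act does not prescribe a specific fairness metric.
# This sketch checks a simple disparity ratio on lending decisions, using
# hypothetical column names ("group", "approved") and synthetic data.
import pandas as pd

def approval_rate_ratio(df: pd.DataFrame,
                        group_col: str = "group",
                        decision_col: str = "approved") -> float:
    """Ratio of the lowest to the highest approval rate across groups.

    Values well below 1.0 indicate that one group is approved far less
    often, flagging the model for closer review for unfair bias.
    """
    rates = df.groupby(group_col)[decision_col].mean()
    return rates.min() / rates.max()

# Purely illustrative decisions, not real applicant data
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
})
print(f"Approval-rate ratio across groups: {approval_rate_ratio(decisions):.2f}")

A check like this is only a starting point; a full audit would also examine error rates, feature influence and outcomes over time under human oversight.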
Life and Health Insurance
AI systems influencing life and health insurance policy decisions present
similar risks around discrimination and access to healthcare. The Act
similarly designates such systems as high-risk warranting added scrutiny.
Insurance providers leveraging AI, along with third parties such as model
developers and data providers, will need to ensure compliance.
Governance for Providers and Suppliers
As AI systems typically comprise multiple components supplied by different
vendors, financial institutions procuring models, data, and tools from third
parties will need contractual clarity on compliance obligations. Component
suppliers that provide insufficient transparency or pose unreasonable risks
could undermine lead providers’ overall compliance.
Environmental Impact
Technical documentation around high-risk systems must also include
details on energy consumption and environmental impacts. Increasing
transparency on resource usage, such as for computationally intensive AI
training, could inform more sustainable data center management and
procurement strategies for financial services firms.
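The Act leaves the measurement approach to providers; a common back-of-envelope estimate for such documentation multiplies accelerator hours by average power draw and the data center's power usage effectiveness (PUE). All figures in the Python sketch below are hypothetical placeholders; actual reporting should rely on measured values.

# Back-of-envelope estimate of training energy use for technical documentation.
# All figures (GPU count, hours, power draw, PUE, grid intensity) are
# hypothetical placeholders; actual reporting should use measured values.

def training_energy_kwh(num_gpus: int, hours: float,
                        avg_power_watts: float, pue: float) -> float:
    """Estimated facility-level energy in kWh: accelerator energy scaled by PUE."""
    return num_gpus * hours * avg_power_watts / 1000.0 * pue

energy_kwh = training_energy_kwh(num_gpus=64, hours=120,
                                 avg_power_watts=300, pue=1.4)
co2_kg = energy_kwh * 0.25  # illustrative grid intensity: 0.25 kg CO2e per kWh

print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:       {co2_kg:,.0f} kg CO2e")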
While promoting innovation, the EU AI Act adds oversight that protects
consumers and infrastructure stability for high-impact systems. As pioneers
in AI adoption, financial institutions must proactively govern risk and align
suppliers while also seizing carve-outs for fraud detection. Though
compliance brings costs, prudent frameworks also strengthen long-term
trust in AI advancement across the sector. The Act brings Europe to the
forefront globally in balancing emerging technology’s opportunities and
challenges.