Financial Regulators Including the Swiss Supervisory Authority Are Increasingly Focusing on AI in the Financial Sector
Dr Dirk Spacek, partner and co-head of the practice groups TMC and IP, and Dr Vaïk Müller, partner and head of banking & finance, Geneva, both of CMS Switzerland, discuss the guidance on prudently implementing AI technology within financial services.
On 30 May 2024, the European Securities and Markets Authority (ESMA) issued a comprehensive statement on the use of artificial intelligence (AI) in the provision of retail investment services. ESMA acknowledges the transformative potential of AI in enhancing efficiency, innovation, and decision-making for investment services. However, the statement also underscores the associated risks, including algorithmic biases, data quality issues and transparency concerns that come with the adoption of AI technologies.
ESMA’s announcement outlines the diverse applications of AI in the field of investment services, ranging from customer service and support to compliance, risk management, fraud detection, and operational efficiency. For example, firms could use AI tools to analyse a client’s knowledge and experience, financial situation (including risk tolerance), and investment objectives (including sustainability preferences) in order to provide personalised investment recommendations. Because AI can process vast amounts of financial data, it could also be used to forecast market developments and to identify investment strategies and potential investment opportunities.
Associated risks and relevant remediation
The risks related to the use of AI systems are addressed in detail in ESMA’s statement, including challenges such as lack of accountability, transparency, data privacy concerns, and algorithmic biases. ESMA provides guidance on the organisational requirements, governance structures, risk management frameworks, knowledge, and staff training essential for the effective integration of AI into investment services. The importance of data integrity, algorithm testing, and adherence to regulatory standards is highlighted as a key element in mitigating the risks associated with AI technologies. Moreover, the announcement emphasises the necessity for firms to prioritise clients’ best interests, maintain transparency about the use of AI in investment decision-making processes, and adhere to stringent conduct of business requirements under applicable regulations, in particular Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments (MiFID II). ESMA underscores the need for firms to establish robust controls, conduct thorough testing of algorithms, and ensure compliance with data protection regulations when deploying AI systems in investment services.
Although ESMA’s statement focuses on the provision of investment services to retail clients, it may also prove useful for the provision of these services to professional investors, subject to any applicable opting-in. The statement is likewise instructive for Swiss financial institutions and serves as a reminder that the implementation of AI-based solutions must first be considered from a regulatory perspective.
The identification of these risks and of related remediation measures is not a new concern for supervisory authorities in and outside the EU. In Switzerland, the Swiss Financial Market Supervisory Authority (FINMA) had not been inactive prior to the release of the ESMA statement. In particular, it had already commented publicly on AI in the financial sector in its Risk Monitor released in November 2023, under the section “Longer-term trends and risks”. Based on its own survey, FINMA observed that chatbots have become increasingly available in the financial services sector and that general interest in AI solutions has taken a further upward leap.
In this context, FINMA has provided initial, generic guidance to Swiss financial institutions contemplating the use of AI for the provision of financial services, such as:
- the implementation of clear governance and allocation of responsibilities within the organisation, with a view to ensuring proper control of AI systems;
- the assessment of models, with a view to testing their robustness, accuracy and reliability; and
- the need for transparency and explainability of AI use and models.
The risk of algorithmic biases has also been discussed by FINMA, which stresses the central role of personal data processing activities in that respect and the necessity to assess such risks before developing a service based on AI. Needless to say, AI-based data processing in this context also has implications for data privacy compliance, and such issues are relevant for FINMA for general compliance reasons as well. Depending on the extent of, and reliance on, AI-based data processing and the sensitivity of the personal data at stake, a so-called “data protection impact assessment” and prior consultation with the Swiss Federal Data Protection and Information Commissioner may separately be required (see Article 22 et seq of the Swiss Federal Act on Data Protection).
Conclusion
In conclusion, ESMA’s statement serves as a first piece of foundational guidance for investment firms seeking to leverage AI technologies while maintaining regulatory compliance and client-centric practices. It also paves the way for forthcoming regulatory developments not only in the EU but also in Switzerland. It is more than likely that FINMA will keep tracking the evolution of AI in investment management in the wake of its 2023 Risk Monitor, and that further official statements along the lines of ESMA’s may follow.