AI Regulation in the EU: A Revolution for Algorithms
With AI on the rise, the EU has intensified its regulatory efforts to ensure better conditions for the development and use of this potentially world-changing technology. Łukasz Łyczko and Konrad Frąckowiak, a senior manager and senior associate, respectively, at PwC Legal Poland analyse the implications.
The European Parliament has indicated that AI has immense potential to bring economic growth and social progress, and to enhance innovation and overall global growth. This potential comes, however, with the need for effective regulation. Even though such regulation is still in the legislative process, it has already been identified as challenging and raises numerous doubts. Much like every other piece of technology-related regulation, the "newtech" discussion also seems to require the insight of lawyers; in fact, it seems to be creating a whole new area for legal advice.
Key Legislative Discussions Around AI
One of the most important discussions held at EU level concerns the definition of an AI system: one that puts all the necessary solutions and systems in scope without affecting others. The biggest legislative challenge now seems to be how to propose a technology-agnostic definition that does not negatively affect technological solutions that are, in fact, far removed from classic AI technology. Those discussions are ongoing, and given the recent popularity of generative AI, it seems that the market will see yet another approach to the definition of an AI system before the AI Act is finalised.
The final result of the legislative work on the definition of AI should be considered one of the key legal points to be interpreted in the future. In fact, it will define the scope of applicability of the whole AI Act.
“The regulatory requirements for AI systems must be followed by any entrepreneur engaged in using or providing AI-related services.”
An additional point is that the European Commission has decided to use a risk-based approach, grouping AI systems according to the risk they may create when implemented. Under this concept, legal compliance requirements will depend on the risk classification of the given solution. It must also be stressed that the EU plan indicates that the regulation should provide for a list of certain solutions whose use should be prohibited.
According to the regulation, high-risk AI systems may include, in particular, AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts. Also potentially high risk are AI systems that fall into specific areas (eg, management and operation of critical infrastructure, education and vocational training; access to and enjoyment of essential private services; and public services and benefits).
Such AI systems will be subject to strict compliance requirements, including transparency, explainability and accountability rules (eg, ensuring human oversight and explicit provision of information to users), and in some cases will require registration and will face public and private oversight. The list of proposed regulatory requirements for AI systems is quite demanding and remains subject to further legislative discussion. Such discussion must be followed by any entrepreneur engaged in using or providing AI-related services.
AI’s Business Impact
Why are legal changes in the area of AI a more impactful subject than commonly thought? Currently, most industries use some kind of AI in their daily activities, either in direct operations or as part of functions performed by external suppliers on an outsourcing basis, or even through supply chains. AI can also potentially be used (even unconsciously) in the area of personal data protection or data privacy. In practice, almost every entrepreneur, especially in the area of financial services, may today be potentially covered by the changes introduced by the AI Act.
Although not every entity will have to carry out a complicated process of implementing the AI Act into internal regulations or contracts with external suppliers, every entrepreneur who, for example, makes meaningful use of outsourcing or who engages in, among other business areas, financial services should conduct a gap analysis to verify the extent to which the AI Act may affect its business solutions. The next step should be a due diligence (in-depth review) of existing AI solutions. This will help facilitate the subsequent implementation of the AI Act (regardless of its final content).
Finally, every entity that is even remotely impacted by the AI Act (or which could potentially be impacted by the AI Act in the future) should consider implementing an appropriate AI policy into its internal regulations. This could help facilitate AI-based solutions in the future: even if the scope of implementation may be insignificant now, the development of an entity's business solutions might make this an essential step.
AI: A Whole New Compliance Area
The AI Act is still a developing project in many areas. The enormous regulatory challenge faced by European lawmakers translates into constantly changing legal institutions and definitions in the content of the act. In addition, the entities obliged to implement the AI Act will soon have to deal with plenty of regulatory grey areas – ie, in the field of AI systems compliance or even the very definition of the terms AI and AI system.
At the same time, the AI Act is likely to be severe in terms of the sanctions, including financial ones, that may be imposed on entities obliged to apply it. It is not without reason that the AI Act is already being called a "GDPR for algorithms". Especially in the area of financial services, we may soon expect an avalanche of binding interpretations, guidelines and position notices from EU-based regulatory authorities, each of them a helpful guide but at the same time raising the threat of possible non-compliance.
“Every entity that is even remotely impacted by the AI Act (or which could potentially be impacted by the AI Act in the future) should consider implementing an appropriate AI policy into its internal regulations.”
What does this mean in practice? Potentially, in a few years, as was the case after the first AML regulations came into force, we will have a completely new field of legal practice: AI law.
What Lies Ahead for EU AI Regulation?
The next steps in implementation should be based on the final version of the text of the AI Act. The current version of the AI Act provides that it will become binding two years after its publication. This may seem like a lot of time for preparation, but the clock is already ticking. At this point, it is crucial to conduct appropriate due diligence to systematise existing solutions in the area of AI, and to start work on a gap analysis of the organisation so as not to delay building compliance in this area.
Will the AI Act be a "future-proof" regulation, as the European Union suggests, or will we face a constant series of amendments? AI is, after all (as stated by the EU), a fast-evolving technology. It seems that only time will tell just how “tameable” artificial intelligence is.