Warsaw, 27th December 2023
Joint Association Letter on Artificial Intelligence Act
We, the undersigned organizations, appreciate the opportunity to share our perspective ahead of the crucial negotiations during the fifth round of trilogues on the Artificial Intelligence Act (AIA). We would also like to express our gratitude to EU policymakers for their work on the AIA – our community supports its overarching goals of increasing Europeans’ trust in artificial intelligence and advancing innovations created within EU Member States. However, as the negotiations near their end, we remain concerned about recent proposals that undermine the original risk-based approach by introducing a multi-tiered framework.
We strongly encourage trilogue negotiators to avoid asymmetric regulations and a multi-tier framework for foundation models (FM) and general-purpose AI (GPAI). Asymmetric obligations inappropriately target only selected vendors and certain FM and GPAI models, irrespective of the actual risk or use case of a given AI application. As the Computer & Communications Industry Association (CCIA Europe) points out, arbitrary classification based on size criteria, such as the number of FM and GPAI users and the amount of compute used to train them, can effectively stop companies from further expanding and scaling their services. Furthermore, attempts to add new copyright requirements to the AI Act, despite the existing comprehensive EU copyright framework, would only create additional complexity; copyright should not be addressed in the AI Act, which is part of EU product safety legislation. These changes would run counter to the AIA’s original risk-based approach, which was the result of a comprehensive consultation process and struck a fine balance between enabling innovation and protecting users.
Furthermore, we are concerned about proposals to expand the list of high-risk system use cases in Annex III and the list of prohibitions in Article 5. We strongly encourage negotiators to clarify that the latest Article 6 and Annex III compromise text does not label all profiling systems as high-risk, but only limits the scope of the exemption from the high-risk classification. Without such a clarification, the existing language could result in an excessively wide high-risk classification under Annex III. Associating profiling with a broad negative impact on fundamental rights would also contradict the GDPR. As a result, positive use cases that benefit users – such as increasing accessibility or identifying biased data sets – could be banned.
In brief, while we support the overarching goals of the AIA, the current form of the legislation may hinder the emergence of innovation and impede the development of artificial intelligence in the EU, leading to decreasing competitiveness of Europe in times of rising global instabilities.
• Center for Data Innovation
• Computer & Communications Industry Association Europe
• Danish Entrepreneurs
• Digital Poland
• European Enterprise Alliance
• European Union and International Relations of Infobalt
• NL Digital
• Union of Entrepreneurs and Employers Poland