February 18 2022
ZPP’s contribution to consultations on Artificial Intelligence Act launched by MEP Axel Voss
The Union of Entrepreneurs and Employers welcomes the consultations on the Artificial Intelligence Act (AIA) organized by MEP Axel Voss. Below we present the most pertinent issues which, in our opinion, are key to unlocking the potential of the European digital economy in the years to come.
- What is the best definition of AI?
In our view, the definition currently proposed under the AIA is too broad. If enacted, the AIA would cover a range of solutions that, from the perspective of industrial and commercial practice, do not constitute Artificial Intelligence (AI). For instance, Annex I point (a) lists machine learning methods, while Annex I point (c) includes statistical approaches, Bayesian estimation, as well as search and optimization methods. If enacted in this wording, the AIA would classify virtually any algorithm, optimization method or statistical calculation as AI. Therefore, in the view of ZPP, it is of paramount importance to omit Annex I point (c) from the final version of the regulation.
- What encompasses high-risk?
ZPP has participated in the consultation process on the AIA since the beginning and has consistently advocated for the adoption of a risk-based approach. We welcome the Commission’s proposal regarding the imposition of mandatory requirements. In our view, by adopting a proportionate, risk-based approach, the Commission has struck a good balance between maintaining scope for innovation and protecting citizens.
At the same time, we believe that the provisions of AIA need more clarification. Areas that need more fine-tuning include the differentiation of responsibilities between AI actors in the value chain and specific requirements for high-risk uses of AI.
- How to combine it with ethical standards?
In our opinion, the Charter of Fundamental Rights of the EU, as well as the acquis communautaire, constitute the primary sources of ethical standards in the EU and are as such recognized as binding. Therefore, any limitation on the use of AI should be based on a potential infringement of rules which already form part of Community law today. Widening this group of sources would create risks to the coherence of the EU legal framework and risk decreasing legal certainty.
- How can we make sure the AI governance approach works?
We have formulated a number of specific recommendations with a view to the implementation and enforcement of the AIA.
First, the AIA should clarify the balance of responsibilities between AI providers, deployers and users. Particular attention should be paid to the responsibilities of AI users acting as deployers, and to the responsibilities of providers towards their customers. Currently, the AIA does not provide a definition of a deployer. In our view, defining a deployer as an entity making an AI system available to users in a specific situation would increase the coherence and clarity of the overall regulatory framework.
Second, the success of the AIA depends on whether its requirements are reasonable and feasible. To achieve that, the language of certain provisions needs to be revised so as not to set an impossible standard. For instance, the requirements imposed on high-risk AI are in principle proportionate, but their wording should be revised to make sure they can be applied in practice. An example of an obligation that is impossible to implement in practice is Art. 10(3), which states that “Training, validation and testing data sets shall be relevant, representative, free of errors and complete.” While the goal is right, it is impossible to guarantee this in practice. Moreover, certain techniques aimed at improving users’ privacy deliberately introduce error (noise) into datasets.
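The privacy point can be illustrated with a minimal sketch of the Laplace mechanism from differential privacy, one widely used technique of this kind. The dataset, sensitivity and epsilon values below are illustrative assumptions, not drawn from the AIA text:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Hypothetical record-level data (illustrative only).
ages = [34, 29, 41, 52]

# Differential-privacy parameters: sensitivity / epsilon sets the noise scale;
# a smaller epsilon means stronger privacy and larger deliberate error.
sensitivity = 1.0
epsilon = 0.5
scale = sensitivity / epsilon

# Each published value deliberately deviates from the true one, so the
# resulting dataset is, by design, not "free of errors" in the Art. 10(3) sense.
noisy_ages = [age + laplace_noise(scale) for age in ages]
```

A dataset protected this way is more useful to society precisely because it is inaccurate at the record level, which is why a literal "free of errors" requirement conflicts with privacy-preserving practice.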
In a similar vein, the AIA should avoid introducing disproportionate requirements. One such example is Art. 64(a), which states that “…upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.” On the one hand, this provision is contrary to the EU Trade Secrets Directive. On the other hand, there are less intrusive yet equally effective means of verifying the performance of an AI system. Therefore, in our opinion it would be beneficial to amend this provision so as to oblige AI providers and deployers to effectively support market surveillance authorities in carrying out robust testing (input/output auditing).
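Input/output auditing of the kind proposed above requires no access to source code: the system is treated as an opaque callable and only its responses to benchmark inputs are inspected. The sketch below is hypothetical; the model under audit, the test cases and the accuracy threshold are all invented for illustration:

```python
from typing import Callable, Iterable, Tuple

def audit_black_box(model: Callable[[float], int],
                    test_cases: Iterable[Tuple[float, int]],
                    min_accuracy: float = 0.9) -> bool:
    """Check a system's behaviour purely from input/output pairs,
    without ever reading its source code."""
    cases = list(test_cases)
    correct = sum(1 for x, expected in cases if model(x) == expected)
    return correct / len(cases) >= min_accuracy

# Hypothetical system under audit: flags transactions above a threshold.
flag_transaction = lambda amount: 1 if amount > 10_000 else 0

# Benchmark curated by the (hypothetical) surveillance authority.
benchmark = [(500.0, 0), (15_000.0, 1), (9_999.0, 0), (10_001.0, 1)]

result = audit_black_box(flag_transaction, benchmark)  # True: 4/4 cases pass
```

Because the audit depends only on observable behaviour, the provider's source code and trade secrets remain protected while the authority still obtains verifiable evidence of performance.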