"20 open points, 38 hours of marathon negotiations, and at the end a political handshake deal whose fine print will be decisive.
The final round of negotiations on the European AI law, the so-called AI Act, which started on December 6th at 3 p.m. and ended shortly before midnight on December 8th, showcased a special feature of political business in Brussels: the trilogue. Intended to speed up the legislative process, in reality it often resembles a grab bag.
The co-legislators, Parliament and the member states, meet under the mediation of the Commission with the aim of finding a common compromise after the first reading. This is undoubtedly faster than the ordinary legislative procedure. However, it becomes problematic when speed comes at the expense of transparency and quality.
This fate may now have befallen the AI Act as well. The starting point of the negotiations was the ambition of the Spanish Council Presidency to conclude the trilogue during its term. Added to this was the Commission's desire to be the first to reach the historic milestone of comprehensively regulating artificial intelligence. This mixture created unnecessary time pressure and led to non-stop overnight negotiations, sometimes lasting 22 hours at a stretch.
Focus on the innovation part
The intensive negotiations were particularly characterized by the debates about the two major areas of the AI Act. The conflicts and compromises surrounding civil rights and banned applications deserve a separate consideration, which I will not undertake here. In this evaluation I would like to concentrate on the innovation part: the part that will be crucial for the development and use of AI in Europe.
Despite numerous acts of resistance within Parliament, the Commission and, in some cases, the Member States, some innovation-friendly successes have been achieved. Compared to the Commission's original draft from 2021, the compromise now being negotiated is far more open to innovation and entails significantly fewer bureaucratic burdens for European companies - both for AI developers and their users.
For example, it was possible to narrow down the EU Commission's broad definition of AI, which had covered all classic software, to the essentials. The AI Act now defines AI exactly like the OECD, which is intended to ensure international compatibility.
The focus is now on properties such as autonomy and machine learning, even if some of these appear only in the recitals to the law. The result, however, is clear: classic software that has nothing to do with artificial intelligence is no longer covered by the AI Act.
The risk system was also improved through lengthy negotiations. Unlike at the beginning, only systems that pose actual risks to health, safety and fundamental rights are now classified as high risk. With this change, a piece of language software or, for example, an AI for scheduling appointments no longer falls into the high-risk category merely because it is used in a certain critical area, such as a hospital or a power plant. This provides clarity and is an essential step in avoiding over-regulation.
AI Act must strengthen innovation
The AI Act must strengthen innovation instead of placing obstacles in the way of developers and users. Another success is therefore that our agreement creates the possibility of regulatory sandboxes, in which AI developers can test their systems under real conditions in a controlled environment. This is particularly important for start-ups and small and medium-sized companies.
It is also crucial that research and development have been clearly exempted from the AI Act, as has open source AI to a certain extent.
These are just some of the innovation-friendly improvements achieved through tough negotiations over the past two years. In addition, there were numerous negative proposals discussed along the way that could fortunately be prevented: for example, a public fundamental-rights consultation that all users of AI systems in high-risk areas would have had to carry out. A blanket ban on the use of personal data for training AI systems was also prevented.
One of the most important successes of last week's marathon trialogue is the prevention of a blanket high-risk classification of general-purpose AI systems (so-called "General Purpose AI", or GPAI for short). As an exception to the law's otherwise risk-based approach, the regulation of general-purpose AI follows its own logic. These systems include ChatGPT, but also many smaller applications that can be used for multiple purposes, such as speech recognition software.
Instead of massively over-regulating these systems through blanket high-risk classification - the Council's proposal was on the table - the EU is now creating clear responsibility along the AI value chain. In a "burden sharing" approach, GPAI providers must share information with companies that build these systems into their own high-risk AI so that they are able to comply with the requirements of the AI Act.
This is an extremely important achievement: it allows European companies to build secure systems without being stuck with the compliance costs or the liability for malfunctions of the GPAI systems they build on. Small and medium-sized companies in particular that integrate GPAI systems such as ChatGPT into their own systems receive massive regulatory relief.
Not an optimal agreement on GPAI
However, the agreement on the regulation of GPAI models, also known as foundation models, is not optimal. A majority for self-regulation, as demanded by the German federal government among others, could not be found in Parliament. The two-tier solution now planned for GPAI models makes more sense than blanket high requirements for all models: a large number of the requirements will only apply to the most powerful models. But the requirements for the lower tier are too extensive and unnecessarily bureaucratic. A particular sticking point will also be the threshold above which models belong to the powerful category. The only hard criterion defined so far is a cumulative training compute of 10^25 floating point operations (FLOPs). This cannot be a sufficient stand-alone criterion, especially since a trend towards more powerful, smaller models is foreseeable. Therefore, as a second avenue, the EU Commission is to create a classification system that could take into account various factors such as the number of parameters or the number of users. The specific design of this threshold will be crucial for a meaningful and practical implementation.
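To make the compute criterion concrete: under the provisional agreement, a GPAI model is presumed to fall into the stricter tier if the cumulative compute used for its training exceeds 10^25 FLOPs. The following is a minimal sketch of such a check; the estimate of roughly 6 FLOPs per parameter per training token is a common rule of thumb from the scaling-law literature, and the example model sizes are illustrative assumptions, not figures from the Act.

```python
# Sketch: checking a model against the AI Act's training-compute threshold.
# The 6 * parameters * tokens estimate is a rough rule of thumb for dense
# transformer training compute; it is not part of the AI Act itself.

THRESHOLD_FLOPS = 1e25  # threshold in the provisional agreement

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def is_presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the AI Act threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2T tokens stays well below
# the threshold (6 * 70e9 * 2e12 = 8.4e23 FLOPs):
print(is_presumed_systemic_risk(70e9, 2e12))    # False
# A hypothetical 1.8T-parameter model trained on 10T tokens exceeds it
# (1.08e26 FLOPs):
print(is_presumed_systemic_risk(1.8e12, 10e12))  # True
```

The sketch also illustrates the concern raised above: a smaller model trained more efficiently can stay below the threshold while still being highly capable, which is why compute alone cannot be a sufficient criterion.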
The idea of a Code of Practice was also added. What the Spanish Council Presidency brought into play as an alternative to self-regulation of GPAI models has become a big unknown in the outcome of the negotiations. As an interim solution, a Code of Practice is intended to make it easier for companies to comply with the law until standards are in place. Such a code of practice, like standards, is often a simpler way for companies, especially small and medium-sized ones, to meet the requirements of a law than taking the more expensive route of a conformity assessment.
While there is a proven development process for standards involving various industries and standards organizations, it is unclear exactly how the Code of Practice is to be created. If, in the end, only tech giants and industry heavyweights come together under the coordination of the AI Office, i.e. the European Commission, it would not be of much help to start-ups and medium-sized businesses. In addition, this Code of Practice would have to be developed very quickly, and it is still unclear whether it could actually be produced much faster than standards. Companies need guidelines quickly, as the AI Act will apply comprehensively just two years after its adoption.
The final text will now be finalized at the technical level in the coming weeks. In the end, the details will be crucial. This is another peculiarity of the trilogue: verbal political agreements are often reached that leave both sides room for interpretation. Whoever has the final say in the technical and legal drafting of the text will determine whether the handshake deal is also followed by the necessary majorities for approval in the Council and Parliament. The AI Act will now be put through its paces by the member states and the various political groups in Parliament; every word and every formulation will be fought over. And in the end it will be decided whether the EU is a pioneer only in the regulation of AI or also in innovation and civil rights." [1]
1. Svenja Hahn: "Handschlag-Deal zum AI Act: Am Ende entscheiden die Details" ("Handshake deal on the AI Act: in the end, the details decide"). Frankfurter Allgemeine Zeitung (online), Frankfurter Allgemeine Zeitung GmbH, Dec 12, 2023.