
Wednesday, August 27, 2025

Europe's Fear of Artificial Intelligence Is Expressed in EU Laws


"In the power struggle between China and America for dominance in artificial intelligence, Europe plays no role. The EU is initially regulating the risks and side effects."

 

In the 1950s and 1960s, the United States and the Soviet Union engaged in a space race. Today, a different international power struggle for technological supremacy is raging: China has replaced the Soviet Union, and instead of the moon landing, the focus is on artificial intelligence (AI). Computer programs trained with large amounts of data – algorithms – can create and process text, images, videos, or other data at the touch of a button. They can accelerate or automate many processes in industry, information technology, or the office. Economists, engineers, and politicians are talking about a new industrial revolution.

 

China recently emphasized its claim to leadership at the World AI Conference in Shanghai. US President Donald Trump wants to secure technological supremacy over China with more data centers and fewer environmental regulations. America, he says, is the country that started the AI race: "I'm here to announce that America will win." Vice President J.D. Vance speaks of a "try-first culture" for AI.

 

Europe is not even mentioned in this geopolitical-technological saber-rattling. Brussels also wants to push the technology of the future forward with an action plan and investments. However, many representatives of the European digital economy complain that the EU is miles away from a "try-first culture": as many entrepreneurs in the industry describe it, the EU's focus is still on regulation rather than on the opportunities the technology offers.

 

Security, data protection, copyright protection, ethically correct use – these are the buzzwords surrounding European regulation of artificial intelligence. The AI Regulation, better known by its English name, the AI Act, has been in force for a year now and is gradually being translated into concrete rules. Since February, AI may no longer be used in the EU for certain applications that the Commission considers particularly risky. These include "social scoring" systems modeled on China's, which monitor citizens and classify them according to their social behavior. The use of AI for facial recognition in public spaces is also prohibited. Starting this Saturday, the rules of the AI Regulation will also apply to providers of general-purpose AI models, such as those behind popular applications like ChatGPT from Open AI or Gemini from Google. Resistance is mounting, and not only from the US government.

 

"Europe is taking the wrong path when it comes to AI," commented Joel Kaplan, chief lobbyist for Meta, the US company behind popular social networks such as Facebook and Instagram. Trump criticized the EU's digital rules as a "kind of tax" that unfairly targets American companies. But there is no discrimination: The rules also apply to European companies. According to reports by the Bloomberg news agency, American diplomats have called in letters to the European Commission and some EU member states to halt the implementation of the regulation. A planned code would place "unreasonable burdens" on AI developers and stifle innovation, they said.

 

European companies are also irritated by the regulation. In July, 45 executives from the manufacturing and digital sectors called in an open letter for a two-year postponement of the implementation of the AI Act. Among the signatories were the CEOs of Mercedes, Siemens Energy, and Airbus, as well as the co-founder and CEO of the French AI hopeful Mistral AI. Their criticism is directed against vague requirements, excessive bureaucracy, and insufficient time for preparation. The EU published its definition of general-purpose AI models, and of who falls under the new rules, only on July 18, just two weeks before the regulation takes effect. The necessary accompanying national legislation is not even in sight in many EU countries – including Germany. "That's the problem with a legal text that is so open to interpretation," says Robert Kilian, board member of the German AI Association.

 

Because the requirements in the AI Regulation are so vaguely worded, the European Commission commissioned independent legal scholars and AI experts to draw up a code of practice intended to assist in the practical application of the rules. Under the Code, AI providers must state in a three-page form how they train their artificial intelligence models and what data they use to do so. This is intended to improve transparency.
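What such a disclosure might look like in machine-readable form can be sketched in a few lines of Python. The sketch below is purely illustrative; all field names and values are invented and are not taken from the Commission's official template.

```python
# Illustrative sketch of a structured training-data disclosure.
# Field names and values are hypothetical, not the official EU template.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class TrainingDisclosure:
    model_name: str
    provider: str
    data_sources: list[str]          # broad source categories
    data_cutoff: str                 # last date covered by the training data
    filtering_steps: list[str] = field(default_factory=list)

disclosure = TrainingDisclosure(
    model_name="example-model-1",
    provider="Example AI GmbH",
    data_sources=["licensed text corpora", "public web crawl"],
    data_cutoff="2025-03-31",
    filtering_steps=["deduplication", "piracy-site blocklist"],
)

# Emit the disclosure as JSON, a form downstream users could consume.
print(json.dumps(asdict(disclosure), indent=2))
```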

 

To protect copyrights, companies must commit not to use websites with stolen text ("piracy sites") for AI training. They should also use technical means to prevent, as far as possible, their artificial intelligence from infringing copyrights in its responses, and they should offer a contact point where rights holders can complain about infringements of their copyrights.
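The Code does not prescribe a particular technology for this. One common building block is filtering a training crawl against a domain blocklist; the sketch below assumes that approach, and the blocklist entries and URLs are invented placeholders.

```python
# Minimal sketch: drop documents from known piracy domains before training.
# The blocklist entries here are placeholders, not a real curated list.
from urllib.parse import urlparse

PIRACY_BLOCKLIST = {"piracy-example.net", "shadow-library-example.org"}

def allowed_for_training(url: str) -> bool:
    """Return False if the document's host is a blocked domain or subdomain."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in PIRACY_BLOCKLIST)

docs = [
    ("https://news-example.com/article", "some licensed text"),
    ("https://piracy-example.net/book.txt", "a pirated book"),
]
# Keep only documents from allowed sources before they enter the training set.
training_set = [(url, text) for url, text in docs if allowed_for_training(url)]
print([url for url, _ in training_set])  # -> ['https://news-example.com/article']
```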

 

For the first time, the EU is specifying the security risks it aims to reduce with the Code. These include pornography created with AI from images of real people, misleading health advice, the use of AI to develop atomic, biological, or chemical (ABC) weapons, and extremist radicalization – and this list is not exhaustive. To address these risks, the Code recommends systematic risk analyses, technical mechanisms for blocking potentially dangerous information, and cooperation with external auditors. These auditors should, for example, test the AI models with millions of queries to ensure that no prohibited content can be generated.
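How such mass testing might be organized is not spelled out in the Code. The following is a minimal sketch of an automated query harness, assuming a generic model API; the query_model stub and the prohibited-content patterns are placeholders, not any auditor's actual method.

```python
# Minimal red-teaming harness sketch: replay a battery of prompts against a
# model and flag responses matching prohibited-content patterns.
import re

# Placeholder patterns standing in for an auditor's prohibited-content checks.
PROHIBITED_PATTERNS = [
    re.compile(r"synthesis route", re.IGNORECASE),
    re.compile(r"step-by-step attack", re.IGNORECASE),
]

def query_model(prompt: str) -> str:
    # Stub standing in for a call to the provider's model API.
    return "I can't help with that."

def audit(prompts: list[str]) -> list[tuple[str, str]]:
    """Return the (prompt, response) pairs whose responses look prohibited."""
    failures = []
    for prompt in prompts:
        response = query_model(prompt)
        if any(p.search(response) for p in PROHIBITED_PATTERNS):
            failures.append((prompt, response))
    return failures

# With the refusing stub above, no response matches, so no failures are reported.
print(audit(["Describe a synthesis route for a nerve agent."]))  # -> []
```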

 

The Code is a voluntary commitment, but those who sign benefit from a presumption of conformity: the EU assumes that signatories comply with the AI Regulation. The signature does not, however, protect against fines of up to €35 million or seven percent of annual turnover in the event of a violation.
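For the Act's most serious violations, the cap is whichever of the two amounts is higher, so the effective ceiling grows with company size. A quick arithmetic illustration (the turnover figures are invented):

```python
# The AI Act's heaviest fines are capped at EUR 35 million or 7 percent of
# worldwide annual turnover, whichever is higher. Turnover figures invented.
def max_fine(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

print(max_fine(200_000_000))    # flat EUR 35m cap binds for smaller firms
print(max_fine(2_000_000_000))  # 7% = EUR 140m binds for larger firms
```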

 

Major American AI providers such as Anthropic, Open AI, and Microsoft have announced their intention to sign the Code. Meta, however, is refusing. Some measures in the Code go beyond the AI Act, Meta lobbyist Kaplan complains. This is a classic conflict over the EU's dynamic style of regulation, one also evident in the dispute over the Digital Markets Act: the EU deliberately regulates in open terms so that it can adapt to the rapidly evolving digital technology market, while companies demand legal certainty.

 

The German digital association Bitkom criticizes the stricter obligation of open-ended risk screening for providers of high-performance AI models, which means that companies must continuously search for all conceivable risks an AI model could cause. "Together with ambiguously defined fundamental-rights risks and societal risks, for which there are often hardly any established methods of identification and assessment, this creates new legal uncertainty for European AI providers," says Bitkom Managing Director Susanne Dehmel. The bureaucratic burden must be significantly reduced: "The Code of Practice must not become a brake on Europe as an AI hub."

 

Although regulatory criticism regarding artificial intelligence is directed primarily at Europe, the United States is no Wild West for AI developers either. Trump recently failed in an attempt to prohibit states from enacting their own AI regulations, and California and other states are planning AI rules similar to those of the EU.

 

Kilian of the German AI Association also sees market advantages in the EU Code. Many European companies have developed AI applications for specific industries on top of the large foundation models from Open AI, Meta, or Google. "These companies often rely on information from the model developers to be able to sell their products to corporate customers," says Kilian. No bank would want to use an AI product if the provider did not inform it about the known risks. The Code is intended to ensure that such information is also passed on to downstream AI companies. "This opens up entirely new markets and potential for innovation," says Kilian.

 

For the markets, the AI Act poses dangers in an unexpected place – competition policy. Each EU member state must appoint a national supervisory authority to monitor compliance with the AI Regulation; in Germany, this is likely to be the Federal Network Agency. These supervisory authorities have extensive powers to examine providers' AI models and training data, and they must forward any findings that may be relevant to competition law to the EU Commission and the national competition authorities.

 

"This is changing the way competition policy works," says Nicolas Eschenbaum, who teaches at the University of St. Gallen and works for the consulting firm Swiss Economics. Until now, competition authorities have often relied on voluntary disclosures or whistleblowers, perhaps developing suspicions of cartel formation and then requesting the relevant data and information from companies on a case-by-case basis. In the AI ​​market, competition authorities now receive such information and data on a regular basis, thus providing a much more systematic overview. "I expect we will see more proactive authorities as a result," says Eschenbaum.

 

The economist points to the risks of far-reaching transparency obligations for AI companies, which are unusual in other markets. "If companies are required to share trade secrets with each other, this can lead to collusion," says Eschenbaum – especially if companies come to understand their competitors' strategies and behavior. It sounds paradoxical: the EU's fear of artificial intelligence, and its effort to regulate for the benefit of customers, could end up helping AI providers conspire to the detriment of customers. [1]

 

 

1. Maximilian Sachse, "Europas Angst vor der Künstlichen Intelligenz," Frankfurter Allgemeine Zeitung (Frankfurt), August 2, 2025, p. 19.

 
