
Monday, 25 December 2023

Europeans know how to exclude the Chinese and the Americans from the European AI market


"TWO THINGS European lawmakers should get credit for are stamina and an extraordinary tolerance for bad food. Representatives of the EU Parliament, member governments and the European Commission, the bloc's executive body, spent nearly 40 hours in a dark meeting room in Brussels until the wee hours of December 9th hashing out a deal on the AI Act, Europe's ground-breaking law on regulating artificial intelligence. Observers shared pictures online of half-eaten sandwiches and other fast food piling up in the venue's rubbish bins to gauge the progress of the talks.

This ultramarathon of negotiation was the endpoint of one of the most diligent lawmaking processes ever. It started in early 2018 with lengthy public consultations and a weighty 52-person "High-Level Expert Group", which in 2020 led to a white paper on which all could comment online (1,250 groups and individuals did so). The legislation has yet to be released because kinks still need to be worked out, but the draft version was a document of nearly 100 pages and almost as many articles.

Was it all worth it? The thorough process certainly has led to a logically coherent legal approach, not unlike that of much product-safety legislation.

In order to give the technology space to evolve, the AI Act's first draft, which the commission presented in April 2021, mainly tried to regulate various applications of AI tools, rather than how they were built.

The riskier the purpose of an AI tool, the stricter the rules with which it needed to comply. An AI-powered writing assistant needs no regulation, for instance, whereas a service that helps radiologists does.

Facial recognition in public spaces might need to be banned outright.

But the idea of focusing on how AI tools are applied was predicated on the assumption that algorithms are mostly trained for specific purposes. Then along came the "large language models" that power such AI services as ChatGPT and can be used for any number of purposes, from analysing text to writing code.

Since these LLMs can be a source of harm themselves, for example by spreading bias and disinformation, the European Parliament wanted to regulate them as well, for instance by forcing their makers to reveal what data they were trained on and how they assessed the model's risks. By contrast, some governments, including those of France and Germany, worried that such requirements would make it hard for small European model-makers to compete with big American ones. The result, after the all-nighter, is a messy compromise that subjects only the most powerful LLMs to stricter rules, such as mandates to assess their risks and to take measures to mitigate them. As for smaller models, there will be regulatory exceptions, in particular for the open-source kind, which allow users to adapt them to their needs.

A second big sticking point was to what extent law-enforcement agencies should be allowed to use facial recognition, which at its core is an AI-powered service. The European Parliament pushed for an outright ban, in order to protect privacy rights. Governments, meanwhile, insisted that they need the technology for public security, notably to protect big events such as the Olympic Games next year in France. Again, the compromise is a series of exceptions.

Real-time facial recognition, for instance, is banned except for certain crimes (such as kidnapping and sexual exploitation), certain times and places and when it is approved by a judge or a similar authority.

"The EU becomes the very first continent to set clear rules for the use of AI," tweeted Thierry Breton, the EU's commissioner for the internal market. Mr Breton is never far from the social-media limelight: during the negotiation marathon, he continuously posted shots of himself in the middle of a huddle. Yet whether the AI Act will be as successful as the General Data Protection Regulation (GDPR), the EU's landmark privacy law, is another question. Important details still need to be worked out. And the European Parliament still needs to approve the final version.

Most important, it is not clear how well the AI act will be enforced—an ongoing problem with recent digital laws passed by the EU, given that it is a club of independent countries. In the case of the GDPR, national data-protection agencies are mainly in charge, which has led to differing interpretations of the rules and less than optimal enforcement. In the case of the Digital Services Act and the Digital Markets Act, two recent laws to regulate online platforms, enforcement is concentrated in Brussels at the commission. The AI act is more of a mix, but experts worry that both the forthcoming "AI Office", which is to be set up in Brussels, and some national bodies will lack the expertise to prosecute violations, which can lead to fines of up to €35m ($38m) or 7% of a company's global revenue.

The GDPR triggered what is known as the "Brussels Effect": big tech firms around the globe complied, and many non-European governments borrowed from it for their own legislation. The AI act may not do the same. Complex compromises and haphazard enforcement are not the only reasons. For one thing, the incentives in AI are different: AI platforms may find it easier to simply use a different algorithm inside the EU to comply with its regulations. (By contrast, global social-media networks find it difficult to maintain different privacy policies in different countries.) And by the time the AI Act is fully in force and has shown its worth, many other countries, including Brazil and Canada, will have written their own AI acts.

The protracted discussions over the AI Act have certainly helped people, both in Europe and elsewhere, to understand better the risks of the technology and what to do about them. But instead of trying to be first, Europe might have done better trying to be best—and come up with an AI act that has more rigour and less piling of exceptions on top of exceptions.” [1]

The Act has to block Chinese and American companies from the European AI market. To that end, companies that deploy capabilities too tempting for them to be left alone (real-time facial recognition, military applications, or large models that could be used for everything, for example) have to be out of the question in Europe. Companies pursuing these and other activities that are lucrative but dangerous to humans should be banned in Europe.

1. "Europe, a laggard in AI, seizes the lead in its regulation." The Economist, 10 Dec. 2023, p. NA.
