Friday, October 6, 2023

Scholz and Macron Dream of an EU ChatGPT

   "Bridge day or not, this is perhaps the toughest week of the year for lobbyists. The EU badly wants a piece of the AI cake, and the cut is looming in Hamburg next week. Many hands are now reaching for the knife.

 

     The German-French engine could start humming again. The governments of the two countries will meet in Hamburg next week and want to agree on a common line on AI regulation. The EU is currently setting the course for the AI revolution; the all-important AI Act is scheduled to be negotiated in a trilogue on October 25. The central question: how tightly should Europe regulate a technology that is chronically unpredictable and at the same time seen as a fast track to growth for economies around the world?

 

     It's digital déjà vu: as so often before, the EU dreams of finally catching up with the Americans. After a European search engine, a European Wikipedia, European social networks and European clouds, this time it is supposed to work, with an EU ChatGPT.

 

     France has often resorted to regulation to slow down US competitors, but in June it proposed a different strategy: an overtaking maneuver, so to speak. French ministers warned against squandering this new "opportunity for productivity" (likely meaning that of French AI companies such as Mistral AI). The German federal government is also sending business-friendly signals to Brussels, and industry associations like Bitkom are producing paper after paper.

 

     The EU Parliament may well feel addressed by this concert: it now faces headwinds not only from industry but also from governments. The reason is the MEPs' delicate regulatory approach: they want to regulate AI at its core, that is, where the technology does not yet paint pictures, write texts or simulate people, but is an artificial brain that can later be used for all of this.

 

     Please, no legal uncertainty!

 

     This core is referred to as a "foundation model", a term coined at Stanford University. It is a model trained on an extremely broad data base that can do almost anything if embedded in suitable environments. At the core of the infamous ChatGPT sits a foundation model. The term is closely related to "general-purpose AI": an AI that can be used for purposes other than those originally intended, but is not trained on such an extreme data base. Yes, that is confusing, and that is exactly why these terms are being debated, because the last thing a young industry needs is legal uncertainty.

 

     The Commission and the governments now fear for legal certainty, because if a "foundation model" can do anything, anything can be done with it. In a mid-September statement, seen by F.A.Z. D:ECONOMY, the German government advocated mandatory self-regulation instead of hard requirements. The background is likely warning calls from industry: developers and start-ups would be hopelessly overwhelmed if they had to foresee everything that might later be done with their core AI.

 

     The government is also concerned about terminology: "Above all, we suggest sharpening the definitions and making them more differentiated." In addition, one should speak of "general-purpose AI systems" (gpAI), not of "foundation models", because "what exactly this includes is not clear." Yes, indeed.

 

     Please, do not overregulate!

 

     Germany and France, together with Denmark, the Czech Republic, Estonia and Ireland, have outlined a possible course in a "non-paper" (which F.A.Z. D:ECONOMY has also seen). This unofficial paper reads like a sprint along a tightrope: responsibility matters, but so does sustainability. Authorities and companies need legal certainty, and citizens' trust also requires transparency (what is real, what is AI?). But above all: please, do not overregulate.

 

     The authors paint the picture of a responsibility matryoshka: the provider of a high-risk AI could be a small company that has built on a general-purpose AI, which in turn could be based on a foundation model written by a lone developer or a corporation. Instead of starting at the core, the AI Act should stick to the risk-based concept: focus on systems, uses and risks, thereby protecting developers and even open-source activities. Transparency, for example through watermarks, should make AI recognizable.

 

     The paper literally says: "We therefore recommend not going too far in regulating all aspects and details of the distribution of responsibility in the very long, complex and constantly evolving AI value chain." Industry will like the German-French approach; the EU Parliament and consumer protection organizations such as the European Consumer Organisation BEUC probably will not. BEUC also advocates special rules for core AI, regardless of the risk scenario.

 

     Please, no traffic-light disputes!

 

     But the EU is not alone on the globe: the digital ministers of the G7 countries and the OECD have also taken up the cause of AI regulation. In the Hiroshima AI process, they want to walk the same tightrope: trust, legal certainty, innovation, risk management, responsibility matryoshka. Germany's traffic-light coalition is also indirectly being put to the test here: the G7 process is coordinated by the Digital Ministry under Volker Wissing (FDP), while work on the AI Act is shared between Federal Economics Minister Robert Habeck (Greens) and Federal Justice Minister Marco Buschmann (FDP).

 

     Industry representatives are therefore valiantly keeping up the pressure: Bitkom wants to avoid regulating core AI and to stick to the risk-based approach that the EU Commission originally proposed, long before ChatGPT turned the world upside down. But the association has another concern: how the exchange of information will proceed along the responsibility matryoshka. Anyone who builds on a provider's AI may be buying into unknown liability risks (see F.A.Z. Pro from September 12).

 

     Another major headache is copyright and the question of whether providers of a gpAI system must document the works used for training. France in particular will have to show its colors here: no country has advocated more fervently for authors' rights in the digital age. But who knows: maybe the new desire for an EU ChatGPT is stronger than old habits." [1]


1.  Hendrik Wieduwilt, "Scholz und Macron träumen von EU-ChatGPT." Frankfurter Allgemeine Zeitung (online), Frankfurter Allgemeine Zeitung GmbH, Oct 3, 2023.
