The European Parliament has agreed on a framework for
artificial intelligence (AI), and the vote is now imminent. Here are four
perspectives on what that means.
A solution for ChatGPT after tough wrangling
By Kai Zenner
June 14, 2023 will be an important day for artificial
intelligence (AI) in Europe. After 18 months of negotiations, the European
Parliament's response to the proposed AI regulation is up for a vote. The EU
Commission's draft was a legislative novelty: for the first time, rules on
product safety and the protection of human rights were combined in order to
better address the particular challenges of developing and deploying AI
systems.
The starting point was the realization that the growing
unpredictability, opacity and complexity of current machine learning and deep
learning systems are increasingly creating legal gaps.
At the same time, to ensure that the new regulations do not
stall European innovation in AI, the Commission also chose a risk-based
approach. The proposed law distinguishes, on a sliding scale, between low-risk,
high-risk and banned AI technologies. Low-risk systems, such as an
AI-controlled toy car, are subject only to the general rules of law, be it the
EU toy directive or data protection and youth protection law. If an AI system
is high-risk, the draft AI regulation imposes special obligations on its
providers and users; whether this is the case is determined by field of
application, from education to the judiciary. A few AI systems, such as social
scoring, which classifies and evaluates people according to their social
behavior, will be banned entirely.
Why did parliamentarians need 43 technical meetings and 12
political meetings to discuss the 89 recitals, 85 articles and nine annexes of
the AI regulation? Part of the answer lies in the complexity of the proposed
law and the involvement of two parliamentary committees. But three substantive
reasons were even more important.
First, ideology: two worlds collided in the European
Parliament – Members who see powerful AI systems as a threat to individuals
and society, and those who focus primarily on the opportunities offered by the
technology. While one camp did not think the AI regulation went far enough,
the other feared over-regulation.
Second, conceptual problems: with its legislative proposal,
the Commission put forward a horizontal framework that treats all high-risk AI
systems equally across the most diverse sectors and fields of application.
Across party lines, however, MEPs quickly realized that this does not work in
practice. An AI system involved in decisions about access to public services
must not use biased data sets. With an AI system involved in researching drugs
for use during pregnancy, the situation is reversed: only data from women of
childbearing age are relevant.
Thirdly, rapid technological progress: even when the AI Act
was presented on April 21, 2021, its product safety rules already struggled to
cope with self-learning, ever-changing AI systems. With ChatGPT, however,
published by OpenAI in November 2022, the law was completely overwhelmed. A
technology without a fixed purpose and without clear applications, malleable
like digital modeling clay, is not even conceptually recognized as a product by
the rigid product safety rules.
The result now achieved by Parliament not only takes greater
account of the context in which a high-risk AI system is used; it also
addresses the new challenges posed by ChatGPT. The scope of the AI regulation
has also been clarified and fully aligned with the recognized OECD definition
of AI. This averted the danger that, in the future, every automated process,
for example in an Excel file, would be understood as AI. MEPs also resolved many
overlaps with other EU laws, notably with sectoral rules on cybersecurity and
data protection, as well as on medical and financial products. The clear focus
of the technical negotiations was the structure and powers of European and national
supervisory authorities. There was a cross-party will to learn from the
implementation errors in the GDPR, such as the different interpretations of
core principles. Consequently, a clear harmonization at EU level was proposed,
coordinated by the Commission and a strengthened AI office. However, there is
still a long way to go before the AI Act enters into force. After the vote on
June 14, the so-called trilogue negotiations between the European Parliament
and the EU Member States will begin. If an agreement is reached by Christmas
this year, the proposal could formally come into force by June 9, 2024, the day
of the European elections. Transitional periods until mid-2026, however, mean
that developers and users still have plenty of time to prepare for the new rules.
The “Brussels Effect” and what still stands in its way
By Axel Voss
The planned AI regulation can be viewed critically.
Nevertheless, it is right and important to regulate this technology. AI systems
are already turning old habits and processes upside down. Exactly where the
limits of technical development lie is extremely controversial, even among
experts. The EU legislator must therefore intervene if it wants to ensure that
our social and legal system can deal adequately with increasingly intelligent
and powerful computers. If the EU law is also balanced and practical, it could
even trigger a "Brussels effect" in the end: other states would adopt
the robust EU legal norms and thus not only increase the security of AI systems
around the world, but also create a competitive advantage for Europe. Our
companies could export their AI products and services abroad without major
adjustments and with a de facto EU seal of quality.
Despite the great progress made in the parliamentary report,
the law's imbalance continues to stand in the way of this positive scenario.
Out of 85 articles, 82 focus on the risks of AI. While there was still an
“ecosystem of excellence” and an “ecosystem of trust” in the AI White Paper published
by the EU Commission in 2020, the regulatory proposal of 2021 only features the
latter. The coordinated plan, which is intended to cover excellence, is
non-binding and has so far only been implemented half-heartedly by the member
states. How is the EU supposed to achieve the goal of catching up with the AI
world leaders?
The fragmented education system, the lack of transfer of
ideas from science to industry, the poor infrastructure and overly restrictive
data protection mean that the EU is falling further and further behind
globally.
If the Commission and standardization authorities do not
manage to provide the essential guidance and technical standards in good time,
the different interpretations of the AI Act will make our situation even worse.
The resulting legal uncertainty would know only one winner: the American tech
giants. Only they are in a position, with a host of consultants and lawyers, to
quickly adapt their business to the new legal requirements. Even massive fines
are hardly noticeable in their annual accounts. The situation is quite
different for European competitors, especially small and medium-sized
enterprises (SMEs) and start-ups.
The mere existence of GDPR sanctions has discouraged many
companies from investing in big data applications.
The AI regulation could have a similar effect: strengthening
the market power of the tech giants in AI and forcing Europe's companies out of
the market for good.
In order for the AI regulation to be a success story that
efficiently protects civil rights and freedoms while at the same time promoting
innovation, four points in particular must be addressed in the trilogue
negotiations with the member states. First, better regulation: Overlaps and
contradictions with the many other EU digital laws and sectoral rules must be
identified and remedied even more consistently. Many AI systems and possible
use cases are already extensively regulated. It is therefore necessary to check
carefully in each case whether the AI Act is really necessary and to what
extent its new requirements are not already being met elsewhere. Secondly,
EU-wide harmonisation: it must be ensured that the same rules and opportunities
apply from Ireland to Bulgaria. Therefore – within the framework of the EU
treaties – the maximum possible should be defined at EU level and the national
supervisory authorities should be monitored more closely.
If, as with the GDPR, a Member State overreaches or fails to
act against certain problems, the Commission must intervene. Thirdly, the
promotion of innovation: the legal and financial framework for innovation, for
example for training on data sets, must be significantly improved so that
innovation pays off for companies. A
European counterpart to the “National Institute for Standards and Technology”
(NIST) in the United States should also be considered. Fourth, SMEs and
start-ups: their ability to innovate is Europe's great strength, but they
suffer the most from difficult and expensive compliance with EU digital laws.
They should not only be helped with more exemption rules. Given the market
dominance of some corporations, the legislator should also intervene in the AI
value chain to ensure the flow of information between all actors. Only with the
necessary knowledge about the external components built into its product or
service will a European AI start-up ultimately be able to meet the requirements
of the AI Act.
The AI regulation must bring the economy along
By Monish Darda
Historically, regulation has made technology more
responsible, safer, and more ethical. In the case of AI, and especially
generative AI, it is no different. The draft of the EU AI law has remarkable
approaches and is breaking new ground in this form. The risk-based approach,
which regulates high-risk applications specifically and through special
obligations, while "general purpose" AI falls under the general
rules, has a lot to offer in principle. It enables responsible innovations in
high-risk areas. The special obligations for providers of so-called foundation
models, such as those behind ChatGPT, can prove to be an effective means of
combating the dangers of generative AI. From the point of view of an
international company like Icertis, five points seem important for fair AI
regulation in Europe.
Ease of use and legal clarity: Regulations must be easy to
understand if they are to be followed. They must complement existing
regulations, not work against them or overlap with them. Clearly defined
application boundaries, guidelines for the quality of training data and
workable rules for providers and users would serve AI well. In particular,
there must be clarity in the definition of the complex concept of AI itself so
that the law has a fixed scope of application.
Global standards: Consensus on regulation needs to be
established across the globe, taking into account specific regional needs.
Regulation that fails to respect these global realities ultimately harms
competitiveness and further innovation. The basis of AI regulation must be
internationally consensual so that heavily regulated regions do not fall behind
others that regulate little - and vice versa. We need to start building
consensus now, because it takes time. It would be an important achievement if
the law could establish mechanisms for this.
Risk-appropriate regulation: the requirements of genetic
engineering law are much stricter for human cloning than for the genetic
modification of crops. AI regulation should likewise focus on AI applications
that actually put insights into practice or generate new content. Regulatory
requirements must be based on the impact on people. The draft AI law takes this
to heart; the trilogue should do the same.
Compliance with regulation must be an economic incentive: if
regulation is funded by those who are being regulated, it has a greater chance
of succeeding, because there are financial incentives for its implementation.
It forces companies across the technology lifecycle to come together and work,
in their own interest, towards the common goals of regulation. In the world of
AI, this is possible now rather than later. The term "AI" must carry a positive
connotation if companies are to embrace it. If it has negative connotations,
that is detrimental to the market value of any company using the technology.
Here the challenges are great not only for the AI law: the draft "Data Act"
also wants to create a European ecosystem for data-driven business models
without any real incentives for the economy to participate in it.
Bankruptcy protection for small and medium-sized companies:
historically, large companies weather regulatory changes better because they
plan ahead and have more resources. Only regulation with graduated sanction
structures that protect both large and small companies enjoys acceptance in
practice and has a chance of success. Innovation comes from companies that rely
on the adoption of new technologies to grow. Google and Meta, for example,
emerged as start-ups that developed into tech giants using disruptive
technology. AI, too, is often spread by young, bright minds. A particular
challenge during the trilogue is to find appropriate solutions for "open
source offerings". Europe must promote the creativity of its many bright
minds, create a secure legal framework for their work, and not over-regulate
here.
If Europe wants to establish itself in the world, then AI
regulation must be pragmatic. It must support large and small companies and
encourage experimentation. At the same time, however, regulation must allow for
the control of AI in a way that protects fundamental principles of ethics and
law, including privacy. The draft AI law has potential and the world is
watching with excitement, hope and expectations that the law will succeed.
AI as a predetermined breaking point of the rule of law
By Rolf Schwartmann
In the United States, an airline has been sued for damages.
Two lawyers are now in trouble with the court because they used ChatGPT to
draft the lawsuit, and the bot simply made up the precedents it cited. The bot
had noticed that suitable cases are needed to win in a case-law system - and
got creative. According to the EU Parliament's approach to the AI regulation,
"general-purpose AI", like the ChatGPT application, is not inherently risky.
Only the providers of a "foundation model", i.e. the model trained on large
amounts of data that underlies the application, are subject to special
obligations in a newly created article. If these are complied with, a bot based
on such a model is considered safe.
It sounds plausible that it is not the technology that is
dangerous, but the people who use it. But what if the person who, under the AI
regulation, has to decide whether to press the stop button is using a
technology that he can neither understand nor master? Software like ChatGPT is
based on an AI model whose underlying data is of opaque origin, and to which
right and wrong, permitted and forbidden, contextually meaningful, pointless or
even harmful make no difference. It is like a wild artificial animal that is
almost impossible to tame. A lawyer who uses ChatGPT is permitted to do so
under the draft AI regulation; the general rules apply to its use. If the
lawyer deliberately presents false facts in court proceedings, that can be
fraud. The judge, however, must examine the law himself. If he uses ChatGPT,
for example to research, to conduct a legal dialogue on the case or even to
adopt a proposed solution in the judgment, this is tricky. In any case, legal
digital competence is not enough when the workings of the bot can be neither
understood nor mastered, yet it produces results that sound plausible. A
provocative thesis on the use of ChatGPT: the careful and prudent judge does
not need this system, and the system does not need any other kind of judge.
Bavaria and North Rhine-Westphalia have started a project to
develop a "generative language model for the judiciary", intended, for
example, to relieve courts in mass proceedings. This concerns consumer
protection, with schematic tests for legal enforcement in civil law, for
example in "diesel cases". In the fight against crime, and not only online
crime, appropriate use of AI is regarded as indispensable for an effectively
functioning constitutional state. Bots in the judiciary should not be based on
something like ChatGPT, but on special models, and on systems built on them
with clear accountability. These would have to be trained with selected data,
for example from legal and judgment databases, as well as literature fed in
according to transparent criteria, and updated in real time.
Something like that costs a lot of money. A state data
center, or perhaps a private company, would have to take responsibility for the
database, the training and operation of the system, its optimization goals and
contexts, and the corridor of its decisions. If necessary, all parties should
be able to use it according to fixed rules so that the results can be
questioned - for example on appeal. The bot could be treated as a kind of
expert, albeit a very far-reaching one. Findings from experts are often
adopted. In principle, experts are appointed on the basis of their competence,
and counter-opinions are possible - this should also apply to AI. If AI drafts
are disclosed, surprise decisions that violate the right to be heard are ruled
out. But the approach is risky: if AI systems compete against each other in
proceedings like chess computers and can only be checked by themselves, the
rule of law is abolished.
The language model would therefore have to be told precisely
where the limits of its auxiliary role lie, and it would have to enable the
legal practitioner to reach his or her judicial decision autonomously. In a
specific case, the computer would have to disclose, reliably and transparently,
which parameters and which data, for which context, its suggestion is based on.
On this basis, a human can administer justice.
But how should the judge go about it? He must not follow the
digital mastermind with blind confidence, but examine its suggestions with the
necessary mistrust; that remains his problem. The doctor who wants to reject an
AI-assisted diagnosis faces the same problem. However, the cases are
fundamentally different. After the doctor has made the AI's diagnosis and his
doubts transparent to the patient, the patient must make a self-determined and
responsible decision. Courts, by contrast, adjudicate on third parties and
exercise state authority over people on behalf of the people, whether those
people want it or not. If AI drafts, like expert opinions, flow into
proceedings, effective precautions against the delegation of responsibility
would have to be taken. In the future, the judge's own contribution may be to
justify why the AI correctly recorded the facts, named all the essential points
and omitted the inessential ones, and why the result is adopted. If the judge
does this autonomously, wisely and thoroughly, then the system is suitable.
Otherwise, it introduces the robot judge. If we make mistakes here, the use of
AI in the judiciary can become the breaking point of the rule of law.
Kai Zenner is the head of Axel Voss' Brussels office and was
involved in the negotiations on the European Parliament's proposal.
Axel Voss (MEP) is the EPP rapporteur for the AI regulation
in the European Parliament.
Monish Darda is CTO and co-founder of Icertis, an
international provider of contract lifecycle management software.
Professor Dr. Rolf Schwartmann heads the research center for
media law at the Cologne University of Applied Sciences, is chairman of the
Society for Data Protection and Data Security (GDD) e.V., and is co-editor of
the specialist journal "Recht der Datenverarbeitung" (RDV).