Sunday, July 20, 2025

Meta Says It Won't Sign European Union's General-Purpose AI Code of Practice

General-purpose AI (GPAI) refers to AI models that can be applied to a wide range of tasks and are not limited to a specific domain or function. These models are typically trained on massive datasets and can be adapted to various downstream tasks, making them highly versatile. Examples include large language models (LLMs) like GPT-4, image generation models like DALL-E, and models like Google's BERT, all of which can be used for tasks ranging from text generation and translation to image creation and question answering.

Here's a more detailed look at examples and characteristics:

Examples of General-Purpose AI:

Large Language Models (LLMs): Models such as GPT-4 can generate human-quality text, translate languages, write different kinds of creative content, and answer questions in an informative way.

Image Generation Models: Models like DALL-E can create images from text descriptions, enabling a wide range of creative applications.

Foundation Models: This term is often used interchangeably with GPAI and refers to AI models trained on vast datasets that can be adapted to various downstream tasks through fine-tuning.

Natural Language Processing (NLP) models: Models like BERT, while often associated with specific tasks, can also be applied to a wide range of NLP tasks, including sentiment analysis, named entity recognition, and question answering (see the sketch after this list).
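
To make the "one model, many tasks" idea concrete, here is a minimal sketch of reusing pretrained models across several unrelated NLP tasks. It assumes the Hugging Face transformers library and its default pretrained pipelines, which are my illustrative choices and not anything named in this post:

from transformers import pipeline

# Sentiment analysis with a default pretrained model.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The new code of practice is surprisingly readable."))

# Extractive question answering over a supplied context.
qa = pipeline("question-answering")
print(qa(question="When do the rules apply?",
         context="Rules on general-purpose AI will be effective as of Aug. 2."))

# Open-ended text generation.
generator = pipeline("text-generation")
print(generator("General-purpose AI models can", max_new_tokens=20))

The point is not any particular output, but that the same pretrained-model machinery handles three unrelated tasks with only the task name changed.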

 

Key Characteristics of GPAI:

 

Versatility: GPAI models can be used for a wide variety of tasks, not just one specific function.

Adaptability: They can be adapted to new tasks through fine-tuning or other techniques (a minimal sketch follows this list).

Large Scale: GPAI models are typically trained on large datasets and have a large number of parameters.

Potential for Systemic Risk: Because of their wide applicability, flaws in GPAI models can have far-reaching consequences across many sectors.
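
As an illustration of adaptability, the sketch below freezes a pretrained backbone and trains only a small task-specific head, which is one common fine-tuning pattern. It assumes PyTorch and uses a toy stand-in backbone and random data, since no concrete model or dataset is specified here:

import torch
import torch.nn as nn

class FineTunedClassifier(nn.Module):
    """Wraps a frozen pretrained encoder and adds a trainable task head."""
    def __init__(self, backbone: nn.Module, hidden_size: int, num_labels: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                 # keep general-purpose weights frozen
        self.head = nn.Linear(hidden_size, num_labels)  # new task-specific layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            features = self.backbone(x)             # reuse pretrained representations
        return self.head(features)                  # only this layer is trained

# Placeholder encoder; in practice this would be a BERT-style pretrained model.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
model = FineTunedClassifier(backbone, hidden_size=64, num_labels=2)

optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random tensors standing in for labeled task examples.
inputs = torch.randn(8, 128)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"toy fine-tuning loss: {loss.item():.4f}")

Only the head's parameters receive gradient updates here, which is why adapting a general-purpose model to a narrow task can be far cheaper than training one from scratch.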

 

The EU AI Act and GPAI:

The EU AI Act recognizes the potential impact of GPAI models and includes specific regulations for them, particularly those with high-impact capabilities that could pose systemic risks. These regulations aim to ensure that GPAI models are developed and used responsibly.

 

“Meta Platforms won't sign the European Union's code of practice for general-purpose artificial intelligence because it adds uncertainty and goes beyond the scope of AI legislation in the bloc, Chief Global Affairs Officer Joel Kaplan said.

 

The European Commission, the EU's executive arm, last week published the final version of a code of practice for general-purpose AI that model providers can choose whether to sign. EU officials said the code included guidance on safety and security, transparency and copyright to help signatories comply with the bloc's wide-ranging legislation on AI.

 

EU lawmakers last year approved the AI Act, a law that bans certain uses of the technology, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk.

 

Rules on general-purpose AI will be effective for companies as of Aug. 2. The commission's AI Office, a body that oversees implementation of the law, will enforce rules on new AI models after a year, and two years later for existing models. Companies that breach the law risk fines of up to 7% of their annual global revenue.

 

"Europe is heading down the wrong path on AI," Kaplan wrote in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."

 

Thomas Regnier, spokesman for the European Commission, said AI model providers that opt not to sign the code would still have to comply with the AI Act and might be exposed to more regulatory scrutiny.

 

Meta is the latest tech giant to criticize the EU's push to regulate AI. Earlier this month, chief executives of European companies, including Mistral AI, ASML Holding and Airbus, signed a letter asking the commission to delay enforcement of the law, saying overlapping and complex regulations were preventing the EU from becoming an AI leader.

 

"We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said.

 

The announcement from Meta comes days after OpenAI said it would sign the code, subject to the current version's being formally approved by the AI board -- a body that includes representatives from each of the EU's 27 member states. "Signing the code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," the ChatGPT maker said last week.

 

The EU is seeking to catch up with the U.S. and China on AI and wants to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times the number installed in AI factories being set up currently.

 

OpenAI said it had submitted expressions of interest to take part in the process for the rollout of gigafactories in Europe.” [1]

 

1. Orru, Mauro. "Meta Says It Won't Sign European Union's AI Code of Practice." Wall Street Journal, Eastern edition, New York, N.Y., 19 July 2025, p. B9.
