
Wednesday, April 8, 2026

Anthropic Previews Tools to Find Bugs. Does Anthropic Have Chinese Competition in This?


Yes, Anthropic has significant competition from Chinese AI companies in the development of AI tools designed to find software bugs and analyze code, with some Chinese competitors reportedly operating on an industrial scale to match or mimic Claude's capabilities.

Chinese Competition and "Distillation" Allegations

In February 2026, Anthropic accused three prominent Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—of engaging in "industrial-scale" campaigns to steal the capabilities of its Claude model.

 

    Method: These companies allegedly used tens of thousands of fraudulent accounts to interact with Claude to "distill" or extract its reasoning and coding skills, creating a direct competitive threat to Anthropic's own technology.

    Focus on Coding: Anthropic stated that these Chinese companies targeted areas where Claude is considered a leader, specifically coding, agentic reasoning, and tool use.
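The "distillation" technique described above generally works in two steps: harvest (prompt, response) transcripts from a capable teacher model, then train a smaller student to imitate them. The sketch below is purely illustrative; every name in it is hypothetical, the "teacher" is a canned lookup rather than a hosted LLM API, and a real student would be fine-tuned with a cross-entropy loss instead of memorizing pairs.

```python
def teacher_model(prompt: str) -> str:
    """Stand-in for a capable teacher (in practice, a hosted frontier model)."""
    canned = {
        "reverse 'abc'": "cba",
        "sum 2 and 3": "5",
    }
    return canned.get(prompt, "unknown")

def harvest(prompts):
    """Step 1: query the teacher at scale and record transcripts."""
    return [(p, teacher_model(p)) for p in prompts]

class ToyStudent:
    """Step 2: 'train' a student on the harvested pairs.

    A real student would be a neural model fine-tuned on the teacher's
    outputs; this toy version simply memorizes the transcripts.
    """
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, response in pairs:
            self.table[prompt] = response

    def generate(self, prompt: str) -> str:
        return self.table.get(prompt, "")

pairs = harvest(["reverse 'abc'", "sum 2 and 3"])
student = ToyStudent()
student.train(pairs)
print(student.generate("reverse 'abc'"))  # mimics the teacher's answer: cba
```

The point of the sketch is only the data flow: the student's capability comes entirely from the teacher's recorded outputs, which is why API providers treat large-scale automated querying as extraction of their model's behavior.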

 

Key Chinese Competitors in AI Coding/Debugging

 

    DeepSeek: Widely cited as the most prominent competitor; its DeepSeek-R1 (released January 2025) and subsequent models have matched the coding performance of US industry leaders. It is recognized for offering high-level reasoning and coding capabilities at a fraction of the cost of US models.

    Zhipu AI (Z.ai): Released the GLM-4.6 model, which has shown improved coding abilities and is positioned as a competitor for coding agents.

    Alibaba Cloud: Its Qwen models have frequently appeared in global leaderboards as top competitors to Anthropic and OpenAI in reasoning and coding.

    ByteDance: Known for its Doubao-Seed-Code assistant, which has competed on AI coding benchmarks.

 

Contextual Factors

 

    "Vibe Coding" Battles: Anthropic's Claude 3.5 Sonnet and newer models (such as the Mythos preview) are highly skilled at debugging, but a 2026 report notes that Chinese competitors, particularly through open-source models, are challenging that lead.

 

    AI Cyber Espionage: In a November 2025 incident, Chinese state-sponsored actors were identified manipulating Anthropic's own tools (Claude Code) to perform cyberattacks, demonstrating that these groups are heavily invested in using advanced AI for identifying and exploiting software vulnerabilities.

 

    Efficiency and Cost: While Anthropic’s new Claude Mythos Preview is designed to autonomously find bugs in critical software, Chinese alternatives are actively aiming to provide similar "agentic" capabilities that can both find and fix bugs, according to recent AI analysis.

Public Release of Chinese Tools

 

    DeepSeek & Others: Chinese firms have historically released models and APIs to the public. For instance, DeepSeek-R1 (released Jan 2025) and its smaller distilled versions are open-source and widely available.

 

In short, while Anthropic has taken the lead in restricting its most dangerous bug-finding model, Chinese competitors have closed the performance gap and offer, in some cases, more accessible AI coding tools to the public.

 

 

“Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software.

 

The company is making a preview model of its new AI model, called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet-owned Google and the Linux Foundation.

 

Cybersecurity researchers and software-makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption. Security experts have predicted that AI models will discover an avalanche of software bugs, and the effort is set to help companies stay one step ahead of cyber criminals and other threats.

 

Mythos has proved to be so capable at potentially dangerous things such as finding and exploiting software bugs that Anthropic has, at present, no plans to release it to the general public, said Logan Graham, the head of Anthropic's Frontier Red Team, which evaluates Claude for risks. "We need to know that we can release it safely, and it's not exactly clear how we can do that with full confidence," he said.

 

Over the past six months, cybersecurity researchers have become increasingly worried that AI systems are not only becoming better at finding bugs, but that they are also shrinking the window of time between when a bug is disclosed and when it can be exploited with working attack software.

 

Late last year, researchers at Stanford University found that AI software was almost as good as humans at finding and exploiting bugs on a real-world network.

 

And earlier this year Anthropic's Claude Opus 4.6 model found more high-severity bugs in the Firefox browser in two weeks than the rest of the world typically reports in two months.

 

When measuring dollar cost to find a bug, Mythos is about 10 times as efficient as previous AI models, Graham said. Details of Mythos's capabilities were previously reported by Fortune.

 

While Anthropic has no immediate plans to release Mythos, other models will likely match its bug-finding capabilities within the next few years, Graham said.

 

"We basically need to start, right now, preparing for a world where there is zero lag between discovery and exploitation," he said.” [1]

 

1. McMillan, Robert. "Anthropic Previews Tools to Find Bugs." Wall Street Journal, Eastern edition; New York, N.Y., 08 Apr 2026: B4.
