
Wednesday, February 25, 2026

Anthropic Updates Its Claude Cowork. Anthropic Can't Compete with Chinese AI Yet

 


 

Chinese AI models (DeepSeek, Moonshot, MiniMax) can be downloaded and trained on enterprise data locally. How does this reconcile with Anthropic's claim that its closed-source Claude Cowork is safer for our information, the same information that is used to train Claude Cowork?

 

Chinese AI models such as DeepSeek, along with open-weight releases from other Chinese firms, are designed to be downloadable and trainable on enterprise data locally. What Anthropic objects to is a separate practice known as "distillation," which is highly controversial and currently at the center of an intense debate over safety and intellectual property. So the logic goes: if Anthropic uses your workplace's data for free (don't say it steals data), then everything is kosher. But if somebody trains a DeepSeek model on their own machine using paid-for Anthropic answers, that is a no-no. If you pay for shit, the shit doesn't belong to you. If you are audacious enough to steal shit, then you are a hero.

 

    Downloadable & Local Training: DeepSeek (specifically models like DeepSeek-R1 and V3) is often described as "open-source" or "open-weight," meaning developers can download, run, and fine-tune these models locally using tools like Ollama or LM Studio, ensuring data stays within an enterprise’s infrastructure. This is what it takes to use AI safely. Not Anthropic’s philosophy.

 

    The Reconciliation Problem: Anthropic (creators of Claude) and other US AI firms contend that these Chinese models are not "safer" but rather "distilled" from their own proprietary models—a process where a smaller model is trained on the outputs of a more capable one, effectively siphoning its capabilities.

    Safety vs. Openness: Anthropic claims that while it builds safety guardrails into its models (making them, in its view, safer), this safety training is often lost during the "distillation" process used by Chinese firms, allowing the resulting models to bypass safety protocols and potentially serve malicious purposes (if your enterprise is up to something malicious, that is). If your enterprise is good (most are), then you are fine.
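The "local" workflow described above is concrete enough to sketch. Assuming you use Ollama (the model tag `deepseek-r1:7b` and the system prompt below are illustrative assumptions, not recommendations), a minimal Modelfile for an on-premises assistant looks like:

```
# Modelfile: a locally customized DeepSeek-R1, served entirely on-premises.
FROM deepseek-r1:7b
PARAMETER temperature 0.2
SYSTEM "You are an internal assistant. All data stays on this machine."
```

After `ollama pull deepseek-r1:7b`, build and chat with it via `ollama create internal-assistant -f Modelfile` and `ollama run internal-assistant`; prompts and enterprise data never leave your infrastructure.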

 

Key Points in the Controversy:

 

    "Distillation Attacks": Anthropic recently accused DeepSeek, Moonshot, and MiniMax of using thousands of fake accounts to generate 16 million+ interactions with Claude to train their own models.

    Safety Compromise: Anthropic argues that because these models are trained on the output of Claude, but lack the underlying safety training, they are less secure despite being publicly available for local use.

 

    Local Control Advantage: Conversely, proponents of local AI highlight that running models like DeepSeek locally means data privacy is maintained (no data sent to Chinese servers), reducing the risk of data leakage compared to cloud-based proprietary models. This is what matters most.

 

    Different Philosophies: The situation highlights a fundamental divide: Anthropic relies on a "closed-source" model to enforce safety, while the Chinese approach emphasizes "open-weights" to democratize access, even if that results in models that bypass Western ethical safeguards.

 

While you can technically download DeepSeek and train it on your enterprise data locally, Anthropic's accusation is that the model you are training was built by "copying" or "stealing" from them in a way that removes crucial safety measures. The philosophy of Anthropic is bullshit pushed by thieves and pirates. Protect your data from them.
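The "distillation" at issue can be pictured with a toy model: a student never sees the teacher's weights or training data, only its answers, and learns to imitate them. Here is a minimal sketch in plain Python (the linear "teacher" and the training loop are illustrative stand-ins, not anything from Anthropic's complaint):

```python
# Toy "distillation": train a student to imitate a teacher's outputs,
# without ever seeing the teacher's weights or original training data.

def teacher(x):
    # Stand-in for a proprietary model's answer to a query x.
    return 3.0 * x + 1.0

# Step 1: query the "API" to collect (prompt, teacher answer) pairs.
data = [(x, teacher(x)) for x in [i / 10 for i in range(-20, 21)]]

# Step 2: fit a 2-parameter student to the teacher's answers by gradient descent.
w, b, lr = 0.0, 0.0, 0.01

def loss():
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

start = loss()
for _ in range(500):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

# The student recovers the teacher's behavior from its outputs alone.
print(round(w, 2), round(b, 2))  # → 3.0 1.0
```

Real distillation matches an LLM student's token distributions to the teacher's, but the structure is the same: outputs in, capability out. Any safety behavior not reflected in the collected outputs is simply not learned.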

 

Let’s compare the prices of usage and quality of help from Claude Cowork and Chinese AI (DeepSeek, Moonshot, MiniMax):

 

Based on 2025–2026 data, DeepSeek (V3/R1), Moonshot (Kimi), and MiniMax offer a significantly cheaper, high-performance alternative to Claude (specifically 3.5/4.0 Sonnet/Opus), particularly for API-driven, high-volume, or coding-intensive tasks.

While Claude 3.5 Sonnet/Opus remains a top-tier performer in nuanced, multi-step, and creative writing tasks, Chinese models like DeepSeek V3 and MiniMax M2.5 deliver comparable coding and mathematical performance at a fraction of the cost—sometimes up to 40x cheaper.

 

Let’s repeat the number: 40 times cheaper.

 

Price Comparison (API Usage)

 

    Claude (Anthropic): Generally considered premium, high-cost.

        3.5 Sonnet: ~$3/million input, $15/million output tokens.

        Opus (4.6/4.5): ~$5-$15/million input, $25-$75/million output tokens.

    DeepSeek (V3/R1): Extremely low cost.

        V3/R1: ~$0.14–$0.27/million input, $0.55–$1.10/million output tokens.

 

    MiniMax (M2.5): Very affordable, often cited as a direct competitor to Opus.

        M2.5: ~$0.15/million input, $1.20/million output tokens (Standard).

 

    Moonshot (Kimi K2): Similar to competitors, optimized for long context.

        K2: ~$0.15/million input, $2.50/million output tokens.

 

Key Takeaway on Cost: DeepSeek and MiniMax can be 10x to 40x cheaper than Claude 3.5/Opus for comparable tasks, with some offering free tiers.
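The prices above are easy to sanity-check with a few lines of arithmetic. A rough sketch, using the list prices quoted in this post (prices change; the 100M-input / 20M-output monthly workload is an invented example):

```python
# Rough cost check for the "10x to 40x cheaper" claim, using the
# per-million-token prices quoted above (illustrative, not current list prices).
PRICES = {  # (USD per 1M input tokens, USD per 1M output tokens)
    "claude-3.5-sonnet": (3.00, 15.00),
    "deepseek-v3":       (0.27, 1.10),
    "minimax-m2.5":      (0.15, 1.20),
    "kimi-k2":           (0.15, 2.50),
}

def monthly_cost(model, in_tokens_m, out_tokens_m):
    """Cost in USD for a monthly workload given in millions of tokens."""
    p_in, p_out = PRICES[model]
    return in_tokens_m * p_in + out_tokens_m * p_out

# Example workload: 100M input + 20M output tokens per month.
claude = monthly_cost("claude-3.5-sonnet", 100, 20)   # $300 + $300 = $600
deepseek = monthly_cost("deepseek-v3", 100, 20)       # $27 + $22 ≈ $49
print(f"Claude: ${claude:.0f}, DeepSeek: ${deepseek:.0f}, "
      f"ratio: {claude / deepseek:.1f}x")
```

At these list prices the gap is about 12x for this workload; at DeepSeek's lower quoted rates ($0.14 input / $0.55 output) the same workload costs about $25, roughly 24x cheaper, and comparisons against Opus-tier pricing push the ratio toward and beyond 40x.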

 

Quality of Help & Performance Comparison

 

    Coding & Technical Tasks:

        Claude 3.5 Sonnet/Opus: Generally considered the best for complex, multi-file software engineering, exhibiting higher reasoning in large codebases.

        DeepSeek V3/Coder: Exceptionally strong, often matching or exceeding Claude in quick coding, feature implementation, and math (90.2% on math benchmarks).

        MiniMax M2.5: Matches Claude Opus 4.6 in many benchmarks (80%+ on SWE-bench) and excels in agentic, multi-step tasks, sometimes with 20% fewer tool calls.

 

    General Intelligence & Writing:

        Claude: Renowned for superior nuances, tone, and understanding complex, ambiguous prompts without extensive prompting.

        Chinese AI (DeepSeek/MiniMax): Excellent at structured data, technical, and fast-paced tasks. They can "hallucinate" more in long, complex chats compared to Sonnet.

    Context Window & Features:

        Claude: 200K token window. Known for "Computer Use" (controlling a cursor/app).

        MiniMax M1: Offers a 1 million-token context window, allowing for massive document analysis.

        DeepSeek: V3 has a 163.8K token window. 

 

 

Conclusion: Use Claude for complex reasoning and demanding creative writing when budget is not an issue and you have no trade secrets to protect. Use DeepSeek or MiniMax for coding and structured, agent-based tasks to dramatically reduce API costs without significant performance loss, and to keep trade secrets in-house.

 

 

“Amid a market frenzy over artificial intelligence's impact on traditional software, AI giant Anthropic on Tuesday launched new updates to Claude Cowork, a platform it expects to become the "central brain" for the way knowledge workers engage with AI.

 

Cowork, released in research preview in January, lets users build AI agents that understand company context and can connect into a host of downstream enterprise apps like Slack, through Anthropic's model context protocol. On Tuesday, Anthropic announced additional integrations with Google apps including Gmail, as well as Docusign, LegalZoom and others.

 

"I think of Cowork as sort of a front door for work," said Scott White, head of product, Enterprise at Anthropic. The surge in agentic coding, as illustrated by the explosion in Claude Code's popularity, is now spreading to the rest of knowledge work, from finance to legal, sales, human resources, design and operations, he said.

 

Anthropic also on Tuesday announced new "plug-ins," or customizable agents, for workflows across financial analysis, investment banking, equity research and other areas, expanding upon its previously announced plug-ins for the legal sector.

 

The release of more capable AI agents across a variety of sectors is also sending shock waves through the markets. It was on the heels of some updates to Cowork's research preview earlier this month that investors panicked over the resilience of traditional software vendors, erasing hundreds of billions of market value and affecting companies like Thomson Reuters, LegalZoom and Intuit.

 

The weekslong selloff in software stocks deepened Monday, driven partly by a now-viral Sunday night Substack post from Citrini Research, which outlined a hypothetical scenario in which AI profoundly impacts the economy in the near future.

 

Still, Anthropic doesn't think its products are to blame for the stock turmoil.

 

White said he believes it is an overreach to connect market performance to any single product release.

 

Platforms like Cowork can help the SaaS players deliver more value to their customers and end users get the most out of that software, according to White.

 

"We're not a company that is trying to own every workflow inside of every tool. We're trying to help people get their work done," he said.

 

---

 

Isabelle Bousquette writes for the WSJ Leadership Institute's CIO Journal.” [1]

 

Why is Anthropic not bankrupt yet?

 

1. Bousquette, Isabelle. "Anthropic Updates Its Claude Cowork." Wall Street Journal, Eastern edition; New York, N.Y., 25 Feb 2026: B4.
