The assertion that enterprises cannot use Anthropic for serious applications because of security risks is hotly debated in the AI industry, with credible arguments on both sides.
Anthropic's Enterprise Security and Data Privacy Controls
Anthropic offers specific, secure environments designed for enterprise-grade security and data privacy:
No Data Training by Default: According to Anthropic's privacy policy, data from Team and Enterprise accounts is not used to train its models. The trade-off is that your data cannot improve the models for your own use, either.
Zero Data Retention: For sensitive, regulated data, organizations can use Zero-Data-Retention (ZDR) terms, under which prompts and outputs are not stored by Anthropic after processing.
Cloud-Based Security: Enterprises often use Anthropic models via Amazon Bedrock, which offers dedicated infrastructure where data does not leave the customer's secure AWS environment, mitigating concerns about Anthropic accessing data during inference.
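To make the Bedrock pattern concrete: a request to a Claude model on Amazon Bedrock is sent and processed inside the customer's own AWS account. The sketch below builds the Anthropic "messages" request body that Bedrock expects; the model ID and the invoke_model call (which needs AWS credentials, so it is shown only as a comment) are assumptions for illustration, not a definitive setup.

```python
import json

def build_claude_bedrock_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON request body for Anthropic models on Amazon Bedrock.

    Bedrock's Anthropic integration uses the "messages" format and
    requires the anthropic_version field shown below.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_bedrock_body("Summarize our data-retention policy.")

# The actual call runs inside your AWS environment and requires credentials,
# so it is shown as a comment (boto3 client name and model ID are assumptions):
#
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=body,
# )
```

Because the request never traverses Anthropic's own infrastructure, the prompt and output stay subject to the customer's existing AWS security controls.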
The Challenge with Local Training
The core of the argument—that customers cannot train models on their own local hardware—is technically accurate for proprietary models like Claude.
Proprietary Model Constraint: Anthropic's models are hosted in the cloud. Enterprises wishing to train models on their own local hardware (on-premise) generally must rely on open-source alternatives, as Anthropic does not provide the model weights for local hosting.
Fine-Tuning Availability: While fine-tuning is now available for Claude 3 Haiku, it is performed in the cloud, exclusively via Amazon Bedrock.
The Role of Open Source Models
The argument that "open source models have a big future" aligns with the trend toward local data control. For enterprises requiring complete data sovereignty, models like Llama (Meta) or Mistral can be run entirely on-premise or in private clouds.
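For comparison, an on-premise deployment keeps the entire request path on local hardware. Local serving stacks such as Ollama and vLLM expose an OpenAI-compatible HTTP endpoint on localhost; the sketch below builds such a chat request using only the standard library. The endpoint URL, port, and model name are assumptions, and the network call itself is left as a comment since it needs a running local server.

```python
import json
import urllib.request

# Assumption: Ollama's default local endpoint; adjust for your own server.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_local_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat request for a locally hosted model.

    Because the endpoint is on localhost, no prompt data leaves the machine.
    """
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_local_chat_request("llama3", "Classify this contract clause.")

# Sending the request requires a local model server to be running,
# so the call is commented out here:
#
# with urllib.request.urlopen(req) as resp:
#     answer = json.loads(resp.read())
```

This is the "absolute data control" trade-off in miniature: the enterprise gains sovereignty over the request path, but also takes on the hardware, serving, and model-maintenance burden that a managed provider would otherwise carry.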
Conclusion
While Anthropic's cloud-based model restricts local hardware training, the company provides strict data privacy agreements (no training on data) and zero-data retention options specifically designed for enterprise security. The choice between Anthropic (for high-level performance with managed security) and open source (for absolute data control) depends on an enterprise’s specific risk tolerance and infrastructure capabilities.
Now let’s switch to the good guys:
“Reflection, a startup backed by chip maker Nvidia that is leading an effort to create freely available U.S. AI systems, is in talks to raise $2.5 billion at a valuation of $25 billion, according to people familiar with the matter.
The company is one of a handful of Nvidia-linked startups that are seeking to build a network of "open source" AI models, which businesses, labs and universities can use and repurpose according to their needs.
The deal would add firepower to Reflection, which is central to Nvidia's efforts to create an open-source ecosystem that can run on its chips and counter the growing and increasingly sophisticated offerings in China.
A spokesperson for Reflection declined to comment.
JPMorgan Chase is in talks to participate in the round through its newly formed Security and Resiliency Initiative, which was created in December to back U.S. companies in industries critical to the economy and national security, the people said. The bank said at the time that it planned to invest as much as $10 billion in venture-backed startups through the initiative.
Disruptive, an earlier investor in Reflection, is also expected to invest.
Reflection has developed close ties with Nvidia, which invested about $800 million in a previous funding round that valued the young startup, which has yet to generate meaningful revenue, at $8 billion, The Wall Street Journal has reported.
The $25 billion valuation for Reflection represents its value before the $2.5 billion investment, the people said.
The deal is part of a flurry of investments in neolabs, which are giving priority to long-term research and building new AI models over short-term profits.
Nvidia has been introducing Reflection to potential customers. That includes foreign governments which plan to develop homegrown AI infrastructure that gives them power over how the technology is used.
Reflection recently committed several billion dollars to build models customized for the Korean language in a deal with South Korean conglomerate Shinsegae Group. Thousands of Nvidia chips will power the data center supporting the project. Reflection plans to make more deals like this one, with the goal of becoming the open model used by sovereign clouds in U.S. allies around the world, the people familiar with the matter said. AI boosters have taken to calling such systems "sovereign AI."
Investors have described Reflection as the "DeepSeek of the West" -- an alternative to the open-source models offered by Chinese companies.
U.S. AI labs haven't given priority to open-source models.
"Open models are Trojan horses for the infrastructure they bring with them," Reflection Chief Executive Misha Laskin said in an interview earlier this month.
The Financial Times earlier reported some details of the investment.” [1]
1. Jin, Berber; Clark, Kate. "Nvidia-Backed Startup Eyes $25 Billion Value." Wall Street Journal, Eastern edition; New York, N.Y., 26 Mar 2026: B1.