Friday, February 20, 2026

AI Can Help Defend Against Cyberattacks


“When Anthropic recently disclosed a cyber espionage campaign that used artificial intelligence to execute much of the attack autonomously, headlines framed it as a new, uncontrollable threat. But what made the incident most notable wasn't that the attackers used AI to add automation and speed. That's to be expected. It was that Anthropic used AI to detect and stop the campaign. In this case, the attackers were faster. That must change.

 

Cybersecurity is becoming a contest between AI systems used by attackers and their targets. The decisive factor is which side has richer data and better models and can act at machine speed.

 

It's harder to defend a network than to attack it. A defender must monitor every virtual door and window; an attacker need only find one that is ajar. In an AI era when attackers can jiggle every virtual doorknob continuously, human defenders don't stand a chance. What matters is how fast we deploy digital defenders to jiggle those doorknobs -- and tighten them -- beforehand.

 

Consider two attacks in recent years: SolarWinds and Colonial Pipeline. Imagine how AI could have made them more disruptive -- or how AI could give defenders the upper hand, by addressing vulnerabilities beforehand and enabling faster recovery after.

 

In the attack on SolarWinds Corp., someone injected malicious code into the software maker's updates, compromising an estimated 18,000 customers. The attackers inserted the bad code at a point in the software build process that is hard to detect.

 

Today AI can do deep, automated audits of software to detect malicious code that a human reviewer could miss amid millions of lines.
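The article does not describe how such audits work. As a minimal, hypothetical illustration (not the method used in the SolarWinds response), a rule-based scan over a program's syntax tree shows the kind of check an AI auditor would automate and extend at scale:

```python
import ast

# Hypothetical deny-list of call names commonly flagged in automated audits.
SUSPICIOUS_CALLS = {"eval", "exec", "compile"}

def audit_source(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) for suspicious calls in Python source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag direct calls to deny-listed builtins, e.g. eval(...).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = 1\ny = eval('x + 1')\n"
print(audit_source(sample))  # → [(2, 'eval')]
```

An AI model would go beyond fixed pattern lists, flagging code whose behavior looks anomalous even when no known-bad construct appears.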

 

In the Colonial Pipeline attack, hackers used a compromised password to gain access to the pipeline's network, causing fuel shortages and panic buying at gasoline stations. AI has made this type of attack more dangerous because it can generate thousands of personalized phishing emails to collect more passwords faster. But today Colonial could also use AI to identify systems with easily compromised passwords and prevent the attack.

 

AI can identify an account acting outside its normal pattern and have AI agents block the account's access immediately.
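The core of that detection step is a baseline-and-deviation check. A minimal sketch, with hypothetical numbers and a simple z-score rule standing in for a learned model of "normal" account behavior:

```python
from statistics import mean, pstdev

def is_anomalous(history: list[float], value: float, z_threshold: float = 3.0) -> bool:
    """Flag a measurement more than z_threshold std devs from the account's baseline."""
    mu = mean(history)
    sigma = pstdev(history)
    if sigma == 0:
        # No variation in history: anything different is anomalous.
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# Hypothetical baseline: an account's typical daily download volume (MB).
baseline = [40, 55, 48, 52, 60, 45, 50]
print(is_anomalous(baseline, 52))   # → False (within normal pattern)
print(is_anomalous(baseline, 900))  # → True  (block access and alert)
```

In practice the "block immediately" step is what AI agents add: the anomaly signal feeds an automated response that revokes the account's sessions at machine speed rather than waiting for a human analyst.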

 

The only question is whether adversaries will adopt AI faster than defenders are permitted or encouraged to. Some companies are already pushing to the cutting edge of AI defense -- providing examples others should follow.

 

OpenAI's Aardvark is an AI security researcher that continuously reviews code and suggests fixes before vulnerabilities can be exploited.

 

Cisco's Project CodeGuard is an open-source framework that embeds secure coding practices into workflows, guiding AI assistants to generate secure code.

 

Singapore developed Project Moonshot, an open-source framework to stress-test LLMs against attacks.

 

To encourage adoption of AI defense, the U.S. government must lead by example and modernize regulations that make it hard to train defensive AI models. The U.S. government is the world's largest purchaser of IT services. The Defense and Homeland Security departments should deploy AI defense systems and mandate that government contractors do so as well. Procurement power can accelerate adoption: If you want to sell software to the U.S. military, your code must be vetted by an AI agent. And if anything should be secure from cyberattacks, it should be software used in U.S. military operations.

 

AI isn't perfect and can make mistakes. The Pentagon and DHS should publicly share their data on AI cybersecurity and testing effectiveness. Training cyber defense models and using automation in security operations are hampered by some regulations and standards, such as the Health Insurance Portability and Accountability Act, as well as some U.S. state privacy laws and Europe's General Data Protection Regulation. GDPR requires human oversight for AI decisions that have significant effects, which increases the time it takes to contain breaches, particularly cross-border ones. Attackers aren't constrained by such limits; defenders can't afford to be either.

 

DHS should rapidly build out its AI Information Sharing and Analysis Center to fight malicious use of AI. The National Institute of Standards and Technology's centers to help U.S. companies thwart AI-enabled cyberattacks should become a central place for companies to continuously share data and refine models against evolving attack techniques, simulating attacks and fixing vulnerabilities.

 

The industry needs more transparency about AI models' cybersecurity. Many AI companies already publish performance rankings against objective measures. Insurance providers could help by rewarding companies that adopt an accepted benchmark with lower premiums.

 

The U.S. government could provide tools to help companies implement cyber best practices. The new NIST centers could include technical training for businesses, as well as forums to test the security of autonomous agents in no-fail networks such as those in power systems.

 

We are in an arms race. If we rely on human-speed defense against machine-speed attacks, we will lose. We must build a network of continuously learning, secure defensive agents that can detect, reason and react faster than any human.

 

---

 

Ms. Neuberger is a senior adviser to Andreessen Horowitz and a distinguished fellow at Stanford. She served as the White House's deputy national security adviser for cyber and emerging technology, 2021-25.” [1]

 

1. Neuberger, Anne. "AI Can Help Defend Against Cyberattacks." Wall Street Journal, Eastern edition, New York, N.Y., 20 Feb 2026: A15.
