"The biggest challenge with artificial intelligence is that we don't have enough yet. Regulation should aim to help solve this problem.
AI could turbocharge the many advanced economies grappling with slow productivity growth.
But the technology is still developing, and the European Union's heavy-handed AI rules have impeded progress there.
As the U.S. debates regulation, we should avoid those mistakes by following six principles:
First, balance benefits and risks. This may sound obvious, but many regulatory enthusiasts ignore the technology's benefits out of an overabundance of caution and instead support delaying AI until it is proven absolutely safe.
Cost-benefit analysis requires regulators to think not only about the risks of AI but also the risks from slower AI development, such as more cancer deaths because of delayed drug discovery, worse educational outcomes because students lack personalized digital tutors, more car accidents because of delays in self-driving cars, and worse climate change because of a slowdown in discovering better materials for grid-level battery storage.
Second, compare AI with humans, not with the Almighty. Yes, autonomous cars crash -- but how do they compare with human drivers? AI may show biases, but how do these stack up against human prejudices? In some cases, it might even be acceptable for AI to perform slightly worse than humans if it offers significant convenience and has greater potential for improvement over time, as we have seen with autonomous vehicles. AI is learning much faster than humans are, and the future gains this learning will generate belong on the benefit side of the ledger.
Third, address how existing regulations are hindering progress. The most obvious are permitting and other obstacles to the expansion of data centers and the power sources they will need. A bigger threat over time is the dozens of state laws regulating AI that have already been passed and the hundreds more that have been proposed.
To the degree possible, federal pre-emption that replaces this patchwork with a single national framework would help ensure the U.S. remains a digital single market -- unlike the fractured EU.
Fourth, where new regulation is warranted, AI should be overseen by existing domain-specific regulators rather than a new superregulator. We don't have separate regulators for computers or linear regression; instead, our regulators specialize in areas where these are used, such as auto safety, stock trading, and medical devices. Existing regulators should focus on outputs and consequences in their domains, not on inputs and methods. This may require more AI expertise and flexibility within agencies. The Food and Drug Administration, for example, has developed procedures to approve AI-based devices that might otherwise run afoul of its standard rules.
Fifth, regulation must not become a moat protecting incumbents. History shows that well-intentioned rules can entrench existing powers, from medieval guilds to hospital certificate-of-need laws. In AI, we risk repeating this pattern. Centralized licensing bodies could easily become gatekeepers stifling competition.
A superregulator could be captured by big companies. When tech giants enthusiastically promote regulation, it should raise red flags. Our regulatory framework should nurture a competitive AI landscape, not solidify the dominance of a few early movers.
Sixth, not every problem caused by AI can be solved by regulating AI. I hope this technology will raise wages without hurting employment, with especially large increases for workers with lower-paying skills. Studies have found that less-able writers benefit most from AI-based writing suggestions. But bleak scenarios of swift technological change displacing workers or causing inequality are possible. The answer to this downside risk isn't to have regulators assess whether each technological advance is job-replacing or inequality-increasing. Rather, the solution lies in more conventional economic policies like training programs that connect people to jobs, wage subsidies, and a more progressive tax and transfer system to ensure that AI's benefits are shared broadly. As a professor, I wouldn't expect AI regulations to limit plagiarism -- it is on us to figure out how to adapt our teaching.
While some AI regulation is warranted, policymakers should proceed cautiously. Well-intentioned efforts could inadvertently slow progress while falling short of their goals. These six principles can help form a balanced and effective approach to regulating AI, one that harnesses its potential while addressing legitimate concerns.
---
Mr. Furman, a professor of the practice of economic policy at Harvard, was chairman of the White House Council of Economic Advisers, 2013-17." [1]
[1] Furman, Jason. "How to Regulate AI Without Stifling Innovation." Wall Street Journal, Eastern edition, New York, N.Y., 22 Nov 2024, p. A15.