Tuesday, September 5, 2023

Tech Leaders Are Divided On AI's Threat to Humanity.

"Artificial-intelligence pioneers are fighting over which of the technology's dangers is the scariest.

One camp, which includes some of the top executives building advanced AI systems, argues that its creations could lead to catastrophe. In the other camp are scientists who say concern should focus primarily on how AI is being implemented right now and how it could cause harm in our daily lives.

Dario Amodei, leader of AI developer Anthropic, is in the group warning about existential danger. He testified before Congress this summer that AI could pose such a risk to humankind. Sam Altman, head of ChatGPT maker OpenAI, toured the world this spring saying, among other things, that AI could one day cause serious harm or worse. And Elon Musk said at a Wall Street Journal event in May that "AI has a nonzero chance of annihilating humanity" -- shortly before launching his own AI company.

Altman, Musk and other top AI executives next week are expected to attend the first in a series of closed-door meetings about AI convened by U.S. Senate Majority Leader Chuck Schumer (D., N.Y.) to consider topics including "doomsday scenarios."

The other camp of AI scientists calls those warnings a science-fiction-fueled distraction -- or even a perverse marketing ploy. They say AI companies and regulators should focus their limited resources on the technology's existing and imminent threats, such as tools that help produce potent misinformation about elections or systems that amplify the impact of human biases.

The dispute is intensifying as companies and governments worldwide are trying to decide where to focus resources and attention in ways that maximize the benefits and minimize the downsides of a technology widely seen as potentially world-changing.

"It's a very real and growing dichotomy," said Nicolas Miailhe, co-founder of the Future Society, a think tank that works on AI governance and is working to bridge the divide. "It's the end of the month versus the end of the world."

For all the attention it has been getting, serious public discussion of AI's existential risk -- or "x-risk" as those most worried about it like to call it -- had until recently remained confined to a fringe of philosophers and AI researchers.

That changed after OpenAI's release of ChatGPT late last year and subsequent improvements that have delivered humanlike responses, igniting warnings that such systems could gain superhuman intelligence. Prominent researchers including Geoffrey Hinton, considered one of the godfathers of AI, have contended it contains a glimmer of humanlike reasoning. Hinton left his role at Alphabet's Google this year to more freely discuss AI's risks.

With existential risk warnings, "there's been a taboo that you'll be mocked and treated like a crazy person and it will affect your job prospects," said David Krueger, a machine-learning professor at the University of Cambridge.

Krueger helped organize a statement in May saying that extinction risk from AI was on par with the dangers of pandemics and nuclear war. It was signed by hundreds of AI experts, including top officials and researchers at Google, OpenAI and Anthropic.

"I wanted researchers to know that they're in good company," Krueger said.

Some in the field argue that there is a paradoxical upside for AI companies to emphasize the x-risk of the systems because it conveys a sense that their technology is extraordinarily sophisticated.

"It's obvious that these guys benefit from the hype still being fueled," said Daniel Schoenberger, a former Google lawyer who worked on its 2018 list of AI principles and now is at the Web3 Foundation." [1]

1. Schechner, Sam; Seetharaman, Deepa. "Tech Leaders Are Divided On AI's Threat to Humanity." Wall Street Journal, Eastern edition, New York, N.Y., 5 Sep 2023: B.1.