

Saturday, October 12, 2024

This AI Godfather Says AI Is Dumber Than a Pet Cat --- Yann LeCun, an NYU professor and senior researcher at Meta, says warnings about AI's existential peril are 'complete B.S.'

 

"Yann LeCun helped give birth to today's artificial-intelligence boom. But he thinks many experts are exaggerating its power and peril, and he wants people to know it.

While a chorus of prominent technologists tell us that we are close to having computers that surpass human intelligence -- and may even supplant it -- LeCun has aggressively carved out a place as the AI boom's best-credentialed skeptic.

On social media, in speeches and at debates, the New York University professor and Meta Platforms AI guru has battled with the boosters and Cassandras who talk up generative AI's superhuman potential, from Elon Musk to two of LeCun's fellow pioneers, who share with him the unofficial title of "godfather" of the field. They include Geoffrey Hinton, a friend of nearly 40 years who on Tuesday was awarded a Nobel Prize in physics, and who has warned repeatedly about AI's existential threats.

LeCun, 64 years old, thinks that today's AI models, while useful, are far from rivaling the intelligence of our pets, let alone us. When I ask whether we should be afraid that AIs will soon grow so powerful that they pose a hazard to us, he quips: "You're going to have to pardon my French, but that's complete B.S."

Sitting in a conference room inside one of Meta's satellite offices in New York City, he exudes warmth and genial self-possession, and delivers his barbed opinions with the kind of grin that makes you feel as if you are in on the joke.

His body of work, and his perch atop one of the most accomplished AI research labs at one of the biggest tech companies, give weight to LeCun's critiques.

Born just north of Paris, he became intrigued by AI in part because of HAL 9000, the rogue AI in Stanley Kubrick's 1968 sci-fi classic "2001: A Space Odyssey." After earning a doctorate from the Sorbonne, he worked at the storied Bell Labs, where everything from transistors to lasers was invented. He joined NYU as a professor in 2003 and became director of AI research at what was then Facebook a decade later.

In 2019, LeCun won the A.M. Turing Award, the highest prize in computer science, along with Hinton and Yoshua Bengio. The award, which led to the trio being dubbed AI godfathers, honored them for work foundational to neural networks, the multilayered systems that underlie many of today's most powerful AI systems, from OpenAI's chatbots to self-driving cars.

Today, LeCun continues to produce papers at NYU along with his Ph.D. students, while at Meta, as chief AI scientist, he oversees one of the best-funded AI research organizations in the world. He meets and chats often over WhatsApp with Chief Executive Mark Zuckerberg, who is positioning Meta as the AI boom's big disruptive force against other tech heavyweights from Apple to OpenAI.

LeCun has publicly disagreed with Hinton and Bengio over their repeated warnings that AI is a danger to humanity.

Bengio says he agrees with LeCun on many topics, but they diverge over whether companies can be trusted to make sure that future superhuman AIs aren't used maliciously by humans and don't develop malicious intent of their own.

"I hope he is right," says Bengio.

LeCun thinks AI is a powerful tool. Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models.

"The impact on Meta has been really enormous," he says.

At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent -- and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous.

If LeCun's views are right, it spells trouble for some of today's hottest startups, not to mention the tech giants pouring tens of billions of dollars into AI. Many of them are banking on the idea that today's large language model-based AIs, like those from OpenAI, are on the near-term path to creating so-called "artificial general intelligence," or AGI, that broadly exceeds human-level intelligence.

LeCun says such talk is likely premature. When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X.

He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta.

Alexander Rives, a former Ph.D. student of LeCun's who has since founded an AI startup, says LeCun's provocations are well thought out. "He has a history of really being able to see gaps in how the field is thinking about a problem, and pointing that out," Rives says.

LeCun thinks artificial general intelligence is a worthy goal.

"In the future, when people will talk to their AI system, to their smart glasses or whatever else, we need those AI systems to basically have human-level characteristics, and really have common sense, and really behave like a human assistant," he says.

But creating an AI this capable could easily take decades, he says -- and today's dominant approach won't get us there.

The generative-AI boom has been powered by large language models and similar systems that train on oceans of data to mimic human expression. As each generation of models has become more powerful, some experts have concluded that simply pouring more chips and data into developing future AIs will make them ever more capable, ultimately matching or exceeding human intelligence.

LeCun thinks that the problem with today's AI systems is how they are designed, not their scale. No matter how many GPUs tech giants cram into data centers around the world, he says, today's AIs aren't going to get us artificial general intelligence.

His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.

The large language models, or LLMs, used for ChatGPT and other bots might someday have only a small role in systems with common sense and humanlike abilities, built using other techniques and algorithms.

Today's models are really just predicting the next word in a text, he says. But they're so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they're merely regurgitating information they've already been trained on.

"We are used to the idea that people or entities that can express themselves, or manipulate language, are smart -- but that's not true," says LeCun. "You can manipulate language and not be smart, and that's basically what LLMs are demonstrating."" [1]

1. Mims, Christopher. "This AI Godfather Says AI Is Dumber Than a Pet Cat: Yann LeCun, an NYU professor and senior researcher at Meta, says warnings about AI's existential peril are 'complete B.S.'" Wall Street Journal, Eastern edition, New York, N.Y., 12 Oct. 2024: B.5.
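
LeCun's remark that today's models "are really just predicting the next word in a text" can be made concrete with a toy sketch. The snippet below is only an illustration of the next-word-prediction idea, not of how Meta's or OpenAI's models actually work (those are large neural networks trained on vast corpora); the tiny corpus and the predict_next helper here are invented for this example.

    # Toy illustration of next-word prediction: count which word most often
    # follows each word in a tiny made-up corpus, then "generate" text by
    # always picking the most frequent successor. Real LLMs use neural
    # networks and vastly more data, but the training objective -- predict
    # the next token -- is the same in spirit.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . "
        "the cat chased the mouse . "
        "the dog sat on the rug ."
    ).split()

    # Count, for each word, how often each other word follows it.
    successors = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None."""
        counts = successors.get(word)
        return counts.most_common(1)[0][0] if counts else None

    # "Generate" a short continuation from a one-word prompt.
    word, output = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the cat"

Even this crude bigram counter produces fluent-looking short phrases, which is a miniature version of LeCun's argument: fluency at next-word prediction is not, by itself, evidence of reasoning.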

 
