"A few weeks ago a Google engineer got a lot of attention for a dramatic claim: He said that the company's LaMDA system, an example of what's known in artificial intelligence as a large language model, had become a sentient, intelligent being.
Large language models like LaMDA or San Francisco-based OpenAI's rival GPT-3 are remarkably good at generating coherent, convincing writing and conversations -- convincing enough to fool the engineer.
But they use a relatively simple technique to do it: The models see the first part of a text that someone has written and then try to predict which words are likely to come next. If a powerful computer does this billions of times with billions of texts generated by millions of people, the system can eventually produce a grammatical and plausible continuation of a new prompt or question.
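For readers who want to see that prediction step concretely, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model as a stand-in (the model choice and the prompt are illustrative assumptions; neither LaMDA nor GPT-3 is publicly downloadable, but they rest on the same idea at vastly larger scale):

```python
# A minimal sketch of next-word prediction, the basic operation behind large
# language models. GPT-2 serves only as a small, publicly available stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The engineer claimed that the chatbot had become"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The single most likely next token, according to the model.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))

# Repeating that guess-and-append step yields a fluent continuation.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Nothing in the loop consults the world; the model only asks which words tend to follow which other words in the texts it was trained on.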
It's natural to ask whether large language models like LaMDA (short for Language Model for Dialogue Applications) or GPT-3 are really smart -- or just double-talk artists in the tradition of the great old comedian Prof. Irwin Corey, "The World's Greatest Authority." (Look up Corey's routines of mock erudition to get the idea.) But I think that's the wrong question. These models are neither truly intelligent agents nor deceptively dumb. Intelligence and agency are the wrong categories for understanding them.
Instead, these AI systems are what we might call cultural technologies, like writing, print, libraries, internet search engines or even language itself. They are new techniques for passing on information from one group of people to another. Asking whether GPT-3 or LaMDA is intelligent or knows about the world is like asking whether the University of California's library is intelligent or whether a Google search "knows" the answer to your questions. But cultural technologies can be extremely powerful -- for good or ill.
We humans are natural animists -- we see agency everywhere, in rivers and trees and clouds and especially in machines, as anyone who has cursed a recalcitrant dishwasher can testify. So we readily imagine that the new machine-learning technology has created new agents, intelligent or dumb, helpful or (more often) malign. People have started talking about "an AI" rather than "AI" -- as if it referred to a person rather than a computation.
Cultural technologies aren't like intelligent humans, but they are essential for human intelligence. Many animals can transmit some information from one individual or one generation to another, but no animal does it as much as we do or accumulates as much information over time. To paraphrase Isaac Newton, every new human can see so far because they rest on the shoulders of those who came before them. New technologies that make cultural transmission easier and more effective have been among the greatest engines of human progress.
Language itself is the original cultural technology, allowing one hunter to tell another where to find the game, or a grandmother to pass on a hard-won cooking technique to her granddaughter. Writing transformed culture once again; we could access the wisdom of grannies from hundreds of years earlier and hundreds of miles away. The printing press helped enable both the industrial revolution and the rise of liberal democracy. Libraries, and their indexes and catalogs, were essential for the development of science and scholarship. Internet search engines have made it even easier to find information.
Like these earlier technologies, large language models help access and summarize the billions of sentences that other people have written and use them to create new sentences. Other systems like OpenAI's DALL-E 2, which just produced a cover illustration for Cosmopolitan magazine, do this with the billions of images we create, too. The history of cultural technology is that we have become able to access the knowledge of more and more other minds, across greater gulfs of space and time, more and more easily, and the new AI systems are the latest step in that process.
But if so much of what we know comes from other people's language, doesn't something like GPT-3 really have all the intelligence it needs? Don't those billions of words encapsulate all human knowledge? What's missing?
Cultural transmission has two sides -- imitation and innovation. Each generation can use imitation to take advantage of the discoveries of previous ones, and large language models are terrific imitators. But there would be no point to imitation if each generation didn't also innovate. We go beyond the words of others and the wisdom of the past to observe the world anew and make new discoveries about it. And that is where even very young humans beat current AI.
In 1950, Alan Turing proposed what's now known as the classic "Turing test": If you couldn't tell the difference in a typed conversation between a person and a computer, the computer might qualify as intelligent. Large language models are getting close.
But Turing also proposed a more stringent test: For true intelligence, a computer should not only be able to talk about the world like a human adult -- it should be able to learn about the world like a human child.
In my lab we created a new online environment to implement this second Turing test -- a level playing field for children and AI systems. We showed 4-year-olds on-screen machines that would light up when you put some combinations of virtual blocks on them but not others; different machines worked in different ways. The children had to figure out how the machines worked and say what to do to make them light up. The 4-year-olds experimented, and after a few trials they got the right answer. Then we gave state-of-the-art AI systems, including GPT-3 and other large language models, the same problem. The language models got a script that described each event the children saw, and then we asked them to answer the same questions we asked the kids.
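The actual scripts given to the models aren't reproduced in this article, so the sketch below is a purely hypothetical illustration of the general shape of such a text-only version of the task: invented event descriptions fed to the same GPT-2 stand-in used above, standing in for querying a system like GPT-3 through its API. The trial wording, block colors and question are all assumptions made for illustration, not the experimental materials.

```python
# Hypothetical text-only version of a blocks-and-machine task for a language
# model. The events and wording are invented for illustration only.
from transformers import pipeline

observations = "\n".join([
    "Trial 1: A red block is placed on the machine. The machine lights up.",
    "Trial 2: A blue block is placed on the machine. The machine does not light up.",
    "Trial 3: A red block and a blue block are placed on the machine. The machine lights up.",
])
question = "Which block or blocks would you put on the machine to make it light up?"
prompt = f"{observations}\n\nQuestion: {question}\nAnswer:"

# GPT-2 stands in here; a larger model would be queried the same way via an API.
generator = pipeline("text-generation", model="gpt2")
reply = generator(prompt, max_new_tokens=20, do_sample=False)
print(reply[0]["generated_text"][len(prompt):])
```

The child sees the machines and can try new combinations; the model sees only the transcript and must guess which words come next.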
We thought the AI systems might be able to extract the right answer to this simple problem from all those billions of earlier words. But nobody in those giant text databases had seen our virtual colored-block machines before. In fact, GPT-3 bombed. Some other recent experiments had similar results. GPT-3, for all its articulate speech, can't seem to solve cause-and-effect problems.
If you want to solve a new problem, googling it or going to the library may be a first step. But ultimately you have to experiment, the way the children did.
GPT-3 can tell you what the most likely outcome of a story will be. But innovation, even for 4-year-olds, depends on the surprising and unexpected -- on discovering unlikely outcomes, not predictable ones.
Does all of this mean we don't have to worry about AI becoming sentient? I think worries about super-intelligent and malign artificial agents, modern golems, are, at the least, overblown. But cultural technologies change the world more than individual agents do, and there's no guarantee that change will be for the good.
Language allows us to lie, seduce and intimidate others as much as it allows us to communicate accurately and discover the truth. Socrates famously thought that writing was a really bad idea. You couldn't have the Socratic dialogues in writing that you could in speech, he said, and people might believe things were true just because they were written down -- and he was right. Technological innovations let Benjamin Franklin print inexpensive pamphlets that spread the word about democracy and supported the best aspects of the American Revolution. But as the historian Robert Darnton showed, the same technology also released a flood of libel and obscenity and contributed to the worst aspects of the French Revolution.
People can be biased, gullible, racist, sexist and irrational. So summaries of what the people who preceded us have thought, in an "old wives' tale," a library or the internet, inherit all of those flaws. And that can clearly be true for large language models, too.
Every past cultural technology has required new norms, rules, laws and institutions to make sure that the good outweighs the ill, from shaming liars and honoring truth-tellers to inventing fact-checkers, librarians, libel laws and privacy regulations. LaMDA and GPT-3 aren't people in disguise. But the real people who use them need to go beyond the conventions of the past and create innovative institutions that can be as powerful as the technologies themselves.
---
Dr. Gopnik is a professor of psychology at the University of California, Berkeley, and a "Mind & Matter" columnist for The Wall Street Journal. [1]
1. Gopnik, Alison. "What AI Still Doesn't Know How To Do" (Review: Artificial intelligence programs that learn to write and speak can sound almost human -- but they can't think creatively like a small child can). The Wall Street Journal, Eastern edition, New York, N.Y., 16 July 2022: C3.