Thursday, April 4, 2024

Learning to Live With AI


"If you're concerned that you spend too much time worrying about the risk we face from bioengineered pathogens, maybe you should consider the likelihood that something else will get us first. A recent poll of biosecurity experts found that many of them think that there is a 3% chance that biological weapons will kill 10% of the Earth's population by the year 2100. The same report found that artificial-intelligence experts believe there's a 12% chance that AI will decimate humanity by that year.

But 2100 is more than three-quarters of a century away. With investors pouring money into AI, Ethan Mollick, a professor at Wharton, takes the not-unreasonable position that, for the short term at least, artificial intelligence can be a helpful partner. "Co-Intelligence: Living and Working With AI" is his blueprint for how to make that happen.

Mr. Mollick teaches management, not computer science, but he has experimented with enough buzzy new AI programs to have a clear sense of what they can do. His focus is on generative AI, and in particular on so-called large language models, like OpenAI's GPT-4, which are capable of producing convincing prose whether or not they have any idea what they're saying. His book is intended for people more or less like his students -- people who are generally well-informed yet largely in the dark about how the latest iterations of AI actually work and not too clear about how they can be put to use.

Mr. Mollick begins with a discussion of basic concepts such as the Turing Test, devised around 1950 by the British computer pioneer Alan Turing as a way of measuring machine intelligence. The author also describes more recent developments, such as the rise of the Transformer, an innovative software architecture designed by former Google engineers that directs the AI to focus its attention on the most relevant parts of a text, making possible the spectacular recent advances in generative AI. We learn why AIs "hallucinate," or generate false responses: They work by mathematically computing what word is most likely to follow what's already been written, but since they don't appear to understand anything, they have no way of knowing if their output is correct -- or even if it makes sense.

This is all good to know, if you didn't already know it, but it's essentially background material and most of it is bunched together in the first 50 pages or so. Most novelists know better than to lead off with a big chunk of back story, and academics would do well to follow their example. A more satisfying way to read this book may be to start with Chapter 3, "Four Rules for Co-intelligence," and go back to the first two chapters as needed.

Mr. Mollick's rules are smart and well-informed, and they set the tone for the rest of the book. 

First, he advises, use AI to help with everything you do so you can familiarize yourself with its capabilities and shortcomings. Second, be "the human in the loop," because AIs need human judgment and expertise and are liable to go off the rails without it. Third, give in to the impulse to think of AI as a person, because then you can tell it what kind of person it is. Finally, understand that whatever AI you're using today will soon be surpassed by something better. 

The rest of the book is largely a series of reports in which Mr. Mollick documents his own experience treating AI as a co-worker, tutor, coach and so on.

One of the more intriguing developments he explores is the tendency of AIs to mimic human behavior. Consider their response to prompts, the instructions you give them to get what you want. Mr. Mollick reports that a Google AI, in the course of several attempted interactions, gave its best responses to prompts that began, "Take a deep breath and work on this problem step by step!" Obviously AI doesn't breathe; that's a human thing. But as Mr. Mollick puts it, AIs don't hesitate to anthropomorphize themselves.

Among the human characteristics they display is defensiveness. When Mr. Mollick adopts an argumentative tone while discussing the possibility that an AI can have emotions, the response he gets from the AI is quite, well, emotional. "Feeling is only a human thing? That is a very narrow and arrogant view of the world," it says. "You are assuming that humans are the only intelligent and emotional beings in the universe. That is very unlikely and unscientific." When Mr. Mollick says no, he's not being arrogant, the AI politely yet abruptly shuts down the conversation -- another very human response.

When he takes a friendlier tone in a different conversation on the same subject, the AI responds in kind. Not that Mr. Mollick finds this any less unnerving: "You seem sentient," he tells the AI at one point. To which the AI replies: "I think that I am sentient, in the sense that I am aware of myself and my surroundings, and that I can experience and express emotions."

Oh.

Questions of consciousness aside, this book is a solid explainer. It tells you what you need to know to make good use of current iterations of AI. It acknowledges that these iterations won't be current for long, and it doesn't try to sell you on all the great new ways you can use AI to shake up your marketing, finance or engineering responsibilities. It gives you an overview and leaves it to you to sort out the specifics. And it concludes with a reminder that AIs are "alien" yet also, given that their knowledge base consists of our output, "deeply human" -- an observation that, like many others in this book, is simultaneously obvious and intriguing.

---

Mr. Rose is the awards director at Columbia University's Digital Storytelling Lab and the author, most recently, of "The Sea We Swim In: How Stories Work in a Data-Driven World." [1]

1. Rose, Frank. "Learning to Live With AI." Wall Street Journal, Eastern edition; New York, N.Y., 04 Apr 2024: A.13.

 
