"“Artificial intelligence (A.I.) is already having a
significant impact on the economy, and its influence is expected to grow
significantly in the coming years …. Overall, the effects of A.I. on the
economy will depend on a variety of factors, including the rate of
technological advancement, government policies and the ability of workers to
adapt to new technologies.”
OK, who said that? Nobody, unless we’re ready to start
calling large language models people. What I did was ask ChatGPT to describe
the economic effects of artificial intelligence; it went on at length, so that
was an excerpt.
I think many of us who’ve played around with large language
models — which are being widely discussed under the rubric of artificial
intelligence (although there’s an almost metaphysical debate over whether we
should call it intelligence) — have been shocked by how much they now manage to
sound like people. And it’s a good bet that they or their descendants will
eventually take over a significant number of tasks that are currently done by
human beings.
Like previous leaps in technology, this will make the
economy more productive but will also probably hurt some workers whose skills
have been devalued. Although the term “Luddite” is often used to describe
someone who is simply prejudiced against new technology, the original Luddites
were skilled artisans who suffered real economic harm from the introduction of
power looms and knitting frames.
But this time around, how large will these effects be? And
how quickly will they come about? On the first question, the answer is that
nobody really knows. Predictions about the economic impact of technology are
notoriously unreliable. On the second, history suggests that large economic
effects from A.I. will take longer to materialize than many people currently seem
to expect.
Consider the effects of previous advances in computing.
Gordon Moore, a founder of Intel — which introduced the microprocessor in 1971
— died last week. He was famous for his prediction that the number of
transistors on a computer chip would double every two years — a prediction that
proved stunningly accurate for half a century. The consequences of Moore’s Law
are all around us, most obviously in the powerful computers, a.k.a.
smartphones, that almost everyone carries around these days.
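To get a sense of what that doubling rate implies, here is a quick back-of-the-envelope calculation — a sketch of my own, using nothing beyond the two-year doubling assumption stated above:

    # Back-of-the-envelope: what "doubling every two years" compounds
    # to over half a century. Purely illustrative arithmetic.
    years = 50
    doublings = years / 2   # one doubling every two years -> 25 doublings
    factor = 2 ** doublings
    print(f"After {years} years: roughly {factor:,.0f}x more transistors per chip")
    # After 50 years: roughly 33,554,432x more transistors per chip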
For a long time, however, the economic payoff from this
awesome rise in computing power was surprisingly elusive. Here’s a chart of the
long-run rise in labor productivity — output per hour in the nonfarm sector —
measured as the annual rate of growth over the previous 10 years (to smooth out
some of the noise).
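(For the curious: the smoothing described here is just a trailing compound growth rate. A minimal sketch in Python, with a made-up index series standing in for the actual output-per-hour data:)

    # A minimal sketch of the smoothing described above: the annualized
    # growth rate over the previous 10 years of an annual index.
    # The sample series is invented; the real data would be output per
    # hour in the nonfarm business sector.

    def trailing_growth(index, years=10):
        """Percent-per-year compound growth over the prior `years` points."""
        return [
            ((index[t] / index[t - years]) ** (1 / years) - 1) * 100
            for t in range(years, len(index))
        ]

    productivity = [100 * 1.02 ** t for t in range(30)]  # steady 2% a year
    print(trailing_growth(productivity)[:3])  # each entry is ~2.0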
I’ll explain some of what’s in the chart in a minute. But
the first thing to notice is that for at least two decades after Moore’s Law
kicked in, America, far from experiencing a productivity boom, suffered from a
protracted productivity slowdown. The boom arrived only during the 1990s, and
even then it was a bit disappointing, as I’ll also explain.
Why did a huge, prolonged surge in computing power take so
long to pay off for the economy? In 1990 the economic historian Paul David
published one of my favorite economics papers of all time, “The Dynamo and the
Computer.” It drew a parallel between the effects of information technology and
those of an earlier tech revolution, the electrification of industry.
As David noted, electric motors became widely available in
the 1890s. But having a technology isn’t enough. You also have to figure out
what to do with it.
To take full advantage of electrification, manufacturers had
to rethink the design of factories. Pre-electric factories were multistory buildings
with cramped working spaces, because that was necessary to make efficient use
of a steam engine in the basement driving the machines through a system of
shafts, gears and pulleys.
It took time to realize that having each machine driven by
its own motor made it possible to have sprawling one-story factories with wide
aisles allowing easy movement of materials, not to mention assembly lines. As a
result, the big productivity gains from electrification didn’t materialize
until after World War I.
Sure enough, as David, in effect, predicted, the economic
payoff from information technology finally kicked in during the 1990s, as
filing cabinets and secretaries taking dictation finally gave way to cubicle
farms.
(What? You think technological progress is always
glamorous?) The lag in this economic payoff even ended up being similar in
length to the lag in the payoff from electrification.
But this history still presents a few puzzles. One is why
the first productivity boom from information technology (there may be another
one coming, if the enthusiasm about chatbots is justified) was so short-lived;
basically it lasted only around a decade.
And even while it lasted, productivity growth during the
I.T. boom was no higher than it was during the generation-long boom after World
War II, which was notable in that it didn’t seem to be driven by any
radically new technology. (That’s why it’s marked with a question mark in the
chart above.)
In 1969 the celebrated management consultant Peter Drucker
published “The Age of Discontinuity,” a book that correctly predicted major
changes in the economy’s structure, yet the book’s title implies — correctly, I
think — that the preceding period of extraordinary economic growth was actually
an age of continuity, an era during which the basic outlines of the economy
didn’t change much, even as America became vastly richer.
Or to put it another way, the great boom from the 1940s to
around 1970 seems to have been largely based on the use of technologies, like
the internal combustion engine, that had been around for decades — which should
make us even more skeptical about trying to use recent technological
developments to predict economic growth.
That’s not to say that artificial intelligence won’t have
huge economic impacts. But history suggests that they won’t come quickly.
ChatGPT and whatever follows are probably an economic story for the 2030s, not
for the next few years.
Which doesn’t mean that we should ignore the implications of
a possible A.I.-driven boom. Large language models in their current form
shouldn’t affect economic projections for next year and probably shouldn’t have
a large effect on economic projections for the next decade. But the longer-run
prospects for economic growth do look better now than they did before computers
began doing such good imitations of people.
And long-run economic projections matter, even if they’re
always wrong, because they underlie the long-term budget outlook, which in turn
helps drive current policy in a number of areas. Not to put too fine a point on
it, but anyone who predicts a radical acceleration of economic growth thanks to
A.I. — which would lead to a large rise in tax receipts — and simultaneously
predicts a future fiscal crisis unless we make drastic cuts to Medicare and
Social Security isn’t making much sense.