Monday, May 4, 2026

Is AI Smarter Than People? It's Complicated. --- As a neuroscientist, I conducted research into artificial versus human intelligence. The results surprised me -- and suggest we've been worrying over the wrong things.

 

“Who's smarter, the human or the machine?

 

In the 30 years I've worked in artificial intelligence, that's been the question driving the conversation.

 

We've also been sold a story about AI that goes something like this: It will handle the tedious, routine work -- the research, the first draft, the number-crunching -- while we focus on the interesting parts: creativity, judgment, the human touch.

 

My research suggests we've been asking the wrong question and drawing the wrong conclusions.

 

A few months ago, I recruited adults from the San Francisco Bay Area for an experiment and divided them into groups. Each group had one hour to make predictions about real-world events, using scenarios drawn from the prediction-market platform Polymarket. This gave us a rigorous, objective way to check results against the collective wisdom of thousands of financially motivated forecasters. Some human teams worked alone, others worked as human-AI hybrids, and the AI also made predictions on its own. (Polymarket has a data partnership with Dow Jones, the publisher of The Wall Street Journal.)

 

The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models -- ChatGPT and Gemini, in this case -- performed considerably better, though still short of the market itself.

 

But when we combined AI with humans, things got more interesting.

 

Most hybrid teams used the AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into the AI and asked it to come up with supporting evidence. These "validators" had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn't true. They ended up performing worse than an AI working solo.

 

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument.

 

The hybrids were becoming cyborgs.

 

These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market's accuracy. On certain questions, they even outperformed it.

 

It's not that these people were more intelligent than the others in the study. But they demonstrated two important qualities: perspective-taking and intellectual humility.

 

Perspective-taking is the ability to genuinely inhabit another point of view. Not to debate it, not to tolerate it, but to actually inhabit it.

 

Intellectual humility is the ability to recognize the edge of your own knowledge and sit with that discomfort rather than rushing to fill it.

 

Both of these qualities are, at root, emotional skills.

 

Perspective-taking requires genuine curiosity about minds other than your own.

 

Intellectual humility requires a kind of emotional courage: the willingness to feel uncertain, even a little foolish, in the presence of something or someone that seems very sure of itself.

 

These are not the soft skills we typically celebrate. We celebrate confidence. We promote decisiveness. We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it.

 

What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They're the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it; to read a confident, fluent response from an AI and ask yourself, "What's missing?" rather than default to "Great, that's done"; to disagree with something that sounds authoritative and to trust your instinct enough to follow it.

 

We don't build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, and that is eroding our ability to think critically.

 

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

 

This is the divergence I worry about. Not the dramatic science-fiction scenario of AI replacing humans wholesale, but the quieter process of people gradually outsourcing their judgment in increments too small to notice.

 

Over time, this produces two kinds of people: Those who use AI as a genuine intellectual partner -- whose thinking actually gets sharper through the friction of the collaboration -- and those who get better at securing quick answers and worse at knowing what questions to ask.

 

So what can any of us actually do about it?

 

Start with the reframe: The goal of working with AI isn't to get the answer faster. It's to find out what you're missing. Don't deploy AI minions to "do the boring work" for you, as so many sales pitches promise; use AI as a savant collaborator to explore uncertainty.

 

In practice, that means before you accept an AI's answer, ask it for the strongest argument against itself. When it hedges or qualifies, pay attention -- that's usually where the real uncertainty lives. Treat it like a brilliant colleague who has read everything and understands nothing -- useful precisely because they're different from you, not because they'll agree with you.

 

For the AI industry, a key design question has gone largely unasked: Is the product building human capacity or consuming it? Nearly all AI benchmarks measure what AI agents can do alone. We desperately need benchmarks for hybrid intelligence. Errors are signals our brains use to trigger learning. An AI that eliminates friction entirely is often eliminating the learning along with it.

 

A hopeful finding is that perspective-taking, intellectual humility and curiosity are not fixed traits. They can be cultivated: they respond to practice, to the right relationships and to environments that reward uncertainty.

 

But they require us to decide -- as individuals, as parents, as educators, as designers of tools -- that this is what we're trying to build. And in the race between human potential and human atrophy, the stakes for building it could not be higher.

 

---

 

Vivienne Ming is a theoretical neuroscientist, cognitive scientist and the author of "Robot-Proof: When Machines Have All the Answers, Build Better People."” [1]

 

1. Ming, Vivienne. "Is AI Smarter Than People? It's Complicated." The Wall Street Journal, Eastern edition; New York, N.Y., 25 Apr 2026: C4.
