"Much has been written about what the U.S. withdrawal from Afghanistan means for the future of that country and America’s global standing. But the failures of the war reveal a need for deeper introspection into what has gone wrong with American democracy and its institutions — including the story of failed expertise.
In 2006, Generals James Mattis and David Petraeus convened a conference to develop a counterinsurgency doctrine, a topic that had been neglected in broader military doctrine and national security policy since the Vietnam War. The result was Army Field Manual 3-24, which drew on academic gatherings, articles and books devoted to the subject. A few years later, President Barack Obama faced intense pressure from the foreign policy establishment and members of the military to send a “surge” of money and troops to Afghanistan. The emerging paradigm of “counterinsurgency theory” gave that campaign a seemingly solid intellectual basis; it helped the push, in the words of then-Vice President Joe Biden, to “box in” an inexperienced president and send more troops, as happened in 2009.
This is just one chapter in a larger story. At many points in the war, the coalition had access to the insights of people who had graduated from the world’s best universities and who brought highly specialized knowledge to the problems (state building, counterterrorism) the United States faced in Afghanistan. The last president of the American-backed government, Ashraf Ghani, has a Ph.D. from Columbia and was even a co-author of a book titled “Fixing Failed States.” But for all their credentials, these experts were not able to stop a swift Taliban takeover of the country.
What Afghanistan shows is that we need a new definition of expertise, one that relies more on track records and healthy cognitive habits and less on credentials and the narrow forms of knowledge that are too often rewarded.
In an era of populism and declining trust in institutions, such a project is necessary to put expertise on a stronger footing.
It’s true that many experts also opposed the Afghanistan war and thought that the United States was pursuing unrealistic goals. But the individuals with the most subject-matter expertise were often the ones who got things most wrong. That included generals with counterinsurgency experience in Iraq and Afghanistan, as well as the think tank analysts who focused most closely on those conflicts.
Perhaps we shouldn’t be surprised. Philip Tetlock, a psychologist, has famously shown that subject-matter experts are no better at accurately forecasting geopolitical events relevant to their field than those with training in different areas. In another study, the intelligence community, with access to classified information, proved less accurate than an algorithm weighted toward the views of amateurs who had no security clearances but a history of making accurate forecasts.
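To make that result concrete, here is a minimal sketch, in Python, of the general technique such a study relies on: pool forecasts while weighting each forecaster by track record, here measured with the standard Brier score (the mean squared error between stated probabilities and actual outcomes). All names and numbers below are invented for illustration; this is not the actual algorithm from the study.

```python
def brier_score(history):
    """Mean squared error between stated probabilities and 0/1 outcomes (0 = perfect)."""
    return sum((p - outcome) ** 2 for p, outcome in history) / len(history)

def weighted_crowd_forecast(forecasters):
    """Pool current probabilities, weighting each forecaster by past accuracy.

    `forecasters` maps a name to (current probability, list of past (p, outcome) pairs).
    Weight = 1 / (epsilon + Brier score), so better track records count for more.
    """
    weights = {name: 1.0 / (0.01 + brier_score(history))
               for name, (_, history) in forecasters.items()}
    total = sum(weights.values())
    return sum(weights[name] * prob
               for name, (prob, _) in forecasters.items()) / total

# Invented example: two historically accurate amateurs outweigh one inaccurate pundit.
forecasters = {
    "amateur_a": (0.80, [(0.9, 1), (0.2, 0), (0.7, 1)]),  # good track record
    "amateur_b": (0.75, [(0.8, 1), (0.1, 0), (0.6, 1)]),  # good track record
    "pundit_c":  (0.20, [(0.9, 0), (0.8, 0), (0.3, 1)]),  # poor track record
}
print(f"pooled probability: {weighted_crowd_forecast(forecasters):.2f}")  # ~0.75
```

The design choice worth noticing is that the weight comes from a verifiable scoring rule rather than from anyone’s title or clearance.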
So “just trust the experts” is the wrong path to take. But simply deciding to ignore them can lead us down rabbit holes of conspiracy theories and misinformation. The subject-matter experts in Mr. Tetlock’s research couldn’t beat informed amateurs, but they did defeat random guessing, or the epistemological equivalent of monkeys throwing darts.
This is in part because the divisions we create between fields are, in a sense, artificial. As radical as it sounds, a Ph.D. in political science or fluency in Pashto does not make someone more likely to predict what will happen in Afghanistan than an equally intelligent person whose knowledge appears less directly relevant. Anthropology, economics and other fields may offer insight, and it is often difficult to know ahead of time which communities of experts have the most relevant training and tools for a particular problem.
Academia is in some ways nearly ideally suited to produce the wrong kinds of expertise. Scholarly recognition is based on a high degree of specialization, the right pedigree and the approval of colleagues through peer review, rather than on any external standard.
A program to put expertise on a stronger footing should involve both new laws and changes in the wider intellectual culture.
Government should set up forecasting tournaments and remove regulatory barriers to establishing prediction markets, in addition to funding such markets through agencies like DARPA and the National Science Foundation.
Robin Hanson, an economist, has suggested conditional markets, which would take bets on, say, what will happen to a company’s stock price if its C.E.O. is removed, or what effect the adoption of a bill or regulation would have on gross domestic product, and then use the results to inform the decision itself, whether removal or adoption.
In the same way that the business press reports on stock prices and political reporters use betting markets to discuss possible election outcomes, conditional markets can provide information on the wisdom of proposed policies. Pundits debate questions like how much inflation would result from President Biden’s signing a new infrastructure bill, but there is no reason to rely solely on these largely unaccountable voices to forecast outcomes. We can most likely get better results by letting people bet on their beliefs — and then using that data to inform debate.
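Here is a minimal sketch of the decision rule such markets imply, with every price invented for illustration (Mr. Hanson’s actual proposals involve real trading mechanics that are elided here): two markets estimate the same outcome under the two branches of a decision, bets in the branch not taken are refunded, and the higher conditional price recommends the branch.

```python
# Invented conditional-market prices. Each market pays out only if its condition
# holds (otherwise bets are refunded), so each price estimates expected GDP growth
# *within* its own scenario.
prices = {
    "growth_if_bill_adopted":  2.4,  # percent, implied by trading in market A
    "growth_if_bill_rejected": 1.9,  # percent, implied by trading in market B
}

def recommend(prices: dict, adopt_key: str, reject_key: str) -> str:
    """Prefer whichever branch traders collectively price as the better outcome."""
    return "adopt" if prices[adopt_key] > prices[reject_key] else "reject"

print(recommend(prices, "growth_if_bill_adopted", "growth_if_bill_rejected"))  # adopt
```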
A wide body of research shows that prediction markets almost always match or beat alternative methods such as polls and expert committees in accuracy.
In 2020, the British government started a website that invites individuals to make predictions and ranks them by accuracy, so that in future crises it can consult its best forecasters.
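The machinery behind such a ranking can be very simple. A sketch, again with invented names and histories rather than anything from the British site:

```python
# Invented forecaster histories: lists of (stated probability, actual 0/1 outcome).
histories = {
    "forecaster_x": [(0.9, 1), (0.1, 0), (0.8, 1)],
    "forecaster_y": [(0.6, 1), (0.5, 0), (0.5, 1)],
    "forecaster_z": [(0.2, 1), (0.9, 0), (0.4, 1)],
}

def brier(history):
    """Lower is better: mean squared error between probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in history) / len(history)

# Leaderboard, best calibrated first: the people worth consulting in the next crisis.
for name, score in sorted(((n, brier(h)) for n, h in histories.items()),
                          key=lambda pair: pair[1]):
    print(f"{name}: {score:.3f}")
```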
The United States should encourage similar projects and reduce the power of credentials in other ways: for example, by relying on objective tests rather than degrees in hiring, or by removing occupational licensing requirements and giving market forces a larger role in systems that now reward educational attainment.
Public intellectuals and the media can do better by making use of tools based on the principle of accountability and recognizing the mantra of “trust the experts” as an appeal to authority rather than excellence.
In an ideal world, the greatest sin for an intellectual would not be getting something wrong but speaking on an issue in a way that makes it impossible to judge accuracy in the first place.
Changing how we think of expertise can lead to greater trust across partisan and educational lines, as processes for awarding power and prestige would come to rely more on proven ability and less on the approval of elite institutions lacking either ideological or socioeconomic diversity.
To see the American failure in Afghanistan as providing lessons about only one particular war, or even U.S. foreign policy, would be a missed opportunity. Along with the rise of China and our shortcomings in dealing with Covid-19, it should provide the motivation for new thinking about where our institutions have gone wrong.
A new project to get expertise right would be going against the grain in a society largely built on certain kinds of credentials. But for those interested in the health of American democracy and its continued viability, there can be few things as important."
Good proposals like these, however, will be implemented politically only if the West begins to face the possibility of an existential catastrophe. Otherwise, with the help of tutors and good universities, we will keep turning our rich children and grandchildren into the very experts who cannot be trusted, and keep handing such clueless heirs - gabrieliai landsbergiai, the Gabrielius Landsbergises of the world - the power of decision. We do not know any other way to share privilege.