
Tuesday, January 6, 2026

Hospitals Become Test Labs For AI


“Samir Abboud, chief of emergency radiology for Northwestern Medicine, thought he was already working at maximum speed. In a carefully honed routine, aided by voice dictation, he could finish writing an X-ray report in as little as 75 seconds.


Then the Chicago-based health system rolled out generative artificial intelligence in 2024 that can analyze patient scans and write reports. Abboud, who checks the AI work for potential changes, said reviews have sped up to about 45 seconds.


The result was breathtaking -- and startling. "It was the first time I felt like there was a clock on my career," Abboud said.


Still, he said humans are a necessary part of the process. And reading scans faster came with benefits, too.


"There's hundreds of patients waiting for our read, and any one of them could be one that's actively dying," Abboud said.


Big hospital systems have become the proving ground for widespread AI adoption, testing what the technology can do, but also revealing -- sometimes via alarming mishaps -- where it falls flat.


Among health systems, 27% are paying for commercial AI licenses, triple the rate across the U.S. economy, according to a Menlo Ventures and Morning Consult survey.


While the aging population's healthcare needs rise, hospitals are looking for ways to deal with persistent worker shortages that can burn out clinicians and delay care. They are also looking for efficiency while cuts to Medicaid loom.


AI has especially broken through in some of the least-flashy, but most labor-intensive, hospital tasks: taking notes, fielding patient phone calls and dealing with insurance claims.


These tasks are often "labor-dependent, with the same rote process done thousands of times," said Rupal Malani, a senior partner at consulting firm McKinsey who advises health systems on AI implementation.


Doctors still make medical decisions, though AI can aid the process. A University of California, Los Angeles, study found last year that AI was better able to identify subtle signs of breast cancer that can develop and grow undetected between routine screenings. The study estimated that using AI to help screen patients could reduce such breast cancers by 30%.


At the same time, there are reasons for caution.


Mayo Clinic cardiologist Paul A. Friedman turned to ChatGPT when he needed to weigh in on a patient who needed a defibrillator implantation a few days after having heart surgery. Friedman thought such a procedure was feasible and safe but wanted to know whether there were case studies. ChatGPT gave him references to several reports published in medical journals that it said showed such a procedure was "safe and effective." Friedman said "it looked very realistic" until a colleague tried searching for the studies only to discover they were fabricated.


After that, Friedman said, he takes a "trust but verify" approach. "It's not that I don't ask ChatGPT medical questions but, when I do, I always look for the references, click on them and read the abstracts at a minimum," he said.


The hospital's cardiology department is testing alternative in-house AI tools.


A representative for ChatGPT parent OpenAI said that its teams run "evaluations to reduce harmful or misleading responses," and that its latest models are far more likely to provide accurate information than earlier versions such as the one Friedman would have used. ChatGPT wasn't intended to be a substitute for guidance from health professionals, the company said.


An October study in The Lancet Gastroenterology & Hepatology found that physicians who used AI for three months to aid them in spotting growths during colonoscopies were able to detect significantly fewer such growths once the tool was taken away.


"I'm constantly worried about myself with deskilling," said Anthony Cardillo, a New York City pathologist who directs a Memorial Sloan Kettering laboratory specializing in blood samples. "Any time I outsource my thoughts to something that isn't my own brain, I'm worried I'm going to lose that muscle memory."


Cardillo said that he and his colleagues use generative AI to review specimens but do so only as a second pair of eyes after coming up with their own diagnoses.


Despite such concerns, health systems said they see tremendous promise -- and necessity.


"When you think about the tsunami of need that's coming as a society, technology is one of the only levers we have to pull," said Doug King, Northwestern Medicine's chief digital and innovation officer.


At Northwestern, an AI review of a million scans taken over a year highlighted 70 that humans hadn't flagged for further review. A manual check then showed five instances where physicians deemed that more follow-up was needed. Northwestern is also using another AI tool to schedule operating-room time more efficiently, which means more patients can be treated, officials there said.


Hospitals were early AI users. Predictive algorithms have powered early-warning systems for sepsis, flagged high-risk patients and helped manage scheduling for years.


In Northern California, Kaiser Permanente's 21 hospitals use a system that analyzes all patients' vitals and charts and scores them every hour to determine which patients are at highest risk. A study in the New England Journal of Medicine found the system saves more than 500 lives a year.


On a recent day, the system determined that a heart-failure patient required more scrutiny, leading physicians to learn he was also suffering from a severe respiratory virus and needed steroids for his lungs, said Vincent Liu, a pulmonary critical-care physician at Kaiser Permanente Santa Clara Medical Center.


Hospitals are aggressively adopting AI for tasks that are less flashy but take up a significant amount of time and resources. Electronic medical-record provider Epic Systems in 2024 launched a tool that uses generative AI to mine patient records and draft appeal letters to insurance companies. About 1,000 hospitals are already using the system, the company said.


Northwestern routinely has to appeal about 5-10% of the millions of claims it processes every year, said David Blahnik, vice president of information technology there.


"You're spending so much staff overhead and work trying to fight them and appeal to them and justify why we should get paid," he said. But, after adopting Epic's tool, staffers now spend about 23% less time processing each denied claim, Blahnik said.


A similar effort at New York's Mount Sinai has led to a 3% increase in insurance denials getting overturned, helping net the health system an added $12 million a year, said Lisa Stump, the chief digital information officer there.


Mount Sinai recently paused use of an Epic generative AI tool, which aimed to analyze messages patients sent to doctors and create personalized draft responses. Doctors said the drafts weren't helpful and required too much rewriting.


There were some very specific mishaps, according to Ankit Sakhuja, director of Mount Sinai's AI assurance lab. In one case, the system told a patient who asked for a walker or cane that it couldn't help. In another, a patient reporting a headache was given a verbose response that said the patient could have anything from something minor to a brain tumor.


Epic said that a small minority of hospitals have also paused use of the feature and that it is working to make improvements. The tool has helped nurses save as much as 30 seconds per exchange with patients through the system, Epic said, and has rolled out to about 1,700 hospitals. Still, the company said it requires human oversight.


"Clinicians have full control of the message that goes to the patient," said Seth Hain, Epic's senior vice president of research and development.” [1]


1. Chen, Te-Ping; Deng, Chao. "Hospitals Become Test Labs For AI." Wall Street Journal, Eastern edition; New York, N.Y., 06 Jan 2026: A1.
