Thursday, October 12, 2023

The Future of Everything: The Artificial Intelligence Issue --- Medical Devices Get Smarter. Can the FDA Keep Up? Use of AI means devices and apps can learn and change, making regulators' job harder.


"The use of artificial intelligence in medical devices is exploding, paving the way for potentially lifesaving advances -- and daunting challenges for regulators responsible for keeping new products safe.

If tech companies have their way, anxious patients may one day opt to talk to a chatbot instead of a human therapist. A patient checking into the hospital would be assessed with an AI-driven risk calculator, and an automated assistant would help conduct tests. Such AI-powered apps, when used for diagnosis or treatment, would qualify as medical devices under the oversight of the Food and Drug Administration -- meaning developers could need the agency's permission to market them.

The FDA is one of many U.S. agencies wrestling to find the right policies as the AI revolution picks up speed. From 2020 through late 2022, the FDA approved more than 300 devices with AI features -- more than during the entire previous decade. "We've got a speeding train coming in a tunnel and there's a challenge about to hit," says Amy Abernethy, who served as the FDA's principal deputy commissioner of food and drugs from 2019 to 2021 and is now chief medical officer at Verily Life Sciences, a unit of Alphabet that makes AI-enabled medical devices.

For decades, the agency looked at medical devices the same way it looks at drugs: as static compounds. When the FDA approves a device, the manufacturer can sell that version. It needs the regulator's signoff before upgrading to a new version. But AI-enabled devices often use algorithms designed to be updated rapidly, or even learn on their own. The agency is grappling with how to deal with this fast-moving technology while ensuring the devices stay safe.

"One of the key benefits of an AI-powered product is that it can be improved over time," says Tim Sweeney, CEO of Inflammatix, a developer of blood tests designed to predict the presence, type, and severity of an infection. If the company learned that a particular pattern of the body's immune response strongly indicates the onset of sepsis, for instance, it would want to retrain its algorithms to account for that. "If you have a lot of extra data, you should be improving your results," Sweeney says.

Under the FDA's traditional method of oversight, however, companies like Inflammatix would likely have to get additional permission before changing their algorithms.

The agency is beginning to point developers down an alternative path, in April offering formal guidance on how they can submit more flexible plans for devices that use AI. A manufacturer can file a "Predetermined Change Control Plan" that outlines expected alterations. FDA staff -- including lawyers, doctors, and tech experts -- review the plans and the scope of the expected changes. Once the device is approved, the company can alter the product's programming without the FDA's blessing, as long as the changes were part of the plan.
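Conceptually, such a plan amounts to declaring up front which classes of modification are in scope. A purely hypothetical sketch of that gating logic (the change categories are invented for illustration):

    # Hypothetical: changes inside the preapproved scope proceed; anything else
    # would need a fresh FDA submission.
    APPROVED_CHANGE_TYPES = {"retrain_on_new_data", "decision_threshold_adjustment"}

    def change_requires_new_submission(change_type: str) -> bool:
        return change_type not in APPROVED_CHANGE_TYPES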

The goal is "to ensure that the market has some flexibility to continue to push the boundaries of the innovation that they are doing, while still adhering to the guardrails that we've established when we approve a product," says Troy Tazbaz, director of the FDA's Digital Health Center of Excellence.

Many device makers have cheered the FDA direction, though they have asked the agency for more clarity on the extent of changes it would allow. "It's going to be challenging to, up front, predetermine what are the possible changes that we would want to do," says Lara Jehi, who has helped develop AI-driven patient assessment tools as chief research information officer for the Cleveland Clinic Health System. "The only thing we know is predictable in science is that it is unpredictable."

NeuroPace, a maker of brain implants that treat severe epilepsy, would consider a "change control plan" as part of a new AI-enabled version of its device, says Martha Morrell, the company's chief medical officer.

The device now delivers electrical currents to stop a seizure, and those currents are only altered by a physician after reviewing the patient's data. In the future, Morrell says, the device could be given the power to interpret the patient's brain waves and decide what current to deliver in real time, subject to predetermined safety limits.
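A loose sketch of what that closed loop could look like (not NeuroPace's implementation; the proportional policy and the current limits are assumed values), where the model's output is always clamped to the preset safety bounds:

    # Illustrative only: pick a stimulation current from a model's seizure estimate,
    # never exceeding predetermined safety limits.
    MIN_CURRENT_MA = 0.0   # assumed lower bound
    MAX_CURRENT_MA = 3.0   # assumed upper safety limit

    def choose_current(seizure_probability: float) -> float:
        proposed = seizure_probability * MAX_CURRENT_MA  # simple proportional policy
        return max(MIN_CURRENT_MA, min(proposed, MAX_CURRENT_MA))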

Morrell says that, as a "type-A doctor," even she feels uncomfortable ceding that kind of power to a device. But she also believes patients will benefit and that the FDA and the company need to figure out the standard necessary to be confident that the device will be safe.

After the latest FDA guidance on AI devices, she predicts, "there will be another one right behind it, and another one right behind that as well, as we scramble to keep right on top of this enormous change."

Critics see the FDA's new direction as potentially introducing risks, pointing out that there have been cases where medical-device software mistakes posed grave dangers to patients. AI systems add an additional challenge, they say, because the humans in charge of them don't always fully understand how they work, and potential problems, such as racial bias, can be difficult to measure. "AI brings with it very different kinds of harms -- algorithmic harms that are difficult to understand and explain," says Mason Marks, a health-law professor at Florida State University. By allowing changes to complex algorithms without going through regulatory scrutiny, he says, "the FDA is giving the manufacturers a lot of leeway."
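One common way such problems are at least surfaced is to compare a model's error rate across demographic subgroups; the sketch below is a generic example of that kind of check, not an FDA-prescribed procedure:

    # Generic subgroup check: records is an iterable of (group_label, y_true, y_pred).
    from collections import defaultdict

    def subgroup_error_rates(records):
        errors, counts = defaultdict(int), defaultdict(int)
        for group, y_true, y_pred in records:
            counts[group] += 1
            errors[group] += int(y_true != y_pred)
        return {g: errors[g] / counts[g] for g in counts}

Large gaps between the returned rates would prompt a closer look at how well each group was represented in the training data.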

An FDA spokesman said the agency's evaluation of devices addresses potential bias.

Those concerns could, in the future, be addressed by a new policy some FDA officials have discussed: requiring real-time monitoring of AI devices after they go on the market. That would look different from the traditional approach, where the maker of an FDA-approved device must simply report "adverse events" that could affect patients. New monitoring requirements would aim to ensure manufacturers are closely watching their algorithms' performance. But adopting them would likely require legal authority the FDA doesn't have. Industry groups might lobby against the agency gaining that authority, if they view the requirements as overly intrusive.
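A hypothetical sketch of what such monitoring might involve: the manufacturer tracks a rolling window of adjudicated predictions from the field and flags the device when accuracy drifts below the level demonstrated at approval (the baseline, window size, and tolerance here are assumptions):

    # Hypothetical post-market drift monitor.
    from collections import deque

    class PerformanceMonitor:
        def __init__(self, baseline_accuracy=0.90, window=500, tolerance=0.05):
            self.baseline = baseline_accuracy
            self.tolerance = tolerance
            self.outcomes = deque(maxlen=window)

        def record(self, correct: bool) -> bool:
            """Log one adjudicated prediction; return True if performance drift is flagged."""
            self.outcomes.append(bool(correct))
            if len(self.outcomes) < self.outcomes.maxlen:
                return False
            accuracy = sum(self.outcomes) / len(self.outcomes)
            return accuracy < self.baseline - self.tolerance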

These debates will only increase in urgency as more developers incorporate generative AI systems that can produce humanlike outputs of text, photos, videos and other media.

Woebot Health offers a bot that it hopes psychiatrists, doctors, or insurers will recommend to help patients work on mental-health issues. The bot offers encouragement and responds 24/7. As a safety measure, Woebot says, its responses are limited to a selection preapproved by humans. The company is testing the use of a generative AI model to help understand what users say, but is wary of allowing open conversations.
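A minimal sketch of that design (illustrative only, not Woebot's code): a model may interpret what the user said, but the reply is always drawn from a human-preapproved set, so nothing open-ended reaches the patient.

    # Illustrative constrained-response pattern: the model only selects among
    # replies that humans have already approved.
    APPROVED_REPLIES = {
        "low_mood": "I'm sorry you're feeling down. Would you like to try a short exercise?",
        "anxiety": "That sounds stressful. Let's take a slow breath together.",
        "other": "Thanks for sharing. Can you tell me a bit more about that?",
    }

    def respond(user_message: str, classify_intent) -> str:
        # classify_intent may be any model (even a generative one) that maps text to an approved key.
        intent = classify_intent(user_message)
        return APPROVED_REPLIES.get(intent, APPROVED_REPLIES["other"])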

Woebot markets its chatbot as a mental-health support app, avoiding using certain terms, such as "treatment," that could trigger the need for an FDA review. The agency has authority to regulate any device intended to be used for the diagnosis, treatment or prevention of disease -- a broad remit that applies to many medical-software functions. It doesn't enforce its rules in what it views as low-risk situations, such as wellness apps.

Woebot says it wants to seek the FDA's stamp of approval. But it also wants to be able to market its product without a prescription, reaching a much broader user base. FDA-approved "digital therapeutics" for mental health generally aren't cleared for sales directly to consumers. "How can we make sure these tools are clinically validated [but do] not necessarily require a prescription?" says Robbert Zusterzeel, Woebot's vice president of regulatory science and strategy. "There's this middle option at some point the FDA needs to look at."

That points to another potential effect of the FDA's policies: The agency's regulatory boundaries end up shaping product development.

"It creates a funny set of incentives." says Ariel Dora Stern, a professor at Harvard Business School who has studied FDA regulation. "I've talked to startups that are basically going after smaller problems than their algorithms can go after, simply because regulation is something that they don't want to go near with a 10-foot pole."" [1]

1. Ryan, Tracy. "The Future of Everything: The Artificial Intelligence Issue --- Medical Devices Get Smarter. Can the FDA Keep Up? Use of AI means devices and apps can learn and change, making regulators' job harder." Wall Street Journal, Eastern edition, New York, N.Y., 12 Oct. 2023: R.8.
