Artificial intelligence has dazzled the world in the past year, largely because of large language models like ChatGPT that seemingly converse with users. But this kind of AI isn't great at tackling hard problems in robotics, science and engineering.
To do this, AI needs to learn physics.
Getting AI to work in the real world could boost the range of our electric vehicles, improve care for cancer patients, and take on jobs that were previously done solely by humans. But creating systems that do this is tricky, because it requires expertise in both a specific field and machine learning.
The results are worth it, say those adopting this approach. Starting with what we know about the world is what scientists and engineers do, after all.
There are a handful of names for this approach, including "physics-informed neural networks" and "scientific machine learning," but they all have one thing in common: They give AI someplace to start. That starting point is what we already know about a system, be it a bridge or a battery, from decades or even centuries of hard-earned knowledge. This framework helps limit the universe of solutions an AI has to experiment with before it can make useful predictions.
"It's sort of like if someone's trying to solve a maze, and there are certain paths you've already blocked off for them," says Karianne Bergen, who leads a machine-learning research group at Brown University.
Let's say you're trying to teach an AI how to direct a robot to walk. The early stages of this learning process have to happen in a simulation, because it's much faster than doing it with a real robot, and less costly, says Jonathan Hurst, chief robot officer of Agility Robotics. (Agility makes a bipedal robot that Amazon recently announced it will test in one of its warehouses.)
If the basic laws of physics are already built in, a machine-learning algorithm has to explore far fewer possible combinations of limb and body movements when figuring out how to direct a robot to walk. If the simulation didn't have those laws, the AI might come up with "correct" solutions that aren't plausible, like passing through solid objects, or misunderstanding gravity.
The idea that AI works best when the problem it's tackling is as narrowly defined as possible is a common theme in AI systems that generate measurable value for people and companies. Only in this case, the constraints are the laws of the natural world.
Physics-informed machine learning systems can make predictions using far less data than AIs that are naive about the real world, says Karen Willcox, director of the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin.
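The less-data claim can be sketched in a toy example. The code below is a minimal illustration of the physics-informed idea, not any company's actual system: it fits a cubic polynomial to just four noisy measurements of a decaying quantity, while also penalizing any violation of the known physics (dy/dt = -k*y) on a dense grid of points where no data exists. The decay rate `k`, the grid, and the weight `w_phys` are all assumptions chosen for this demo.

```python
import numpy as np

# Known physics (an assumption of this demo): the quantity decays as
# dy/dt = -k*y with a known rate k.
k = 1.5
rng = np.random.default_rng(0)
t_data = np.array([0.0, 0.5, 1.0, 2.0])    # only four noisy measurements
y_data = np.exp(-k * t_data) + rng.normal(0.0, 0.01, t_data.size)
t_grid = np.linspace(0.0, 2.0, 50)         # dense grid where physics is enforced

def basis(t):   # cubic polynomial basis [1, t, t^2, t^3]
    return np.stack([np.ones_like(t), t, t**2, t**3], axis=1)

def dbasis(t):  # exact derivative of each basis function
    return np.stack([np.zeros_like(t), np.ones_like(t), 2*t, 3*t**2], axis=1)

# Stack the data-fit rows with physics-residual rows (dy/dt + k*y should be
# ~0 everywhere, not just at the samples). A single least-squares solve then
# minimizes the data misfit and the physics violation together.
w_phys = 0.5                               # weight on the physics penalty
A = np.vstack([basis(t_data), w_phys * (dbasis(t_grid) + k * basis(t_grid))])
b = np.concatenate([y_data, np.zeros(t_grid.size)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# The constrained fit tracks the true curve even between the sparse samples.
err = np.max(np.abs(basis(t_grid) @ coef - np.exp(-k * t_grid)))
```

Four points would normally be far too few to pin down a curve reliably, but the physics term rules out the many wild fits that pass through the data while misbehaving everywhere else, which is the sense in which these systems "know" things the data never has to teach them.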
A classic example of this approach is the "digital twin" of a jet engine, she says. Companies like General Electric have long used such models, which incorporate machine learning, to predict when an engine needs maintenance. What's changing now is that, with the growth of computing power and the spread of new kinds of machine-learning algorithms, this physics-informed approach is becoming much more widespread.
Using the same approach -- combining knowledge of the natural world, continuously gathered data, and machine learning -- it's in theory possible to create a digital twin of a cancer patient to direct their care, she adds. This is something that Willcox's research group is studying, in collaboration with the MD Anderson Cancer Center. So far, the team has only tested the approach "in silico" -- that is, on a pool of data drawn from representative patients -- but they are discussing a possible clinical trial.
In Formula E racing, the all-electric counterpart to Formula One, managing the amount of energy left in your battery is the difference between winning and losing. WAE Technologies, which makes the batteries for Formula E race cars, recently created a division, Elysia, to commercialize its power-management software, which uses physics-informed neural networks.
Elysia's systems can determine the status of a battery with far less data than would normally be required, because these AIs already "know" a great deal about how batteries work. This allows engineers to push batteries closer to their limits, eking out more power without damaging them. The result could be more range from existing EV batteries, including the one in your driveway, says Elysia technical lead Tim Engstrom.
Dexterity, a robotics company, is combining machine learning with models of how boxes behave in the real world to create robots that can finally load trucks. (Unloading trucks, an easier problem, was solved first.) What made stacking boxes nearly impossible without these models was that objects in the real world don't always behave in an idealized fashion, says Samir Menon, chief executive of the company. They might weigh more or less than a robot expects, their contents might shift, or they might settle after they are dropped into place. Handling all of the weirdness of the real world requires a pretty good model of it, he adds.
It's still early days for physics-informed approaches to machine learning, say the experts I interviewed for this piece. Scientists are wary of the hype that comes with other forms of AI -- such as the chatbots and art-generating models that are currently all the rage, says Bergen. But they're also excited by the potential of scientific machine learning, which at its core can be a way to gain new insights about systems, especially when we don't yet fully understand them, she adds. [1]
1. Mims, Christopher. "Moving Boxes? Treating Cancer? AI Needs to Learn Physics First: To change the world, artificial intelligence must learn not to walk through walls." Wall Street Journal, Eastern edition, New York, N.Y., 28 Oct. 2023: B.2.