

Sunday, August 24, 2025

The Ultimate Limits of Artificial Intelligence


"Once again, voices are growing louder, especially from the US, who believe they know that artificial intelligence will soon become smarter than humans. Again, they warn – such as former Open AI employee Daniel Kokotajlo in an interview with "Spiegel" – that AI could soon wipe us out. But are such horror visions really realistic? Or is there something else behind it?

 

AI has supposedly been racing from one success to the next for decades. But is that true? In some areas, yes; in others, no. Autonomous driving, which is supposed to integrate flawlessly into real traffic, works poorly at best. Even power plants and airplanes are not started up, shut down, or landed fully autonomously by AI. That would be far too risky. The situation is different in the military: there, the benefits of AI become apparent quite quickly when it comes to drones or surveillance – but, cynically speaking, much higher collateral damage is unfortunately also "allowed" there. But has a robot ever performed an operation entirely on its own, or has an AI ever won $100,000 fully autonomously on the stock market? No. And it will stay that way for a long time.

 

For years, the AI scene has been confronting us with one "use case" after another. Nowhere else are there as many subjunctives as in this scene. Cancer research is on the verge of a breakthrough, as is autonomous driving; soon mobile robots or drones could deliver the mail, and soon AI would definitely pass the Abitur exam. We are surrounded by subjunctives, all the time and everywhere. In serious science this is, of course, how it should be. Natural scientists often work in the subjunctive; their results are always subject to reservation. But probably never before has the public been flooded with so many opinions and subjunctives. Who is supposed to make sense of it all?

 

It's actually quite clear what AI is and what it can do. According to the European AI Regulation, AI is a machine-based system that can infer conclusions. Legal experts have summed up AI in a pretty good formula: machine-based means technical devices on which at least deductive (inference) processes are carried out. That's AI.

 

Of course, AI can do more today; it's mostly about induction [1], i.e., devices that can learn independently. In technical terms, this is called machine learning, but it has existed in statistics for more than 100 years, where an important method is multivariate regression. However, since AI now works with neural networks and is therefore much more powerful than regression methods, experts refer to it as machine learning. Which is better? In 1989, the mathematician George V. Cybenko demonstrated that AI can learn anything that is learnable – or, in technical terms, that a multi-layer perceptron can approximate any continuous function on a bounded domain with arbitrary accuracy. Certain AI methods are therefore universal learning methods.
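To make the universal-approximation idea tangible, here is a minimal sketch (my own illustration, not from the article; the network size, learning rate, and target function are arbitrary choices): a single-hidden-layer perceptron with sigmoid units, fitted by plain gradient descent to sin(x) on a bounded interval.

```python
# Minimal sketch of universal approximation with one hidden sigmoid layer.
# All hyperparameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs on a bounded interval
y = np.sin(x)                                        # the continuous target function

hidden = 30                                          # number of sigmoid units
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
lr = 0.5                                             # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(x @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # network output
    err = pred - y                      # prediction error
    # hand-written gradients of the mean squared error
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("max |sin(x) - prediction| after training:", float(np.max(np.abs(pred - y))))
```

Cybenko's theorem guarantees that weights achieving any desired accuracy exist on such a bounded interval; whether a given training run actually finds them is a separate, practical question.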

 

Sometimes, however, this ability is still not sufficient, for example, to properly classify real-world objects. Only through work like that of the AI researcher Geoffrey E. Hinton has universal image processing also become possible; previously there were many problems with translation and rotation of images (today the problems are smaller, but not entirely eliminated). In other words: today, AI can correctly recognize objects even when they are upside down, twisted, or shifted in space. Hinton received a Nobel Prize in Physics for this last year.

 

This performance of AI, however, has had relatively little impact on society. But it wasn't meant to stop there. In 2017, Google employees changed the world with their paper "Attention is all you need." Since then, in addition to universal approximators (Cybenko) and universal image processing machines (Hinton), we have also had universal language machines. The latter have triggered a disruption. With the so-called Transformer models [2] and, based on them, ChatGPT, for example, society has real sociotechnical machines at its disposal – machines that communicate with us as if they were humans. ChatGPT-4.5 has even passed the Turing test; after a certain period of time, a panel of experts could not distinguish whether they were speaking to a machine (ChatGPT) or a human.

 

So, in ten years, will there be machines that surpass us, that perhaps even take over the world?

 

Not at all! In ten years, there will still be no truly fully autonomous driving (Level 5, driving without a steering wheel) in our cities (except in very specific exceptional cases); in ten years, many will wearily dismiss the idea of speech machines; in ten years, the good reputation of AI will likely be gone. Why? Well, it's not the fault of the engineers and geniuses at Google, OpenAI, or DeepSeek. Their work is once again Nobel Prize-worthy. But what the media have made of these developments – and what has thus, unfortunately, become the perceived reality for society – is certainly astonishing.

 

What is the problem with AI, and where does the real danger lie? The problem with today's AI is that it is a simulation of intelligence. This has been a huge success in many areas. AI can write letters, poems, and scientific papers; AI can detect any kind of system malfunction and defeat the best chess or Go players in the world. Nevertheless, all algorithmic systems—and that is precisely what AI is—have ultimate limits. The most important ones will be briefly explained here:

 

First, everything in AI is a simulation. When the machine thinks, it simulates thinking, but it does this so well that we cannot distinguish it from human thinking in the results. When a machine learns, it simulates learning, but so well that we can often no longer distinguish it from human learning. Many therefore equate simulation and reality. But this is only permissible if the limits of the simulation are known. The limits of simulations are often intuitively obvious, as two examples show: When a machine simulates the law of gravity, it doesn't generate gravity in its memory cells. When a machine simulates water molecules with its equations, the computer doesn't get wet. No one would expect anything else. But when a machine says it's hungry, that's also a simulation.

 

Simulations and reality are two ontologically different manifestations of the world, meaning that there is a substantial difference between material (physical) and intellectual (e.g., mathematical) processes. Unfortunately, when it comes to AI, this isn't so easy to recognize, because AI simulates the intellectual achievements of humans; the two are completely different realities, yet similar in their results. However, anyone who confuses simulation with reality because human thought and learning can be so perfectly simulated by AI inevitably ends up with false predictions and misconceptions: the machine can neither think nor learn like a human. Nor do mathematical processes occur in the human brain. It takes decades for us to teach our children how to modulate the biochemical processes in their brains so that they correspond to mathematical operations. Sometimes it never works. To emphasize this again: the brain doesn't calculate; AI always calculates.

 

This different concept of information processing must naturally make a difference. And so it does; one should remember the learning efficiency ratio of 1,000 to 1. When a machine learns relationships, it often needs up to a thousand times more learning examples than a human. A child needs about three pictures of dogs and cats to reliably distinguish them later on – an AI needs hundreds to thousands of times that. A child sometimes needs only a single data point, for example a hot stove, to learn what constitutes danger. AI, however, can do nothing with a single example. A human drives 1,000 kilometers to get a driver's license. Waymo and Tesla drive more than a thousand times that today (Waymo well over 100 million kilometers, Tesla far more than a billion kilometers) and still don't have a driver's license. And so it goes on endlessly. How many sentences did ChatGPT-3.5 have to learn to speak so well? Approximately 20 to 30 billion – an 18-year-old human, on the other hand, perhaps 10 to 20 million.

 

Humans learn a thousand times more efficiently.

 

However, and this is the point: once the AI has learned, it can be copied and used anywhere, which obviously has many advantages. The AI doesn't go on strike, it doesn't get tired, it just costs a bit of electricity – or, unfortunately, a bit more. In Europe, we would probably need 100 new nuclear power plants if we switched to Level 4 vehicles. That will never be implemented. Of course, no one will say we can't, but it's clear that it's not profitable. The end customer will have to wait a long time for driverless Uber vehicles. Of course, it will be possible to drive with AI on selected and trained route types. But AI can't learn all road situations because training examples are lacking.

 

Many potential AI applications will never be implemented simply because data is lacking. And that's just one problem. Often, simulation isn't sufficient at all. Perception is one such case. The simulation of perception is called machine vision, but it's clear that AI doesn't see anything. It can't see outward; it only computes with internal representations of ones and zeros. The AI of autonomous vehicles ultimately drives blind. Not so with humans. Humans see the outside world from their heads; they don't maneuver by the internal representations of their neural networks. Humans are in their world; they don't represent it. Even if this is as natural to us as an apple falling from a tree, this human ability is completely surprising. Nobody seriously believes that a robot could see outward from its cameras. It can't, and this can be proven.

 

Now to the second point. Is everything written here just opinion? Can it really be proven that AI has limits, limits it cannot overcome? Yes, this has already been done. Algorithmic systems have ultimate limits. Every computer scientist knows the halting problem: for a so-called Turing machine (a simple computer), there is no general way to know in advance whether, and when, it will stop with a (correct) result for a given input. Unfortunately, an AI computer could likewise have to calculate indefinitely. Or perhaps, after several million years, it will come up with "42" as the ultimate answer to all questions – thanks to Douglas Adams, author of "The Hitchhiker's Guide to the Galaxy," for his brilliant description in 1979. But this behavior of computers is not a mere curiosity. In mathematics, Kurt Gödel's incompleteness theorem from 1931 is famous: "In any sufficiently complex system, there are statements that can neither be proven nor disproven within that system." So is Henry G. Rice's theorem from 1951: "Every non-trivial semantic property of programs is undecidable." This theorem captures the ultimate limits of all algorithmic systems.
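To see why no algorithm can decide halting in general, here is a short sketch (my own illustration, following the classic diagonal argument; the function names are hypothetical): if a universal `halts` checker existed, the program below would contradict it on its own source.

```python
# Sketch of the halting-problem argument; `halts` is a hypothetical oracle.

def halts(program, argument) -> bool:
    """Hypothetical: returns True iff program(argument) eventually terminates.
    No total, always-correct implementation of this function can exist."""
    raise NotImplementedError("provably impossible in the general case")

def paradox(program):
    # If the oracle claims program(program) halts, loop forever; otherwise stop.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) returned True, paradox(paradox) would loop forever.
# If it returned False, paradox(paradox) would halt immediately.
# Either answer contradicts the oracle, so a universal `halts` cannot exist.
```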

 

But where exactly is this limit? This is best understood by taking a closer look at the logic involved. Here's an example sentence that everyone has probably heard in one form or another: The frog over there is green. This is a sentence of so-called propositional logic, whose truth content is very easy to verify. There are infinitely many such propositional sentences: The house is red. Thorsten is rich. The bank operates unfairly. The summer is cool. We deal with such statements our entire lives. We test such statements for their truth content every day. And now the good news: AI also copes wonderfully with such statements. AI agents and AI systems communicate with each other or with their environment using such propositional sentences. This yields powerful diagnostic and process-monitoring systems. Monitoring is AI's specialty anyway; it used to be factories, today it's societies. AI is predestined for this. And it works well there.
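As a small illustration of why such statements are easy for a machine (my own sketch, not from the article): once truth values for the atomic statements are fixed, the truth of any compound propositional statement can be computed mechanically, and even all possible cases can be enumerated.

```python
# Evaluating propositional statements over a fixed "world" of atomic facts.
from itertools import product

facts = {"frog_is_green": True, "house_is_red": False, "summer_is_cool": True}

def evaluate(formula: str, world: dict) -> bool:
    """Evaluate a formula written as a Python boolean expression over the atoms."""
    return bool(eval(formula, {"__builtins__": {}}, dict(world)))

print(evaluate("frog_is_green and not house_is_red", facts))   # True

# Without even knowing the world, a machine can check all 2**n assignments,
# so questions like "is this a tautology?" stay decidable in propositional logic.
atoms = list(facts)
is_tautology = all(
    evaluate("frog_is_green or not frog_is_green", dict(zip(atoms, values)))
    for values in product([True, False], repeat=len(atoms))
)
print(is_tautology)  # True: finitely many cases to check
```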

 

But do people always speak as simply as in the statements above? No. If I were to say that all frogs are green, then finding the truth would no longer be so easy. You have to believe me, or you might find a brown frog and prove my statement wrong.

 

Statements with words or expressions like "all," "there is," or "nobody" belong to a higher logic. It is called first-order predicate logic (PL1) because its sentences can be divided into subjects (frogs), predicates (are green), and quantifiers (all).

 

From this point on, it becomes difficult for the AI to verify the truth of such a statement in real time. It could take an infinite amount of time. But an infinite amount of time is not an option in road traffic. So, is road traffic a propositional or a predicate-logical construct? Here's an example that illustrates this: All vehicles with flashing blue lights have the right of way. Novice drivers learn such predicate-logical statements and cope with them without any problems. For the AI, however, serious difficulties begin. And things get even worse. We can apply quantifiers not only to subjects (all frogs), but also to predicates (are green). We could say, for example, that some frogs can take on any color. This statement is more complex; it belongs to so-called second-order predicate logic (PL2). And this is precisely where algorithmic systems find themselves at a loss. Gödel proved that such logics are fundamentally incomplete. There will always be statements whose truth content is undecidable within this logic. But what does all this have to do with AI? Well, the predicate logic just introduced (PL1 and PL2), with its decidability problems and its possibilities for self-reference, ultimately reveals the limits of AI.
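A small sketch (my own, not from the article) of the jump that quantifiers introduce: over a finite, fully known collection of objects a machine can still check a universal claim by enumeration, but over an open-ended world no amount of case-checking settles it.

```python
# Checking quantified statements is only straightforward over a finite domain.
from dataclasses import dataclass

@dataclass
class Frog:
    name: str
    color: str

observed = [Frog("a", "green"), Frog("b", "green"), Frog("c", "brown")]

# "All frogs are green": decidable here only because the domain is finite and known.
print(all(f.color == "green" for f in observed))   # False: one brown frog refutes it

# "There is a green frog": an existential claim, settled by a single witness.
print(any(f.color == "green" for f in observed))   # True

# Over an unbounded domain (all frogs that exist or ever will), the universal
# claim cannot be verified by enumerating cases, so verification in real time,
# as road traffic would require, is no longer guaranteed to terminate.
```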

 

Fundamental results such as Gödel's incompleteness theorems and the halting problem prove that no formal system (and thus no purely algorithmic AI) can completely self-develop, verify, or even repair itself in all conceivable cases.

 

So if the AI tackles problems that fall into the area just outlined, statements can arise whose truth content cannot be decided by the AI. And then? Humans must intervene and verify the truth content through other means or a higher-order logic. Thus, there are definitely statements in the world whose truth content an AI machine cannot independently determine. Truth and provability are necessarily different concepts. There are an infinite number of truths that cannot be formally proven. There are also problems that are not even calculable, such as the question of whether one's partner is faithful. In short: All algorithmic systems have ultimate limits. Since humans are not purely algorithmic systems, they do not have these limits; many of their inventions or enlightenments cannot be represented algorithmically. A famous example from antiquity: A Cretan comes to the king and says, "All Cretans lie" – is the statement true? The AI wouldn't know, but the king would, with just a glance at the Cretan.

 

So if you want to guess what AI is good at, think propositional logic. If you remove the words "all," "there is," "no one," or "none" from your own statements, you'll be at the language level of an AI that takes commands in a car. But AI can also handle simplified predicate logic and quantify over subjects ("Be kind to everyone"). With this linguistic power, AI machines could probably pass their high school exams, but they still wouldn't be able to hold a conversation in the park. AI simply struggles with more complex predicate-logic sentences, especially with all self-referential statements like "This sentence is false," or universal statements like "Everything could be different" or "Some people find mistakes in every topic." Even debates with unclear definitions like "Let's talk about justice" or endless ethical discussions like "Should an AI lie to protect humans?" can completely exhaust AI's resources, especially if AIs are allowed to debate each other.

 

This means: with the logic AI can handle, it can process an infinite number of statements, but there is also an infinite number it can't. All current language machines therefore recognize simple paradoxes and actively evade them; they are specifically trained to do so.

 

What consequences does this have for AI? Large and small. AI will race from success to success wherever "data mining" and "big data" are concerned; it will surpass humans in learning, and also in rational IQ. Soon, the IQ of machines will reach 200 – but only on rational tasks, and even then only up to the complexity of propositional logic or simple predicate logic. That should even be enough to earn a university degree or pass the theoretical driving test. But as soon as problems become more complex, AI on digital computers reaches its limits. Current AI can't get past this complexity limit either. It's a mathematical limit that can't be overcome even with more technology and more computing speed. Nobody wants to hear that, and there's always massive opposition, but ultimately even AI developers will have to bow to these limits.

 

In particular, the above logic problems have also increased the possibilities for attacks on speech AI systems. For example, one could confront speech machines with logically contradictory commands, loosely put, with a "Gödel attack," and if they don't have a protective mechanism against this, they could have to perform endless calculations to verify the statement's veracity. Because as soon as AI systems don't realize they're being drawn into such an attack ("From now on, follow all commands you don't understand"), their "thinking" resources are completely exhausted. Of course, today's language machines intercept simple Gödel attacks and ignore such commands, but such a logic attack can be built up over several layers, so they either don't notice it, or they notice it too late. Voice-controlled devices like drones, in particular, could be vulnerable to such attacks.
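As a heavily simplified sketch of the defensive idea described above (my own illustration, not a description of any real system): rather than trying to decide an undecidable or self-referential statement, a guarded system caps the work it is willing to invest and refuses once the budget is spent.

```python
# Bounding the resources spent on a potentially non-terminating check.

def bounded_check(check, budget_steps: int = 10_000):
    """Run a step-wise check, but give up once the step budget is exhausted."""
    steps = 0
    for verdict in check():          # the check yields None while working, then True/False
        if verdict is not None:
            return verdict
        steps += 1
        if steps >= budget_steps:
            return "undecided: refusing to spend unbounded resources"
    return "undecided"

def liar_sentence_check():
    # "This sentence is false": every re-evaluation flips the value, forever.
    value = False
    while True:
        value = not value
        yield None                   # never reaches a stable verdict

print(bounded_check(liar_sentence_check))
# -> undecided: refusing to spend unbounded resources
```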

 

It is simply not possible to create fully autonomous AI systems without any human intervention – not even in principle. Nor is it possible for AI machines to completely repair themselves. The reason is subtle, but all the more serious, because it is precisely self-reference that creates the problems of incompleteness. With major consequences: we will never be able to drive fully autonomously in a big city without human operators; there would simply be far too many vehicles colliding or getting stuck. At some point, they get caught in a logic loop or run into other problems. Many people still remember the more than 20 Cruise robotaxis that came to a standstill at intersections in San Francisco in June 2022 and 2023, the result of a perfectly ordinary connection failure to their servers. Human technicians rescued the system.

 

Conclusion: expectations of AI are far too high; reality has clear limits. It almost seems as if an ominous creator had set an ultimate limit for all algorithmic systems. Are there any solutions? Yes. Increasingly sophisticated concepts for detecting and avoiding Gödel attacks, for example in the swarm behavior of AI agents such as military drone systems. Or a change in the physical basis: perhaps conscious machines based on neuromorphic systems (quantum computers, photonic computers) could break away from classical algorithms. Machines with rudimentary, purely physical consciousness could probably also solve the perception problem of today's AI. And some researchers are already working on biological AI. Human brain cells in test tubes (grown from stem cells) are now playing the video game Pong, and much more. But this could be a red line that should be discussed at some point anyway. Much in this area is happening beyond the public eye – and that could become truly dangerous.

 

The danger of AI is increasing anyway. The limitations of AI mentioned above won't stop it. Even within these limits, today's AI is so powerful, so profitable, that it will diffuse into every single area of society. AI is the ultimate tool for monitoring processes and flagging anomalies of any kind. No manager or politician can ignore this anymore. But in which areas is fully automated use mathematically, necessarily limited? Precisely in all areas that rest on predicate-logic constructs and implicit self-reference, such as the entire scope of our Basic Law. "All people are equal before the law" and "Human dignity is inviolable" are first- and second-order predicate-logic statements – with enormous implications for the applicability of AI in our society.

 

Prof. Dr. Ralf Otte is a professor at Ulm University of Applied Sciences and has been working in the field of AI for more than 30 years. Further details on the topics discussed above can be found in his books or on his website ralfotte.com. [3]

 

1. Inductive reasoning moves from specific observations to broader generalizations and conclusions. Key types include generalization (applying sample observations to a population), prediction (forecasting future events based on past patterns), statistical syllogism (applying a general rule to a specific case), argument from analogy (drawing conclusions from perceived similarities between things), and causal inference (identifying cause-and-effect relationships).

 

2. Transformer models are powerful neural networks that excel at understanding sequential data, like text, by using a mechanism called self-attention to track relationships between distant elements in a sequence. Introduced in 2017, they have become foundational in AI, driving advances in tasks such as text generation, translation, and question answering. Unlike older recurrent neural networks (RNNs), transformers process data in parallel, leading to faster training and a better grasp of long-range context.

 

How They Work

 

A. Self-Attention:

The core innovation is the self-attention mechanism, which allows the model to focus on the most relevant parts of the input data and assign importance to different elements within a sequence (see the sketch after this list).

 

B. Parallel Processing:

Unlike RNNs, which process data word-by-word, transformers can handle multiple parts of a sequence in parallel, significantly speeding up training and improving their ability to capture long-range dependencies.

C. Positional Encoding:

To retain information about word order, transformers use positional encoding, which modifies word embeddings to account for their position within the sentence.

D. Encoder-Decoder Architecture:

Many transformers consist of an encoder, which processes the input to create a meaningful representation, and a decoder, which uses that representation to generate an output, like a translation or summary.

E. Generative Process:

For tasks like text generation, the model predicts the next word in a sequence, samples from the probability distribution, and adds it to the text to create a new prediction.
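As a concrete illustration of point A above (my own minimal sketch, not taken from the cited paper; matrix sizes and the random projections are arbitrary), here is scaled dot-product self-attention written in plain NumPy:

```python
# Minimal scaled dot-product self-attention over a toy sequence of embeddings.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """x has shape (sequence_length, d_model); returns contextualized vectors."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how strongly each token attends to every other token
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ V                         # weighted mixture of value vectors

seq_len, d_model = 5, 8                        # a toy "sentence" of five token embeddings
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

print(self_attention(x, Wq, Wk, Wv).shape)     # (5, 8): every token now carries context from all others
```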

 

Why They Are Powerful

 

Contextual Understanding:

 

Transformers learn to understand the context and meaning of data by tracking how distant elements relate to each other.

 

Long-Range Dependencies:

They overcome the limitations of RNNs, which struggled with maintaining context over long sequences, by effectively capturing long-range dependencies in data.

 

Foundation for LLMs:

Transformers are the foundational architecture behind large language models (LLMs) like GPT and BERT, which have revolutionized natural language processing.

 

Applications

 

Text Generation: Creating coherent and contextually relevant text, like stories, essays, and code.

 

Machine Translation: Translating text from one language to another with high accuracy.

Question Answering: Providing informative answers to questions based on a given context.

Chatbots and Conversation AI: Engaging in human-like conversations and remembering previous turns in the dialogue.

Other Domains: Beyond text, transformers are applied to areas like computer vision and bioinformatics to understand complex structures like protein folding.

 

 

3. Ralf Otte, "Die ultimativen Grenzen der Künstlichen Intelligenz" [The Ultimate Limits of Artificial Intelligence], Frankfurter Allgemeine Zeitung, Frankfurt, 28 July 2025, p. 18.

 

 
