"The goals of politically motivated cyber operations are sabotage, espionage and influence. This is also clearly evident after the attack on Israel. What that means - and what helps.
The increase in cybercrime and state cyber operations is a seemingly unavoidable side effect of digitalization. When they hear the term "cyberattack," most people probably think of phishing messages via email or SMS that try to entice them to click on a link without thinking. Or of ransomware that encrypts and steals data. Or of so-called denial-of-service attacks that knock websites offline. Many also remember that they should urgently back up the data on their own laptop and choose better passwords. Political influence through deliberately released information, both true and fabricated, rarely comes to mind, although alongside espionage and sabotage it is one of the central goals of politically motivated actors, from activists to intelligence services to terrorist groups.
The information released is usually disinformation - i.e. fictitious or manipulated messages and images - or stolen personal or confidential information. What impact does this type of influence have on people’s opinions?
Studies show that users primarily believe news that confirms their own opinions and prejudices. At the same time, the more widespread a piece of information is, the more credible it appears: if all the bloggers a user follows claim something, it must, it seems, be true and important. Not only people but also the ranking algorithms of social networks follow this logic. How effectively cyberspace can be used for illegitimate influence became clearly visible in the 2016 American presidential election campaign. Hackers captured and published confidential emails from Hillary Clinton's presidential campaign, and cyber trolls flooded social networks with disinformation.
More disinformation thanks to digitalization and advances in AI
The impact of digitalization on disinformation is currently visible in the flood of false news about Israel's fight against the terrorist organization Hamas. But disinformation has been around for a long time. Historical examples show how immense the damage can be when disinformation stirs up sentiment against minorities and people of other faiths, and how long-lasting its effects can be. This is particularly true of anti-Semitism. The ritual murder legend cost the lives of thousands of Jews in the Middle Ages; only slightly altered, it can be found today in Telegram messages and blog entries from followers of the QAnon conspiracy theory.
The legend of a Jewish world conspiracy goes back to the "Protocols of the Elders of Zion," a pamphlet that first appeared in Russia in 1903. It not only shaped the ideology of the Nazis, who murdered six million Jews: according to a study by the Konrad Adenauer Foundation, 15 percent of Germans today still consider the claim that rich Jews are the real rulers of the world to be partly or completely true.
As in the Middle Ages, real people still stand behind today's disinformation. But digitalization is revolutionizing how disinformation is produced and distributed, just as it is revolutionizing the possibilities of using cyberattacks to obtain personal and confidential data that can be used to exert influence.
Deceiving the network operators
Today, disinformation is distributed primarily via social media. Platforms like Telegram, TikTok, X (formerly Twitter), Facebook, Instagram and YouTube have an immense influence on billions of users around the world. Many people get their information primarily from these media, and journalists, too, increasingly draw on these sources.
The platforms' core mechanism is to amplify content that attracts heavy user interaction. Such content is often designed to appeal to instincts and trigger strong emotional responses. In this digital environment, conspiracy theories and disinformation easily gain popularity and then spread much faster than scientific findings and real news.
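To make this amplification logic concrete, here is a minimal sketch in Python - our illustration, not any platform's actual ranking algorithm; the posts and weights are invented:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    """Toy ranking score: shares spread content furthest, so they weigh most.
    Real platforms use far more signals, but the amplification logic is similar:
    more interaction -> higher rank -> more visibility -> more interaction."""
    return 1.0 * post.likes + 5.0 * post.shares + 2.0 * post.comments

posts = [
    Post("Sober fact-check with sources", likes=40, shares=2, comments=5),
    Post("Outrage-bait conspiracy claim", likes=35, shares=30, comments=60),
]

# The feed shows the highest-scoring content first: emotionally charged
# posts that provoke interaction win, regardless of their truthfulness.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```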
For a message to be heard on social media, you need a network of followers who like and spread it. Many of the followers who like disinformation are indeed real people, acting partly out of conviction and partly because they are paid to do so. Increasingly, however, artificial intelligence (AI) is used to generate large numbers of artificial followers very quickly. To prevent these "bots" from being recognized as such by the network operators and deleted, their creators use AI to simulate the behavior of real users. Anyone who knows ChatGPT can easily imagine how this is done - although today much simpler approaches suffice to deceive the network operators.
AI can generate disinformation
Most disinformation today - news items, posts, blogs and comments - is still produced manually. But the importance of AI is growing. AI tools like ChatGPT can vary texts quickly and at scale and thus tailor them to specific groups or individuals, analogous to the methods of the advertising industry.
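How cheaply such variation can be produced is easy to demonstrate. The following sketch uses nothing but simple templating - the phrase fragments are invented placeholders, and generative tools like ChatGPT would produce far more natural variants:

```python
from itertools import product

# Invented phrase alternatives; each slot multiplies the number of variants.
openers = ["Unbelievable:", "The media won't tell you this:", "Just leaked:"]
claims = ["the report was faked", "the photos were staged", "the numbers were invented"]
appeals = ["Share before it's deleted!", "Wake up!", "See for yourself."]

# Every combination of fragments yields a superficially distinct message.
variants = [f"{o} {c}. {a}" for o, c, a in product(openers, claims, appeals)]

print(f"{len(variants)} variants from 9 phrase fragments")  # 27
print(variants[0])
```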
The more variations of a message appear, the more credible it seems, and the higher it is ranked on social media. Images, audio and video recordings can also be generated artificially with freely accessible AI tools. Such "deep fakes," however, do not currently play a major role. Instead, genuine recordings are used that were taken in a completely different context, or recordings are retouched by hand. Pictures posed with actors are surprisingly common.
While AI still plays a relatively minor role in the generation of disinformation today, it is now quite often used as an excuse to deny the truth. For example, many Hamas supporters deny the atrocities of the October 7th pogrom - despite the abundance of evidence from a wide range of sources - by declaring the recordings of the acts and victims to be "deep fakes".
Media-forensic tools
However, digitalization also helps to identify disinformation. Operators use AI to distinguish bots from real people. AI-generated images can often be spotted with the naked eye, because small details such as fingers or the position of arms and legs are frequently wrong. But even well-made forgeries can usually be exposed with media-forensic tools. Some tools give a rough assessment of a text by automatically searching the Internet for corroborating and refuting texts. Others extract as much as possible about a text's author and tone. Still others check whether a text or image has been taken out of context, for example whether it appeared before, identically or with slight variations, in a different news context. These tools are still quite complex and error-prone and are aimed primarily at journalists, operators of news sites and social media, and fact-checkers, i.e. organizations that research and assess the truthfulness of popular claims from social media.
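As one illustration of the out-of-context checks mentioned above: a perceptual hash can reveal that an image circulated before, even after resizing or recompression. A minimal sketch, assuming the Pillow and imagehash Python libraries; the archive, file names and distance threshold are hypothetical:

```python
from PIL import Image   # pip install pillow imagehash
import imagehash

# Hypothetical archive of previously published images and where they appeared.
archive = {
    "2014-conflict-report.jpg": "News agency photo, different conflict, 2014",
    "2019-flood-coverage.jpg": "Flood coverage, 2019",
}

def find_prior_use(query_path: str, max_distance: int = 8) -> list[str]:
    """Compare the query image's perceptual hash against the archive.
    Small Hamming distances survive resizing, cropping and recompression,
    so a match suggests the image was lifted from an older, different context."""
    query_hash = imagehash.phash(Image.open(query_path))
    matches = []
    for path, context in archive.items():
        distance = query_hash - imagehash.phash(Image.open(path))  # Hamming distance
        if distance <= max_distance:
            matches.append(f"{context} (distance {distance})")
    return matches

print(find_prior_use("viral-post-image.jpg"))  # hypothetical query image
```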
AI can also be used to monitor what information and disinformation can be found on social and traditional media. Tools of this kind have long been used by companies to find out what the public is saying and thinking about them. Such tools can help to take targeted action against disinformation.
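Reduced to its simplest form, such monitoring amounts to counting how often tracked claims appear over time and flagging sudden spikes. A toy sketch with invented posts and claims - real monitoring tools add language analysis, clustering and source assessment on top:

```python
from collections import Counter

# Invented sample stream: (date, post text) pairs.
posts = [
    ("2023-11-01", "the ambulance strike proves a war crime"),
    ("2023-11-01", "footage shows the ambulance strike"),
    ("2023-11-02", "ambulance strike outrage grows"),
    ("2023-11-02", "hospital parking lot hit by rocket"),
]

tracked_claims = ["ambulance strike", "hospital"]

# Count mentions of each tracked claim per day; sudden spikes can point
# to a coordinated campaign that deserves a closer, manual look.
mentions = Counter()
for date, text in posts:
    for claim in tracked_claims:
        if claim in text.lower():
            mentions[(date, claim)] += 1

for (date, claim), n in sorted(mentions.items()):
    print(f"{date}  {claim!r}: {n} mention(s)")
```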
Well-crafted disinformation always contains a grain of truth. The disinformation arises when a story that cannot be immediately refuted is built around this grain, or when the truth is placed in a different context. The tactic of wrapping disinformation in half-truths is aided on social media by the fact that messages there are usually, and often must be, very short and compact. If a message consists only of a picture and a few sentences, not everyone notices when context is missing, important aspects are omitted, or half-truths are being spread.
However, disinformation is by no means found only on social media; established media also spread it, knowingly or unknowingly. There are surprisingly many examples in current reporting on the war in the Middle East: On November 4th, the Israeli army confirmed an attack on an ambulance that Hamas had misused to transport terrorists and weapons. In many news reports the explanatory sentence was omitted - a piece of information became disinformation, a permissible attack on terrorists became a crime against the civilian population. The same happens when the civilian casualties of Israeli attacks are reported without mentioning that they occur because Hamas uses the civilian population as a shield; under international law, whoever uses civilians as a protective shield is responsible for their deaths.
These examples also show that the transition between disinformation and prejudiced or dubious reporting is fluid. This was particularly clear in the reporting on the alleged destruction of a hospital in Gaza on October 17th. Instead of an Israeli bomb, as was initially reported, a Palestinian rocket was actually the cause, and it wasn't a hospital that was hit, but a parking lot next to it. Unfortunately, many journalists initially accepted Hamas' claims. Afterwards, Hamas and Israel were sometimes put on the same level, suggesting that terrorists were just as credible as representatives of a democratic constitutional state. This type of reporting is contributing to the rise of anti-Semitism across Europe.
Disinformation and Cognitive Warfare
The influence exerted through digital disinformation and discrediting information is just one aspect of a much larger development now often summarized under the term cognitive warfare. Cognitive warfare attempts to influence the opinions, feelings, thinking and behavior of the other side and its allies in one's own interest, or, conversely, to ward off such influence. Instead of infiltrating as many spies as possible into key positions in state and society, as in the Cold War, where they could collect information and influence the opponent's politics and opinions, the focus today is on digital means: foreign systems are hacked, and social and traditional media are deliberately supplied with suitable information and disinformation. Expectations for cognitive warfare are high. Some even believe it will replace kinetic wars.
It is suspected that many cyberattacks that yielded large amounts of confidential information were also meant to prepare influence operations. In 2015, for example, Chinese hackers penetrated the IT systems of the American government's Office of Personnel Management (OPM) and stole approximately 22.1 million records, many of which contained personal information about government employees and others who had undergone security clearance checks. It takes little imagination to see how such information could be used for cognitive warfare.
In conclusion: Disinformation is an old phenomenon, and there are many examples of how it destabilizes societies and leads to racism and violence against those who think differently. The anti-Semitism currently growing in Germany to an unimaginable extent is also fueled by disinformation about the war between Israel and the terrorist organization Hamas. It is therefore essential that everyone understands how disinformation works and how to recognize and avoid it. Media consumers must learn to question news and watch for signs of disinformation. A message does not become true because a user wants to believe it.
Haya Schulmann is a professor of cybersecurity at Goethe University and a member of the board of directors of the National Research Center for Applied Cybersecurity ATHENE.
Michael Waidner is professor of IT security at TU Darmstadt and CEO of ATHENE." [1]
1. Haya Schulmann and Michael Waidner. "Von Desinformation zur kognitiven Kriegsführung." Frankfurter Allgemeine Zeitung (online), Frankfurter Allgemeine Zeitung GmbH, Nov 13, 2023.