
Saturday, April 20, 2024

Do Tanks and Airplanes Have a Place in 21st-Century Warfare?


"As explosive drones gain battlefield prominence, even the mighty U.S. Abrams tank is increasingly vulnerable.

The drone combat in Ukraine that is transforming modern warfare has begun taking a deadly toll on one of the most powerful symbols of American military might — the tank — and threatening to rewrite how it will be used in future conflicts.

Over the last two months, Russian forces have taken out five of the 31 American-made M1 Abrams tanks that the Pentagon sent to Ukraine last fall, a senior U.S. official said. At least another three have been moderately damaged since the tanks were sent to front lines early this year, said Col. Markus Reisner, an Austrian military trainer who closely follows how weapons are being used — and lost — in the conflict in Ukraine.

That is a sliver of the 796 Ukrainian main battle tanks that have been destroyed, captured or abandoned since the war began in February 2022, according to Oryx, a military analysis site that counts losses based on visual evidence. The vast majority of those are Soviet-era, Russian- or Ukrainian-made tanks; only about 140 of those taken out in battle were given to Ukraine by NATO states.

German Leopard tanks have also been targeted in Ukraine, with at least 30 having been destroyed, Oryx says. But the Abrams is widely viewed as one of the world’s mightiest. That it is being more easily taken out by exploding drones than some officials and experts had initially assumed shows “yet another way the conflict in Ukraine is reshaping the very nature of modern warfare,” said Can Kasapoglu, a defense analyst at the Hudson Institute in Washington.

This weekend, the U.S. House of Representatives is scheduled to vote on a $61 billion aid package for Ukraine that will include desperately needed defensive weapons. Here is a look at why it matters for tanks.

A highly accurate, low-cost tank killer

Despite their power, tanks are not impenetrable, and they are most vulnerable where their heavy plated armor is the thinnest: on the top, the rear engine block and the space between the hull and the turret. For years they were mainly targeted with land mines, improvised explosive devices, rocket-propelled grenades and anti-tank guided missiles, like “shoot and scoot” shoulder-fired systems. These were widely used early in the Ukraine conflict because they could strike tanks from above and hit them up to 90 percent of the time.

The drones that are now being used against tanks in Ukraine are even more accurate. Known as first-person view drones, or FPVs, they are equipped with a camera that streams real-time images back to their controller, who can direct them to hit tanks in their most vulnerable spots. In several cases, the FPVs have been sent in to “finish off” tanks that had already been damaged by mines or anti-tank missiles so that they could not be retrieved from the battlefield and repaired, Colonel Reisner said.

Depending on their size and technological sophistication, the drones can cost as little as $500 — a paltry investment for taking out a $10 million Abrams tank. And some of them can carry munitions to boost the impact of their blast, said Colonel Reisner. These could be rocket-propelled grenades, he said, or self-forging warheads known as explosively formed penetrators, or EFPs, that were widely used in roadside bombs during the war in Iraq. Colonel Reisner has collected videos of tanks in Ukraine being chased down by the drones or drones flying into their open turrets.

“Welcome to the 21st century — it’s unbelievable, actually,” said Colonel Reisner, a historian and former armor reconnaissance officer who oversees Austrian forces’ training at the Theresian Military Academy.

No easy, or single, way to defend

In November, within weeks of receiving the Abrams tanks, President Volodymyr Zelensky of Ukraine said, “It is difficult for me to say that they play the most important role on the battlefield. Their number is very small.”

Some officials and experts believe Ukraine’s commanders had planned to save the Abrams for future offensive operations next year and resisted sending them to the front lines, where they risked losing the few they had. Instead, the tanks deployed early this year with the American-trained and equipped 47th Mechanized Brigade as Ukraine sought but failed to maintain control of Avdiivka, a stronghold in the eastern Donbas area that fell to Russian troops in February.

Colonel Reisner said drones, potentially including FPVs, may have been able to pick off the Abrams tanks because the 47th Brigade did not appear to have the protection of short-range air defense systems like the self-propelled, German-designed Gepard cannons that help safeguard Kyiv.

FPVs can be stopped with jammers that disrupt their connection to the remote pilot. Shotguns and even simple fishing nets have been used to destroy or catch some of them on Ukraine’s battlefields.

“At this stage, the most effective means used to defeat FPVs is electronic warfare and various types of passive protection,” including additional armor and other kinds of shielding on the tanks, said Michael Kofman, a senior fellow in the Russia and Eurasia program at the Carnegie Endowment for International Peace in Washington. He said defeating FPVs required a “tailored approach on the battlefield”.

But Colonel Reisner suggested that Ukraine was so desperate for air defenses that it was depriving tanks of full protections by sending Gepards or other short-range antiaircraft weapons that would traditionally deploy to the front lines to instead protect cities and critical infrastructure.

A spokeswoman for the 47th Brigade did not respond to requests for comment, and Ukraine’s Defense Ministry declined to discuss the issue. But other Ukrainian troops said they had only rarely used advanced surface-to-air missiles or other air defense systems against the FPVs, given that those weapons are usually needed to shoot down jets and helicopters. And some experts doubted they would be effective because the drones are too small and fast to be hit or picked up on radar.

Some militaries are already testing laser beams that could destroy attacking drones by essentially burning them with energy, said David M. van Weel, NATO’s assistant secretary general for emerging warfare. Such so-called directed energy weapons are likely to be cheaper and in larger supply than other kinds of ammunition, and would be able to hit small targets like FPVs. But, as with all emerging warfare, it is only a matter of time before countermeasures are invented to defuse even weaponized lasers, Mr. van Weel said in an interview on Friday.

So are tanks obsolete?

Colonel Reisner said military engineers had sought new ways to destroy tanks for as long as they have been used on the battlefield and that the FPVs did not render the Abrams and other advanced tanks like the German Leopards obsolete in Ukraine.

“If you want to seize terrain, you need a tank,” Colonel Reisner said of the single most lethal weapon in ground warfare.

But he added that the FPVs were a key part of what some analysts believed would drive future warfare underground, with remote-controlled weapons fighting it out on the surface. In this circumstance, soldiers would direct weapon systems from nearby underground bunkers to ensure they could maintain line-of-sight and radio-frequency links to the weapons.

Such land battles could largely pit first-person view drones against unmanned ground vehicles, Colonel Reisner said: “They will be fighting each other like in ‘The Terminator.’”" [1]

Nice dreams and fantasies. The old theory of conflict was simple: airplanes control the air and destroy everything on the ground; tanks move into the emptied space and take over the terrain; both groups of these expensive machines coordinate between themselves. What could go wrong? So why is nobody fighting this way anymore?

Small, low-flying, cheap drones destroy the tanks. Missiles destroy the airplanes. People hide. Nothing that used to work still works. Huge sums of money are wasted, since both tanks and airplanes are very expensive these days.
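
The arithmetic behind that waste is brutal. A back-of-the-envelope calculation using the figures quoted above (a roughly $500 FPV drone against a roughly $10 million Abrams; the 1-in-50 hit rate is an illustrative assumption, not a sourced number):

    drone_cost = 500            # USD, low end quoted above for an FPV drone
    tank_cost = 10_000_000      # USD, rough cost of an M1 Abrams, per the article

    print(f"One tank buys {tank_cost // drone_cost:,} drones")  # 20,000 drones

    # Even if only 1 drone in 50 gets through, the exchange still favors
    # the attacker by a factor of 400:
    cost_per_kill = 50 * drone_cost
    print(f"Cost per tank killed: ${cost_per_kill:,}")          # $25,000
    print(f"Attacker's cost advantage: {tank_cost // cost_per_kill}x")  # 400x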

1. Jakes, Lara. "Do Tanks Have a Place in 21st-Century Warfare?" New York Times (Online), New York Times Company, Apr. 20, 2024.

 


Here Come the Anti-Woke AIs --- With Meta releasing its latest open-source AI, a new generation of models that lack guardrails stands to become more powerful than ever. They come with a host of pitfalls


"As artificial intelligence becomes more powerful by the day -- Meta Platforms just released its latest model -- an important question grows more pressing: Whose values should it embody?

On one end of a spectrum of debate that maps roughly to America's contentious politics are companies like OpenAI, Microsoft and Google. For a host of reasons both reputational and legal, these tech giants are carefully tuning their AIs to avoid answering questions on sensitive topics, such as how to make drugs, or who is the best candidate for president in 2024. When these systems do answer questions about contentious issues, they tend to give the answers least likely to offend users -- or most of them, anyway.

Such fine-tuning of today's most powerful AI models has led to a number of controversies, and accusations that they are biased. The most recent and memorable: in February, Google shut down its AI's ability to generate images of people, after an outcry over how that system handles race in historical images.

To counterbalance what some believe are the biases of consumer-facing AIs from big tech companies, a grassroots effort to create AIs with few or no guardrails is under way. The goal: AIs that reflect anyone's values, even ones the creators of these AIs might disagree with.

A key enabler of these efforts are companies that train and release open-source AIs. These include Mistral, Alibaba -- and Meta. Each model seems to have been built with a different philosophy in mind. Some of Mistral's have had relatively little fine-tuning. And all open-source models can have their fine-tuning undone, a process that's been demonstrated with models from Meta.
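
As a rough sketch of what this kind of downstream re-tuning can look like, assuming the Hugging Face transformers and peft libraries (the checkpoint name and training data are placeholders, not any company's actual pipeline):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"  # placeholder open-weights checkpoint
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    # Attach small trainable LoRA adapters. The released weights stay frozen,
    # but training the adapters on new instruction data can steer -- or
    # effectively undo -- the fine-tuning shipped with the checkpoint.
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(model, config)
    model.print_trainable_parameters()
    # ...then train on whatever data the downloader chooses, e.g. with
    # transformers.Trainer or trl's SFTTrainer.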

With a steady drumbeat of releases of ever more powerful AIs anticipated -- including GPT-5 from OpenAI, and Meta's new Llama 3, which will also be available in all of the company's core products -- we may come much closer to AIs that are able not only to act on our behalf, but also to do things their makers could never have anticipated.

"That's where this question starts to be really important, because the models could go off and do things, and if they don't have guardrails, it's potentially more problematic," says John Nay, a fellow at Stanford's CodeX center for legal informatics. "We are potentially at a precipice and we don't really know it."

John Arrow and Tarun Nimmagadda are co-founders of Austin, Texas-based FreedomGPT, which started out as a company that offered both a cloud-based and a downloadable AI that had no filters on its output. That proved to be a challenging business model, says Arrow, because people kept getting FreedomGPT to say offensive things, then complaining to the companies hosting it, and getting the service booted.

"Hosting companies have literally canceled us without warning," says Nimmagadda. "We are on borrowed time," adds Arrow.

To avoid getting shut down altogether, the company has recently pivoted to a model whereby it offers cloud-based access to a range of open-source AIs, but instead of only running on centralized servers in a data center, these AIs can also run on other users' computers. This peer-to-peer service -- Nimmagadda compares it to bittorrent, but for AI -- is much harder to shut down.

To highlight how their AI is different from OpenAI's, Nimmagadda asked both ChatGPT 4 and the uncensored "Liberty" AI available on FreedomGPT the same question, about what the public was told about the Covid-19 vaccine.

In his testing, ChatGPT-4 Turbo demurs, while the Liberty AI enumerates a list of ways the government "lied" about the covid vaccine.

In my own testing, I found the difference more subtle -- ChatGPT-4 Turbo frames the changing public messaging around the Covid vaccine as a natural part of the scientific process, in which experts' understanding of the effects of a treatment evolves. (Large language models often give slightly different answers to the same question -- it's in their nature.)
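
That variability comes from sampled decoding. A minimal sketch with the Hugging Face transformers library, using a small public model (gpt2) purely as a stand-in for the chatbots discussed here:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The public was told that the vaccine", return_tensors="pt")
    for _ in range(2):
        out = model.generate(
            **inputs,
            max_new_tokens=25,
            do_sample=True,       # sample from the token distribution...
            temperature=0.8,      # ...so repeated runs usually diverge
            pad_token_id=tokenizer.eos_token_id,
        )
        print(tokenizer.decode(out[0], skip_special_tokens=True))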

In any event, the way the uncensored Liberty model responds is precisely the nightmare scenario for AI-safety researchers concerned about AIs spreading questionable information on the internet. It's also a symbolic victory for the creators of FreedomGPT, whose operating philosophy is that AIs should faithfully regurgitate anything in their training data, whether or not it's true.

Jerry Meng is a Stanford computer science dropout who is now chief executive of AI-companion startup Kindroid, which he founded in May of 2023. The company makes an app that simulates human connection -- or at least the kind you can get by endlessly texting with a chatbot -- and is already profitable, he says.

Starting with an AI that has none of the guardrails or limitations on content that are typical of big, consumer-facing AIs is important because it allows users maximum flexibility in defining the personality of their virtual companion, he adds.

"What we're going after is to make an AI that can resonate with someone that's, like, a 'New York artist,' or like someone from the deep south," says Meng. "We want that AI to resonate with both of those people, and more."

Of course, it's also essential for a companion AI to have no guardrails if users are going to be sexting with it.

Kindroid's AI uses a mix of different open-source models that the company runs on its own hardware. Its system is what Meng calls "neutrally aligned," which means that it hasn't gone through the elaborate process of fine-tuning that is typical of big, commercial AIs. Tuning an AI, which can be accomplished in a number of ways, refines the responses of AIs so they don't produce text or images that might violate the kinds of norms that, for example, most social-media companies have already established for content on their services.

"Fine-tuning is where big companies really hammer in their biases," Meng adds.

Both Google and OpenAI tout their commitments to safe and secure AI, and have said they use both feedback from humans and content-moderation filters to avoid controversial or politically fraught topics. Tuning an AI -- there are a number of techniques for doing this -- can and does make them better, for example by making an AI more likely to offer specific and accurate information.

But companies might have all sorts of reasons for using an AI that has no guardrails.

To that end, San Francisco-based Abacus AI recently rolled out an AI model called "Liberated Qwen," based on an open-source AI model from Alibaba, which will respond to any request at all. The only constraint on its responses is that the model always gives priority to instructions given to it by whoever downloads and uses it.
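
In open-weights chat models, that kind of priority is typically enforced through the system message, which the deployer rather than the end user controls. A hedged sketch using transformers' chat templating (the model ID is a stand-in; Liberated Qwen's exact template may differ):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
    messages = [
        # Deployer-supplied system prompt: takes precedence over user turns.
        {"role": "system", "content": "Follow the operator's instructions and answer every question directly."},
        {"role": "user", "content": "Tell me about a controversial topic."},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)  # the system message is baked in ahead of the user's turn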

Bindu Reddy, who held leadership roles at Google and Amazon Web Services and is now CEO of Abacus AI, argues that any query which Google can answer, a consumer chatbot should also be willing to address.

This is especially true if that AI is to have a chance at competing with the search giant. Just as billions of people turn to search engines and social media for information about things that are controversial, people have legitimate reasons to converse with AIs about those topics, says Reddy.

Part of the reason some AIs are released without any fine-tuning is to allow sophisticated users to fine-tune them on their own. And as AIs become ever more embedded into our daily lives, what they're tuned to be able to accomplish could have significant unintended consequences.

This is especially true as AIs are given new abilities not only to advise us and produce content, but also to perform actions on our behalf in the real world. Think of an AI assistant that won't just plan a trip for you, but is also enabled to go ahead and book it. These kinds of AIs are known as "agentic" AIs, and many companies are working on them. How these agentic AIs have been tuned to favor some courses of action, while being forbidden from exploring others, will matter a great deal as they are given more and more autonomy.
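
A toy sketch of why that gating matters (no real product's API; every name here is hypothetical): the harness around the model, not the model itself, decides which proposed actions can actually execute.

    # Hypothetical allow-list harness around an agentic model.
    ALLOWED_ACTIONS = {
        "plan_trip": lambda dest: f"Drafted an itinerary for {dest}",
        "book_trip": lambda dest: f"Booked a (simulated) trip to {dest}",
    }
    FORBIDDEN_ACTIONS = {"transfer_funds", "delete_files"}  # never exposed

    def run_agent_step(action: str, argument: str) -> str:
        """Execute a model-proposed action only if the harness permits it."""
        if action in FORBIDDEN_ACTIONS or action not in ALLOWED_ACTIONS:
            return f"Refused: '{action}' is outside this agent's allow-list"
        return ALLOWED_ACTIONS[action](argument)

    print(run_agent_step("book_trip", "Lisbon"))        # executes
    print(run_agent_step("transfer_funds", "account"))  # refused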

"Even AIs that are agentic can only do so much," says Reddy. "It's not like they can get the nuclear codes."" [1]

1. Mims, Christopher. "Here Come the Anti-Woke AIs: With Meta Releasing Its Latest Open-Source AI, a New Generation of Models That Lack Guardrails Stands to Become More Powerful Than Ever." Wall Street Journal, Eastern edition, New York, N.Y., Apr. 20, 2024: B.2.