Thursday, July 4, 2024

A New America Is Born: Why a New Conservative Brain Trust Is Resettling Across America


"The Claremont Institute has been located in Southern California since its founding in the late 1970s. From its perch in the foothills of the San Gabriel Mountains, it has become a leading intellectual center of the pro-Trump right.

Without fanfare, however, some of Claremont’s key figures have been leaving California to find ideologically friendlier climes. Ryan P. Williams, the think tank’s president, moved to a suburb in the Dallas-Fort Worth area in early April.

His friend and Claremont colleague Michael Anton — a California native who played a major role in 2016 in convincing conservative intellectuals to vote for Mr. Trump — moved to the Dallas area two years ago. The institute’s vice president for operations and administration has moved there, too. Others are following. Mr. Williams opened a small office in another Dallas-Fort Worth suburb in May, and said he expects to shrink Claremont’s California headquarters.

“A lot of us share a sense that Christendom is unraveling,” said Skyler Kressin, 38, who is friendly with the Claremont leaders and shares many of their concerns. He left Southern California to move to Coeur d’Alene, Idaho, in 2020. “We need to be engaged, we need to be building.”

As Mr. Trump barrels through his third presidential campaign, his supporters buoyed by last week’s debate, many of the young activists and thinkers who have risen under his influence see themselves as part of a project that goes far beyond electoral politics. Rather, it is a movement to reclaim the values of Western civilization as they see it. Their ambitions paint a picture of the country they want should Mr. Trump return to the White House — one driven by their version of Christian values, with larger families and fewer immigrants. They foresee an aesthetic landscape to match, with more classical architecture and a revived conservative art movement and men wearing traditional suits.

Their vision includes stronger local leadership and a withered national “administrative state,” prompting them to celebrate last week when the Supreme Court effectively ended the “Chevron deference,” which could lead to the weakening of thousands of federal rules on the environment, worker protection and beyond.

Fed up by what they see as an increasingly hostile and disordered secular culture, many are moving to what they view as more welcoming states and regions, battling for American society from conservative “fortresses.”

Some see themselves as participants in and advocates for a “great sort,” a societal reordering in which conservatives and liberals naturally divide into more homogenous communities and areas. (And some, including Mr. Kressin, are simultaneously chasing the cheaper costs of living and safer neighborhoods that fuel many ordinary moves.)

The year Mr. Kressin moved to Idaho, he and Mr. Williams were part of an informal conversation at Claremont about the need for new institutions in what some hope will be a rejuvenated American society. The idea was a “fraternal community,” as one leader put it, that prioritized in-person meetings. The result was the all-male Society for American Civic Renewal, an invitation-only social organization reserved for Christians. The group has about 10 lodges in various states of development so far, with membership ranging between seven and several dozen people.

The group’s goals, according to leaders, include identifying “local elites” across the country and cultivating “potential appointees and hires for an aligned future regime” — by which they mean a second Trump presidency, but also a future they describe in sweeping and sometimes apocalyptic terms. Some warn of a coming societal breakdown that will require armed, right-minded citizens to restore order.

The group’s ties to Claremont give it access to influence in a future Trump administration: Mr. Anton served on Mr. Trump’s National Security Council, and a Claremont board member, John Eastman, advised Mr. Trump’s 2020 election campaign. He faces criminal charges in Arizona and Georgia over schemes to keep Mr. Trump in power after he lost that race.

Their rhetoric can sound expansive to the point of opacity. “As the great men of the West bequeathed their deeds to us, so must we leave a legacy for our children,” the group’s website proclaims. “The works raised by our hands to this end will last long after we are buried.”

Their output, so far, looks more modest. Mr. Kressin’s home chapter has hosted an expert in menswear, who exhorted members to dress in a “classical American style,” and a screening and discussion of the 2003 naval adventure film “Master and Commander.” The men socialize outside of meetings and pass each other business.

The circle’s critics say they present a cleaned-up version of some of the darkest elements of the right, including a cultural homogeneity to the point of racism and an openness to using violence to achieve political ends.

“It’s this idea of organizing discontent at the local level and building a network that over the next decade or three decades or even half-century would just keep moving the Republican Party further and further rightward, and mobilizing voters in discontented parts of the country, a lot of them men,” said Damon Linker, a senior lecturer in political science at the University of Pennsylvania, who has written critically of the crowd. “It’s a highbrow version of the militia movement.”

In its first two years, leaders said, SACR received significant funding from Charles Haywood, a former business owner in Indiana. Mr. Haywood seems to delight in being an online provocateur. He has called the riot on Jan. 6, 2021, an “electoral justice protest” and praised the racist 1973 novel “The Camp of the Saints.”

Posting on the platform X last month, he wrote that foreign-born citizens should be deported for offenses including “working for Left causes.” Other leaders attribute the apocalyptic tone of the group’s founding documents to Mr. Haywood, who declined to comment.

Members of the society are young, mostly white-collar (and mostly white), and often wealthy. Some have left elite institutions to start their own firms and invest in conservative-leaning ventures.

Josh Abbotoy, the executive director of American Reformer, a Dallas-based journal that serves as an informal in-house publication for the movement, is moving to a small town outside Nashville this week with his wife and four children. Through his new professional network, he is raising funds to develop a corridor of conservative havens between Middle Tennessee and Western Kentucky, where he has also purchased hundreds of acres of property. He expects about 50 families to move to the Tennessee town — which he declined to identify — in the next year, including people who work from home for tech companies and other corporations.

Mr. Abbotoy is betting big on the revitalization of the rural South more broadly, as white-collar flexibility meets conservative disillusionment with liberal institutions and cities. He sees the Tennessee project as a “playbook” for future developments in which neighbors share conservative social values and enjoy, he suggested, a kind of ambient Christian culture.

“I personally would happily pay high H.O.A. fees to be in a neighborhood where I have to drive by an architecturally significant church every day, and I can hear church bells,” he said.

The Obergefell v. Hodges decision, which legalized same-sex marriage nationally, was a watershed moment for Mr. Abbotoy and other conservatives’ understanding of how quickly the ground could shift under their feet. It is a decision that signaled to them the onset of an era that the conservative Christian writer Aaron Renn — who has spoken at the fraternal society’s events — calls “negative world,” an influential concept that describes a culture in which “being known as a Christian is a social negative, particularly in the elite domains of society.”

Mr. Abbotoy was raised in an evangelical culture that encouraged conservative Christians to go out into “the world” and influence secular institutions, including corporations and universities. But that approach, which defined the last several generations of mainstream evangelicalism, feels increasingly untenable to people in his circle.

Mr. Abbotoy, who graduated from Harvard Law School, left a job with a major infrastructure company in 2021 and came to work for Nate Fischer, a Dallas venture capitalist and prolific networker whose firm invests in conservative projects and opposes “DEI/ESG and the bureaucratization of American business culture.” Mr. Fischer is the president of SACR’s Dallas chapter.

Andrew Beck, a brand consultant for conservative politicians and entities including SACR and Claremont, moved with his wife and their now six children, along with his parents and five of his siblings and their families, from Staten Island to suburbs north of Dallas in 2020. Almost 30 members of the family now live in the same area, just as they did in New York.

“Something is shifting that’s tectonic,” said Mr. Beck, who wrote a widely shared essay on “re-Christianizing America” for Claremont’s online magazine the American Mind. “It’s not so much about staking out some stronghold where you can live in a cocoon, it’s to be a part of a place you can truly consider to be home.”

Members must be male and belong to a “Trinitarian Christian” church, a broad category that includes Catholics and Protestants but not members of the Church of Jesus Christ of Latter-day Saints. Members must also describe themselves as “unhyphenated Americans,” a reference to Theodore Roosevelt’s speech urging the full assimilation of immigrants.

The group’s interdenominational membership reflects the fact that in the Trump era, conservative Christianity is increasingly becoming a cultural and political identity, with theological differences falling to the wayside and Christianity serving as a kind of generic expression of rebellion against modernity. A significant minority of members are Catholic, including Mr. Kressin. The group also includes Presbyterians, Baptists and charismatics.

In Mr. Kressin’s new hometown in Idaho, the streets are clean and people leave their doors unlocked. His family lives in a house they can afford to own, with a white picket fence and room for a trampoline in the yard. In the cozy living room, an upright piano stands in the corner, and hymnals and classic novels line shelves on the wall.

“Many in our generation are very, very much longing for rootedness,” he said. “And they were raised in an era where that was really not valued very much.”

On a weekday morning this spring, he took a brisk morning stroll out his front door and up Tubb Hill, with wildflowers sprinkled along the path and soaring views of the crystalline lake below. At his house afterward, Lauren Kressin, who was pregnant with the couple’s eighth child, served peach tea in tastefully mismatched china, quietly switching cups with him so he would have the “less feminine” one, she said with a smile.

Starting over in Idaho, Mr. Kressin said later, was part of a project so long term that he does not expect to see its conclusion. “The old landed aristocracy in England would plant oak trees that would only really mature in 400 years,” he said. “Who knows what the future holds, but if you don’t even start building a family culture, you’re doomed to fail.”" [1]

1. Graham, Ruth. “Why a New Conservative Brain Trust Is Resettling Across America.” New York Times (Online), New York Times Company, Jul 4, 2024.


A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

 

"A security breach at the maker of ChatGPT last year revealed internal discussions among researchers and other employees, but not the code behind OpenAI’s systems.

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023, according to the two people, who discussed sensitive information about the company on the condition of anonymity.

But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the F.B.I. or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal A.I. technology that — while now mostly a work and research tool — could eventually endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of artificial intelligence.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” an OpenAI spokeswoman, Liz Bourgeois, said. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work.”

Fears that a hack of an American technology company might have links to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.

However, under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from U.S. projects could significantly impede the progress of A.I. in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times in an interview. “It comes with some risks, and we need to figure those out.”

(The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)

OpenAI is not the only company building increasingly powerful systems using rapidly improving A.I. technology. Some of them — most notably Meta, the owner of Facebook and Instagram — are freely sharing their designs with the rest of the world as open source software. They believe that the dangers posed by today’s A.I. technologies are slim and that sharing code allows engineers and researchers across the industry to identify and fix problems.

Today’s A.I. systems can help spread disinformation online, including text, still images and, increasingly, videos. 

They are also beginning to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add guardrails to their A.I. applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.

But there is not much evidence that today’s A.I. technologies are a significant national security risk. 

Studies by OpenAI, Anthropic and others over the past year showed that A.I. was not significantly more dangerous than search engines. 

Daniela Amodei, an Anthropic co-founder and the company’s president, said its latest A.I. technology would not be a major risk if its designs were stolen or freely shared with others.

“If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative.”

Still, researchers and tech executives have long worried that A.I. could one day fuel the creation of new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to the OpenAI board of directors.

“We started investing in security years before ChatGPT,” Mr. Knight said. “We’re on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience.”

Federal officials and state lawmakers are also pushing toward government regulations that would bar companies from releasing certain A.I. technologies and fine them millions if their technologies caused harm. But experts say these dangers are still years or even decades away.

Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems. By some metrics, China eclipsed the United States as the biggest producer of A.I. talent, with the country generating almost half the world’s top A.I. researchers.

“It is not crazy to think that China will soon be ahead of the U.S.,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open source A.I. projects.

Some researchers and national security leaders argue that the mathematical algorithms at the heart of current A.I. systems, while not dangerous today, could become dangerous and are calling for tighter controls on A.I. labs.

“Even if the worst-case scenarios are relatively low probability, if they are high impact then it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Biden and former national security adviser for President Barack Obama, said during an event in Silicon Valley last month. “I do not think it is science fiction, as many like to claim.”" [1]

China could steal our AI secrets. Oh, more Chinese are already better at AI than we are. We are idiots, as usual.

1. Metz, Cade. “A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too.” New York Times (Online), New York Times Company, Jul 4, 2024.