Monday, January 22, 2024

Is It Safe to Share Personal Information With a Chatbot? Users may find it tempting to reveal health and financial information in conversations with AI chatbots. There are plenty of reasons to be cautious


"Imagine you've pasted your notes from a meeting with your radiologist into an artificial-intelligence chatbot and asked it to summarize them. A stranger later prompts that same generative-AI chatbot to enlighten them about their cancer concerns, and some of your supposedly private conversation is spit out to that user as part of a response.

Concerns about such potential breaches of privacy are very much top of mind these days for many people. The big question here is: Is it safe to share personal information with these chatbots?

The short answer is that there is always a risk that information you share will be exposed in some way. But there are ways to limit that risk.

To understand the concerns, it helps to think about how those tools are "trained" -- how they are initially fed massive amounts of information from the internet and other sources and can continue to gather information from their interactions with users to potentially make them smarter and more accurate.

As a result, when you ask an AI chatbot a question, its response is based partly on information that includes material dating back to long before there were rules around internet data usage and privacy. And even more-recent source material is full of people's personal information that's scattered across the web. That leaves lots of opportunity for private information to have been hoovered up into the various generative-AI chatbots' training materials -- information that could unintentionally appear in someone else's conversation with a chatbot or be intentionally hacked or revealed by bad actors through crafty prompts or questions.

"We know that they were trained on a vast amount of information that can, and likely does, contain sensitive information," says Ramayya Krishnan, faculty director of the Block Center for Technology and Society and dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. One major problem, Krishnan says, is that nobody has done an independent audit to see what training data is used.

"A lot of the evidence comes from academics hacking the guardrails and showing that private information is in the training data," he says. "I certainly know of attacks that prove there is some sensitive data in the training models."

Moreover, he adds, once an AI tool is deployed, it generally continues to train on users' interactions with it, absorbing and storing whatever information they feed it.

On top of that, in some cases human employees are reading some conversations users have with chatbots. This is done in part to catch and prevent inappropriate behavior and to help with accuracy and quality control of the models, experts say, as well as for deciding which subset of conversations the companies want the AI to use for training.

Worries over privacy aren't theoretical. There have been reported instances when confidential information was unintentionally released to users. Last March, OpenAI revealed a vulnerability that allowed some users of ChatGPT to see the titles of other users' chats with the tool and may have also briefly exposed payment-related data of some users, including email addresses and the last four digits of credit-card numbers, as well as credit-card expiration dates. That was a result of a bug in some open-source software (meaning it's available free for anyone to view, modify and deploy) that was used in the tool's training.

Chatbots are also vulnerable to intentional attacks. For instance, some researchers recently found easy ways to get around guardrails and unearth personal information gathered by large language models, including emails.

The ChatGPT vulnerability "was quickly patched," Krishnan notes. "But the point is, these AI software systems are complex and built on top of other software components, some of which are open-source, and they include vulnerabilities that can be exploited." Similar vulnerabilities are inherent to large language models, says Irina Raicu, director of the internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University.

Privacy concerns are great enough that several companies have restricted or banned the use of AI chatbots by their employees at work. "If major companies are concerned about their privacy, if they are unsure about what's going on with their data, that tells us that we should be cautious when sharing anything personal," says Raicu.

There's not much to be done about what's already in the chatbot models, Raicu says, "but why would you risk having your private information getting out there by typing new data like that into the model?"

Chatbot creators have taken some steps to protect users' privacy. For instance, users can turn off ChatGPT's ability to store their chat history indefinitely via the very visible toggle on its home page. This isn't foolproof protection against hackers -- the site says it will still store the new chats of users who choose this option for 30 days -- but it clearly states that the chats won't be used to train the model.

Bard requires users to log into Bard.Google.com then follow a few steps to delete all chat activity as a default. Bing users can open the chatbot webpage, view their search history, then delete the individual chats they want removed, a Microsoft spokesman says. "However, at this time, users cannot disable chat history," he says.

But the best way for consumers to protect themselves, experts say, is to avoid sharing personal information with a generative AI tool and to look for certain red flags when conversing with any AI.

Some red flags include using a chatbot that has no privacy notice. "This is telling you that the governance necessary isn't as mature as it should be," says Dominique Shelton Leipzig, a privacy and cybersecurity partner at law firm Mayer Brown.

Another is when a chatbot asks for more personal information than is reasonably necessary. "Sometimes to get into an account, you need to share your account number or a password and answer some personal questions, and this is not unusual," Shelton Leipzig says. "Being asked to share your Social Security number is something different. Don't." She also says it's unwise to discuss anything personal with a chatbot that you've never heard of.

Santa Clara University's Raicu warns against inputting specific health conditions or financial information into a general-use chatbot, since most chatbot companies are clear in their terms of service that human employees may be reading some conversations. "Is it worth the risk of your information getting out there when the response the generative AI returns might be inaccurate anyway? Probably not," Raicu says.

Carnegie Mellon's Krishnan, citing the risk of hackers, cautions people to think twice before using a feature of Google's Bard that allows for all your emails to be read and processed by the tool so it understands your writing style and tone.

Ultimately, what you enter into a chatbot requires a risk-reward calculation. However, experts say, you should at least double check the terms of service and privacy policies of a chatbot to understand how your data will be used.

"Fortunately we're not in a doomsday chatbot environment right now," says Shelton Leipzig. "The reputable generative-AI firms are taking steps to protect users." Still, she says, always be mindful before sharing sensitive information.

---

Heidi Mitchell is a writer in Chicago and London. She can be reached at reports@wsj.com." [1]

1.  Artificial Intelligence (A Special Report) --- Is It Safe to Share Personal Information With a Chatbot? Users may find it tempting to reveal health and financial information in conversations with AI chatbots. There are plenty of reasons to be cautious. Mitchell, Heidi.  Wall Street Journal, Eastern edition; New York, N.Y.. 22 Jan 2024: R.9.

 


"Russian engineers on the verge of making a technological breakthrough." It's about how to attack with drones

"The front has frozen, but behind the scenes the industries of both countries are racing to gain an advantage over their opponent.

 

– Russian engineers are on the verge of making a technological breakthrough, warned Marija Berlinska from the Ukrainian Air Reconnaissance Support Center. The Russians are intensively developing drone production. According to Berlinska, "Russia already has an advantage over Ukraine in the number of drones produced, but it can also win in their quality."

 

– (Russian drones) can already have automatic target acquisition; their cameras are starting to operate more autonomously. Even the traditional electronic-warfare systems with which we have not yet sufficiently saturated the front will become ineffective. If they launch serial production of such machines, it will be a big challenge for us, – she warns.

 

It's about how to attack with drones. Currently, operators direct them, select targets and decide on the attack. If a technological leap is made, that is, if artificial intelligence is deployed, the operators will simply be made redundant and the drones will attack the enemy on their own.

 

The first such case, when drones attacked enemy troops without direct human involvement, was recorded in Libya in the early spring of 2020. A UN panel of experts stated that government forces' drones, which searched for targets and made attack decisions on their own, descended on Khalifa Haftar's troops as they retreated from the country's capital. These were Turkish Kargu-2 machines.

 

To help Ukraine stop Putin, Western countries will have to start emptying their own weapons warehouses. Factories can't cope.

 

Currently, the Russians are working intensively on such solutions, including attacks by an autonomous swarm of drones. Depending on its size, such a swarm could even break through the front.

 

Drones can replace artillery, and this is already happening in Ukraine. “We depend very heavily on (Western) partners when it comes to artillery. In the summer, for example, our gun squadron fired 500 shells, and now it fires about 20. Thank God, there is such a thing as a drone. Cheap, mobile, you don't have to carry a heavy gun, which is difficult to hide. It has much better accuracy," writes a Ukrainian lieutenant nicknamed Aleks from the front.

 

Russia has begun training drone operators on a mass scale

 

Until now, both sides have had problems with artillery ammunition, but with drones the Ukrainians had the advantage. Since December, the Russians have started to dominate. However, according to a group of Western experts, "the situation is changing, but we are not observing the advantage that could be expected from the large-scale, state-run production announced by Russia in 2023."

 

Indeed, Russian politicians announced a great leap in production. At the same time, training for drone operators began on a mass scale. Drone piloting was introduced during "defense training" lessons, and schools in almost two-thirds of Russian regions bought the appropriate equipment. But the introduction of "autonomous drones" (using "artificial intelligence") would greatly reduce the need for such operators, after all, the machines would attack on their own."

 

Developing artificial intelligence requires a lot of computing power, and therefore a lot of money and talented people. We in Lithuania have already divided up all the money among ourselves. Some will build stadiums for the children of German soldiers from the famous brigade; some will keep working with golden spoons. And our gifted ones will keep running around in the bushes, learning to spot Kasčiūnas's unexploded cluster mines in the moss in the dark. You know, this is Lithuania. If we are stupid, we are stupid through and through.

 


The entire Lithuanian elite consists of open or secret communists, which is why all the ideas discussed in Lithuania are communist

 

"Some of the country's most influential figures, the historian Alfredas Bumblauskas, the journalists Rimvydas Valatka and Rolandas Barysas, the lawyer Rolandas Valiūnas and the businessman Arvydas Avulis, once belonged to the Communist Party, reports the news portal 15min.lt. Some of them joined the party's activities as late as the Atgimimas (National Revival) period.

The publication states that archives were checked for the people considered the most influential in Lithuania according to the list compiled by the magazine "Reitingai" and the portal delfi.lt. 15min.lt found that A. Bumblauskas became a member of the Communist Party in 1984. Asked why he had never disclosed this, the historian said that nobody had been interested.

"Nobody asked me. I don't know whether this card needs to be waved around," A. Bumblauskas told 15min.lt, noting that he returned his party card in 1989.

Meanwhile, the political commentator R. Valatka, "Verslo žinios" editor-in-chief R. Barysas, "Ellex Valiunas" law firm managing partner R. Valiūnas and "Hanner Group" owner A. Avulis joined the communist ranks considerably later, just before the restoration of independence.

Of them, only R. Valatka, who became a party member in July 1989, has disclosed his former membership in the communist ranks in public sources. The publicist himself maintained that his ties with the Communist Party were severed in March 1990, and admitted that "it would be much nicer if there were no such entry in my biography."

Archive data show that "Verslo žinios" editor R. Barysas joined the Communist Party in October of the same year. However, he denied having been a party member and wondered whether the data in the file that was found could have been falsified. R. Barysas also claimed that he had never received a party card.

Meanwhile, the lawyer R. Valiūnas joined the party in December 1988, while still a student. He declined, however, to comment on these biographical circumstances.

The businessman A. Avulis also refused to comment on why he joined the Communist Party in August 1988.

"I really don't want to get into any politics. I am a businessman, and I have no desire whatsoever to get involved in any political intrigues here. I'm sorry, but I will not take part in this conversation any further," the businessman told 15min.lt.

ELTA recalls that last spring it emerged that President Gitanas Nausėda had applied to join the Communist Party in 1988 but concealed this fact in his biography. When this information came to light, it caused considerable discontent. The president admitted that joining the party had been a mistake of his youth.

Later, 15min.lt also published information about 20 current members of parliament who had likewise once been Communist Party members; some of them had not disclosed this fact in their biographies."

That is why the Lithuanian elite installs cameras on the roads as energetically as the Chinese communists. That is why the Lithuanian elite talks about nothing but raising taxes and strengthening the repressive state apparatus, just like the Chinese communists. The Lithuanian elite was, is and will remain communist.