
Wednesday, June 26, 2024

ChatGPT and Generative AI Tools for Learning and Research


"Many sophisticated machine learning (ML) products have recently been introduced as general-purpose content-creation tools. The one that has garnered the most attention is ChatGPT, a chatbot powered by the large language model (LLM) GPT-3.5.

 

An LLM is a type of ML model that performs various natural language processing tasks, such as recognizing, summarizing, translating, and generating text; answering questions; and carrying on a conversation. An LLM is developed with deep learning techniques, and training its artificial neural networks requires a massive amount of data. Deep learning is a type of ML, and ML is a subfield of AI. Since ChatGPT outputs new content in response to a user's inquiry, it is considered a tool in the realm of generative AI.

 

GENERATIVE AI FOR CONTENT CREATION

 

ChatGPT was launched by OpenAI on Nov. 30, 2022, and it quickly became a phenomenon. In 5 days, more than a million people signed up to try this product. Shortly thereafter, on Feb. 7, 2023, Google unveiled Bard, its own ChatGPT-like chatbot. Bard is built on top of Google's natural language processing model, called LaMDA, which stands for Language Model for Dialogue Applications. Around the same time, Microsoft debuted a new Bing chatbot as a competitor to ChatGPT and Bard.

 

In addition to generating a human-like conversation, ChatGPT and other similar AI-powered chatbots can write essays, computer code, recipes, grocery lists, and even poems. While some inaccuracies and incoherence were soon found in the responses of various chatbots (Bobby Allyn, "Microsoft's New AI Chatbot Has Been Saying Some 'Crazy and Unhinged Things,'" NPR, March 2, 2023; npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbot), there is general agreement that these AI-powered chatbots perform noticeably better than any past, non-AI chatbots.

 

While the ChatGPT, Bard, and Bing chatbots respond with generated text, DALL-E, Midjourney, and Imagen create images from a user's text input. Make-A-Video generates a video that fits a textual description, and MusicLM generates a piece of music. GitHub Copilot outputs computer code and is used as a pair-programming tool. These AI tools are introducing generative AI to the public at an increasing speed.

 

WHAT IS GENERATIVE AI?

 

Generative AI refers to deep-learning algorithms that generate novel content in a variety of forms, such as text, images, video, audio, and computer code. New content thus generated can be an answer to a reference question, a step-by-step solution to a problem posed, or a machine-generated artwork, to name just a few possibilities.

 

As with any deep-learning model, developing a generative AI model requires a large volume of training data, a large number of parameters, and a large amount of processing power. The largest model of GPT-3 was trained on 499 billion tokens of data that came from approximately 45 terabytes of compressed plain text, which is equivalent to about 1 million feet of bookshelf space, or a quarter of the entire collection of the Library of Congress (Tom B. Brown et al., "Language Models Are Few-Shot Learners," arXiv, July 22, 2020; doi.org/10.48550/arXiv.2005.14165). The largest GPT-3 model has 175 billion parameters and, even on the lowest-priced GPU cloud on the market, would require 355 years and $4.6 million to train if a single GPU were used (Chuan Li, "OpenAI's GPT-3 Language Model: A Technical Overview," The Lambda Deep Learning Blog, June 3, 2020; lambdalabs.com/blog/demystifying-gpt-3). These examples show that developing a generative AI model such as GPT-3 is resource-intensive and costly.
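To put those figures in perspective, the single-GPU estimate is simple arithmetic. The following back-of-the-envelope sketch (in Python) reproduces it; the 355 GPU-years comes from the Lambda post cited above, while the cloud price of roughly $1.50 per GPU-hour is an assumed round number used only for illustration.

    # Back-of-the-envelope check of the single-GPU training estimate for GPT-3.
    # The GPU-year figure comes from the Lambda blog post cited above; the hourly
    # cloud price is an assumed round number, not a quoted rate.
    GPU_YEARS = 355                # estimated compute to train the 175-billion-parameter model
    HOURS_PER_YEAR = 365 * 24      # 8,760 hours in a year
    PRICE_PER_GPU_HOUR = 1.50      # assumed low-end cloud GPU price, in U.S. dollars

    gpu_hours = GPU_YEARS * HOURS_PER_YEAR
    cost = gpu_hours * PRICE_PER_GPU_HOUR

    print(f"GPU-hours needed: {gpu_hours:,.0f}")   # about 3.1 million GPU-hours
    print(f"Estimated cost: ${cost:,.0f}")         # about $4.7 million, in line with the $4.6 million estimate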

 

AI FOR SCIENTIFIC RESEARCH

 

AI and ML led to the rise of very powerful general-purpose content-creation tools. Another field that has actively adopted ML is scientific research. A great example in this area is AlphaFold, an AI program developed by DeepMind. DeepMind is the company that built AlphaGo, which made headlines back in 2016 by winning a game of Go in its match with the 18-time world champion Lee Sedol. (Google has owned DeepMind since 2014.)

 

AlphaFold takes a protein's genetic sequence as an input and outputs the prediction of its 3D protein structure with impressive accuracy. In July 2021, DeepMind announced that it had used AlphaFold to predict the structure of nearly every protein made by humans, as well as the entire "proteomes" of 20 other widely studied organisms, such as mice and the bacterium E. coli (Ewen Callaway, "DeepMind's AI Predicts Structures for a Vast Trove of Proteins," Nature vol. 595, no. 7869, July 22, 2021; doi.org/10.1038/d41586-021-02025-4). A proteome refers to the complete set of proteins made by an organism, such as a species or a particular organ.

 

Working in partnership with the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI; ebi.ac.uk), DeepMind released more than 200 million protein structure predictions by AlphaFold and made them freely available to the scientific community. These included nearly all cataloged proteins known to science. What AlphaFold does is significant because proteins often fold into elaborate 3D structures and even form complexes with each other to perform certain functions in a cell. For this reason, being able to predict the 3D shape of proteomes is quite valuable in life science research, even more so in drug discovery.

 

As another example of applying AI and ML to scientific research, evolutionary biologists employed facial recognition, one of the most widely used ML techniques, to track chimpanzees in the wild, which is not easily achievable by other means (Daniel Schofield et al., "Chimpanzee Face Recognition From Videos in the Wild Using Deep Learning," Science Advances vol. 5, no. 9, Sept. 4, 2019; doi.org/10.1126/sciadv.aaw0736). Beyond life sciences and evolutionary biology, ML techniques are also being applied in many other disciplines, such as anthropology, astronomy, astrophysics, chemistry, engineering, and meteorology.

 

GENERATIVE AI FOR LIBRARY USERS

 

What do these developments in ML and generative AI mean for libraries and library users? For one, requests for nonexistent citations are finding their way to librarians at my library as students try to obtain articles cited by ChatGPT without realizing that they were simply made up. However, students are not the only ones falling prey to ChatGPT's so-called "hallucinations" (Karen Weise and Cade Metz, "When A.I. Chatbots Hallucinate," The New York Times, May 1, 2023; nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html). In the recent court case Roberto Mata v. Avianca Inc., a lawyer cited and quoted nonexistent cases, which he had gotten from ChatGPT, in his legal brief (Benjamin Weiser, "Here's What Happens When Your Lawyer Uses ChatGPT," The New York Times, May 27, 2023; nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html).

 

As a generative AI model, ChatGPT composes its responses based upon statistical probability derived from the data on which it was trained. In that sense, ChatGPT is basically an autocomplete program, albeit a highly sophisticated one. What this means is that ChatGPT cannot differentiate what looks real from what is real. Nevertheless, students already seem to be using ChatGPT for their academic assignments, and instructors, such as Siva Vaidhyanathan, are well aware of this trend ("My Students Are Using AI to Cheat. Here's Why It's a Teachable Moment," The Guardian, May 19, 2023; theguardian.com/technology/2023/may/18/ai-cheating-teaching-chatgpt-students-college-university).
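A toy example can make the "sophisticated autocomplete" point concrete. The Python snippet below is purely illustrative: the word probabilities are invented, and a real LLM computes such distributions from billions of learned parameters rather than from a lookup table. What it mimics is the key behavior: the next word is chosen for being statistically likely, not for being verified as true.

    import random

    # Toy "autocomplete": invented probabilities standing in for what an LLM learns.
    next_word_probs = {
        "The article you requested was": {
            "published": 0.5,    # a likely continuation
            "written": 0.3,      # another likely continuation
            "retracted": 0.2,    # plausible-sounding, whether or not it is true
        },
    }

    def continue_text(prompt: str) -> str:
        """Sample the next word from a probability distribution; nothing here
        checks whether the resulting statement is factually correct."""
        dist = next_word_probs[prompt]
        return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

    print(continue_text("The article you requested was"))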

 

In a recent poll about generative AI conducted by EDUCAUSE, which received 1,070 complete responses, 54% of respondents indicated that generative AI is impacting higher education, and they selected teaching and instructional support as the most impacted areas (Nicole Muscanell and Jenay Robert, "EDUCAUSE QuickPoll Results: Did ChatGPT Write This Report?" EDUCAUSE Review Online, Feb. 14, 2023; er.educause.edu/articles/2023/2/educause-quickpoll-results-did-chatgpt-write-this-report). Since the verification of sources and original, independent thinking are essential to academic and research integrity, ChatGPT and other similar generative AI tools should not be used uncritically for learning, teaching, and research.

 

Perhaps as an answer to spurious sources conjured up by generative AI chatbots, the U.K.'s CORE (Connecting Repositories) project (core.ac.uk) released CORE-GPT, which answers a user's question based upon information from CORE's corpus of 34 million open access (OA) scientific articles, along with their citations. Like CORE-GPT, scite (scite.ai) also provides an answer with explicit references to published scientific research papers.
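What these citation-grounded tools have in common is a retrieve-then-answer pattern: find passages in a known corpus, build the reply only from those passages, and return their citations alongside it. The short Python sketch below illustrates that general pattern under simplified, assumed interfaces; it is not CORE-GPT's or scite's actual API, and its keyword-overlap "retrieval" and string-joining "summary" are stand-ins for far more capable components.

    from dataclasses import dataclass

    @dataclass
    class Passage:
        text: str
        citation: str   # e.g., the DOI or bibliographic reference of the source article

    def answer_with_references(question, corpus):
        """Hypothetical retrieve-then-answer flow: the reply is built only from
        retrieved passages, so their citations can be returned with it."""
        terms = question.lower().split()
        relevant = [p for p in corpus if any(t in p.text.lower() for t in terms)]
        if not relevant:
            return "No supporting sources found in the corpus.", []
        answer = " ".join(p.text for p in relevant)       # stand-in for an LLM-written summary
        references = [p.citation for p in relevant]
        return answer, references

    # Example with a toy, made-up corpus:
    corpus = [
        Passage("Protein structures can be predicted from sequences.", "doi:10.0000/example-1"),
        Passage("Chatbots may generate plausible but unsupported citations.", "doi:10.0000/example-2"),
    ]
    print(answer_with_references("How are protein structures predicted?", corpus))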

 

Other examples of AI products that aim to facilitate learning and research include Consensus, Elicit, and Librari. Consensus (consensus.app) is an AI-powered search tool that takes in research questions and finds relevant answers by extracting and distilling findings from scientific research papers in the Semantic Scholar database. Elicit (elicit.org) is an AI research-assistance tool that aims to expedite the literature review process by providing a list of relevant articles and summaries of their abstracts specific to a user's query. Librari (librari.app) promises to answer factual questions, help with schoolwork, provide readers' advisory services, and perform creative tasks, with its answers curated using more than 300,000 human-engineered prompts.

 

While generative AI products have achieved some remarkable feats in their performance so far, they are hardly mature yet. As Elicit's FAQ page (elicit.org/faq) explicitly states, an LLM may miss the nuance of a paper or misunderstand what a number refers to. Nevertheless, the overwhelming amount of interest in these tools suggests that they will be rather quickly adopted and utilized for a wide variety of scholarly and research activities. We are likely to learn more about what these generative AI tools are well-suited for and what we humans are better at.

 

Bohyun Kim (bhkim@umich.edu) is the associate university librarian for library information technology at the University of Michigan Library." [1]

 

1. Kim, Bohyun. "ChatGPT and Generative AI Tools for Learning and Research." Computers in Libraries, vol. 43, no. 6 (Jul/Aug 2023): 41-42.

 


When Your Building Super Is an A.I. Bot


"Artificial intelligence is doing everything from helping landlords communicate with tenants to managing energy use.

The new maintenance coordinator at an apartment complex in Dallas has been getting kudos from tenants and colleagues for good work and late-night assistance. Previously, the eight people on the property’s staff, managing the buildings’ 814 apartments and town homes, were overworked and putting in more hours than they wanted.

Besides working overtime, the new staff member at the complex, the District at Cypress Waters, is available 24/7 to schedule repair requests and doesn’t take any time off.

That’s because the maintenance coordinator is an artificial intelligence bot that the property manager, Jason Busboom, began using last year. The bot, which sends text messages using the name Matt, takes requests and manages appointments.

The team also has Lisa, the leasing bot that answers questions from prospective tenants, and Hunter, the bot that reminds people to pay rent. 

Mr. Busboom chose the personalities he wanted for each A.I. assistant: Lisa is professional and informative; Matt is friendly and helpful; and Hunter is stern, needing to sound authoritative when reminding tenants to pay rent.

The technology has freed up valuable time for Mr. Busboom’s human staff, he said, and everyone is now much happier in his or her job. Before, “when someone took vacation, it was very stressful,” he added.

Chatbots — as well as other A.I. tools that can track the use of common areas and monitor energy use, aid construction management and perform other tasks — are becoming more commonplace in property management. The money and time saved by the new technologies could generate $110 billion or more in value for the real estate industry, according to a report released in 2023 by McKinsey Global Institute. But A.I.’s advances and its catapult into public consciousness have also stirred up questions about whether tenants should be informed when they’re interacting with an A.I. bot.

Ray Weng, a software programmer, learned he was dealing with A.I. leasing agents while searching for an apartment in New York last year, when agents in two buildings used the same name and gave the same answers for his questions.

“I’d rather deal with a person,” he said. “It’s a big commitment to sign a lease.”

Some of the apartment tours he took were self-guided, Mr. Weng said, “and if it’s all automated, it feels like they don’t care enough to have a real person talk to me.”

EliseAI, a software company based in New York whose virtual assistants are used by owners of nearly 2.5 million apartments across the United States, including some operated by the property management company Greystar, is focused on making its assistants as humanlike as possible, said Minna Song, the chief executive of EliseAI. Aside from being available through chat, text and email, the bots can interact with tenants via voice and can have different accents.

The virtual assistants that help with maintenance requests can ask follow-up questions like verifying which sink needs to be fixed in case a tenant isn’t available when the repair is being done, Ms. Song said, and some are beginning to help renters troubleshoot maintenance issues on their own. Tenants with a leaky toilet, for example, may receive a message with a video showing them where the water shut-off valve is and how to use it while they wait for a plumber.

The technology is so good at carrying on a conversation and asking follow-up questions that tenants often mistake the A.I. assistant for a human. “People come to the leasing office and ask for Elise by name,” Ms. Song said, adding that tenants have texted the chatbot to meet for coffee, told managers that Elise deserved a raise and even dropped off gift cards for the chatbot.

Not telling customers that they’ve been interacting with a bot is risky. Duri Long, an assistant professor of communication studies at Northwestern University, said it could make some people lose trust in the company using the technology.

Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University, said people could view the deception as disrespectful.

“All things considered, it is better to have your bot announce at the beginning that it is a computer assistant,” Dr. London said.

Ms. Song said it was up to each company to monitor evolving legal standards and be thoughtful about what it told consumers. A vast majority of states do not have laws that require the disclosure of the use of A.I. in communicating with a human, and the laws that do exist primarily relate to influencing voting and sales, so a bot used for maintenance-scheduling or rent-reminding wouldn’t have to be disclosed to customers. (The District at Cypress Waters does not tell tenants and prospective tenants that they’re interacting with an A.I. bot.)

Another risk involves the information that the A.I. is generating. Milena Petrova, an associate professor who teaches real estate and corporate finance at Syracuse University, said humans needed to be “involved to be able to critically analyze any results,” especially for any interaction outside the most simple and common ones.

Sandeep Dave, chief digital and technology officer of CBRE, a real estate services firm, said it didn’t help that the A.I. “comes across as very confident, so people will tend to believe it.”

Marshal Davis, who manages real estate and a real estate technology consulting company, monitors the A.I. system he created to help his two office workers answer the 30 to 50 calls they receive daily at a 160-apartment complex in Houston. The chatbot is good at answering straightforward questions, like those about rent payment procedures or details about available apartments, Mr. Davis said. But on more complicated issues, the system can “answer how it thinks it should and not necessarily how you want it to,” he said.

Mr. Davis records most calls, runs them through another A.I. tool to summarize them and then listens to the ones that seem problematic — like “when the A.I. says, ‘Customer voiced frustration,’” he said — to understand how to improve the system.

Some tenants aren’t completely sold. Jillian Pendergast interacted with bots last year while searching for an apartment in San Diego. “They’re fine for booking appointments,” she said, but dealing with A.I. assistants instead of humans can get frustrating when they start repeating responses.

“I can see the potential, but I feel like they are still in the trial-and-error phase,” Ms. Pendergast said." [1]

1. Weed, Julie. "When Your Building Super Is an A.I. Bot: Square Feet." The New York Times (Online), New York Times Company, June 26, 2024.