
Friday, 1 March 2024

ChatGPT-like AIs are coming to major science search engines


"The Scopus, Dimensions and Web of Science databases are introducing conversational AI search.

 

The conversational AI-powered chatbots that have come to Internet search engines, such as Google’s Bard and Microsoft’s Bing, look increasingly set to change scientific search, too. On 1 August, Dutch publishing giant Elsevier released a ChatGPT-like artificial-intelligence (AI) interface for some users of its Scopus database, and British firm Digital Science announced a closed trial of an AI large language model (LLM) assistant for its Dimensions database. Meanwhile, US firm Clarivate says it’s working on bringing LLMs to its Web of Science database.

 

LLMs for scientific search aren’t new: tools including Elicit, scite and Consensus already have AI systems that help to summarize a field’s findings or identify top studies, relying on free scientific databases or (in scite’s case) access to paywalled research articles through partnerships with publishers. But firms that own large proprietary databases of scientific abstracts and references are now joining the AI rush.

 

Elsevier’s chatbot, called Scopus AI and launched as a pilot, is intended as a light, playful tool to help researchers quickly get summaries of research topics they’re unfamiliar with, says Maxim Khan, an Elsevier executive in London who oversaw the tool’s development. Users ask natural-language questions; in response, the bot uses a version of the LLM GPT-3.5 to return a fluent summary paragraph about a research topic, together with cited references and further questions to explore.

 

One concern about using LLMs for search — especially scientific search — is that they are unreliable. The models don’t understand the text they produce; they work simply by spitting out words that are stylistically plausible on the basis of the data they were trained on. Their output can contain factual errors and biases and, as academics have quickly found, can make up non-existent references.

 

Scopus AI is therefore constrained: it has been prompted to generate its answer only by reference to five or ten research abstracts. And it doesn’t find those abstracts itself — rather, after the user has typed in a query, a conventional search engine returns relevant papers, explains Khan.

Fake facts

 

Many AI search-engine systems adopt a similar strategy, notes Aaron Tay, a librarian at Singapore Management University who follows AI search tools. This is sometimes termed retrieval-augmented generation, because the LLM is limited to summarizing relevant information that another search engine retrieves. “The LLM can still occasionally hallucinate or make things up,” says Tay, pointing to research on Internet search AI chatbots, such as Bing and perplexity.ai, that use a similar technique but often return sentences not supported by citations.
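The retrieval-augmented generation pattern Tay describes — a conventional search engine retrieves candidate abstracts, and the LLM is prompted to summarize only those — can be sketched roughly as follows. This is a toy illustration, not any vendor's actual code: the retriever is a naive keyword-overlap ranker, the abstracts are invented, and the prompt would be passed to whatever model the product uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# A toy keyword-overlap retriever picks the top-k abstracts,
# and the prompt confines the model to that retrieved context.

def retrieve(query: str, abstracts: list[str], k: int = 5) -> list[str]:
    """Rank abstracts by how many query words they share (toy scoring)."""
    q = set(query.lower().split())
    ranked = sorted(abstracts,
                    key=lambda a: len(q & set(a.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Constrain the model to the retrieved abstracts only."""
    numbered = "\n".join(f"[{i + 1}] {a}" for i, a in enumerate(context))
    return (
        "Answer using ONLY the abstracts below, citing them as [n]. "
        "If they contain no relevant information, say so.\n\n"
        f"Abstracts:\n{numbered}\n\nQuestion: {query}"
    )

# Invented example abstracts.
abstracts = [
    "Perovskite solar cells reach 25% efficiency in lab tests.",
    "A survey of transformer architectures for protein folding.",
    "Thin-film perovskite stability under humid conditions.",
]
top = retrieve("perovskite solar cell efficiency", abstracts, k=2)
prompt = build_prompt("How efficient are perovskite solar cells?", top)
# `prompt` would then be sent to the LLM; the model never sees
# documents outside the retrieved set, which limits (but, as Tay
# notes, does not eliminate) hallucinated claims.
```

Because the model only ever sees the retrieved abstracts, every citation in its answer can in principle be checked against a real document — the property the database firms are relying on.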

 

Elsevier has limited its AI product to searching for articles published since 2018, so as to pick up only recent papers, and has instructed its chatbot to cite the returned abstracts appropriately in its reply, to avoid answering unsafe or malicious queries, and to state if there’s no relevant information in the abstracts it receives. This can’t eliminate mistakes, but it minimizes them. Elsevier has also cut down on unpredictability by picking a low setting for the bot’s ‘temperature’ — a measure of how often it chooses to deviate from the most plausible words in its response.
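The temperature setting works by rescaling the model's raw next-word scores before they are turned into probabilities: dividing by a small temperature sharpens the distribution, so the most plausible word is chosen almost every time. A quick, self-contained illustration of the standard softmax-with-temperature formula (the scores here are invented):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw scores to probabilities; low temperature sharpens the peak."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for three candidate next words.
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, temperature=0.2)   # near-greedy
high = softmax_with_temperature(logits, temperature=2.0)  # more exploratory

# At low temperature almost all probability mass sits on the top-scoring
# word, so the model rarely deviates from the most plausible continuation.
print(low[0], high[0])
```

At a temperature of 0.2 the top word here gets over 99% of the probability mass; at 2.0 it gets under half — which is why a low setting makes the bot's output more predictable.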

 

Might users simply copy and paste the bot’s paragraphs into their own papers, effectively plagiarizing the tool? That’s a possibility, says Khan. Elsevier has so far tackled this with guidance that asks researchers to use the summaries responsibly, he says. Khan points out that funders and other publishers have issued similar guidance, asking for transparent disclosure if LLMs are used to, for instance, write papers or conduct peer reviews. Some state that LLMs shouldn’t be used at all.

 

For the moment, the tool is being rolled out to only around 15,000 users, a subset of Scopus subscribers, with other researchers invited to contact Elsevier if they want to try it. The firm says it expects a full launch in early 2024.

Full-text analysis

 

Also on 1 August, Digital Science announced that it was introducing an AI assistant for its large Dimensions database of research publications, although the assistant is currently available only to selected beta testers. It operates in a similar way to Scopus AI: after a user types in their question, a search engine retrieves relevant articles. A GPT model then generates a summary paragraph based on the top-ranked abstracts.

 

“It’s remarkably similar, funnily enough,” says Christian Herzog, the chief product officer for the firm, which is based in London. (Digital Science is part of Holtzbrinck Publishing Group, the majority shareholder in Nature’s publisher, Springer Nature.)

 

Dimensions also uses the LLM to provide some more details about relevant papers, including short rephrased summaries of their findings.

 

Herzog says the firm hopes to release its tool more widely by the end of the year, but for the moment it is working with scientists, funders and others who use Dimensions to test where LLMs might be useful — which remains to be seen. “This is about gradually easing into a new technology and building trust,” he says.

 

Tay says he’s looking forward to tools that use LLMs on the full text of papers, not just abstracts. Tools such as Elicit already let people use LLMs to answer detailed questions about the full text of papers that the bots have access to, such as some open-access articles, he notes.

 

At Clarivate in Philadelphia, Pennsylvania, meanwhile, Bar Veinstein, president of the firm’s academia and government section, says the company is “working on adding LLM-powered search in Web of Science”, referring to a strategic partnership signed with AI21 Labs, based in Tel Aviv, Israel, that was announced in June. Veinstein did not give a timeline for the release of this tool." [1]

 

1. Nature 620, 258 (2023)
