Google’s new Scholar Labs search uses AI to find relevant studies

Google has announced that it is testing a new AI-powered search tool, Scholar Labs, designed to answer a wide range of research questions. But its performance highlights a larger question about finding “good” science studies: how much will scientists trust a tool that abandons the scientific establishment’s typical popularity-based measures of a study’s standing in favor of reading the relationships between terms to surface good research?

The new search tool uses AI to identify key topics and relationships in a user’s query and is currently available to a limited group of logged-in users. Scholar Labs’ demo video featured a question about brain-computer interfaces (BCIs). I have a PhD in BCI research, so I was curious to see how Scholar Labs handled it.

The first result was a review paper of BCI research published in 2024 in the journal Applied Sciences. Scholar Labs includes an explanation of why each result matches the query; here, it explained that the paper discusses research into a non-invasive signal called the electroencephalogram and surveys some of the leading algorithms in the field.

Scholar Labs uses AI to surface science papers that Google says best match a user’s research question.
Screenshot: Google Scholar Labs

But I noticed that Scholar Labs lacks filters for the common metrics used to separate “good” studies from “not so good” ones. One metric is how many times a study has been cited by other studies since its publication, which roughly translates to a paper’s popularity. It is also tied to time: a recently published study may have anywhere from zero to hundreds of citations within a few months, while a study from the ’90s may have thousands. Another metric is a science journal’s “impact factor.” Journals that publish widely cited studies have a higher impact factor and thus a reputation for being more rigorous or meaningful to the scientific community. Applied Sciences self-reports an impact factor of 2.5; Nature, for comparison, has an impact factor of 48.5.

The standard Google Scholar has an option to rank studies by “relevance” and lists the number of citations for each result. The goal of the new Scholar Labs is to find “the most useful papers for a user’s research question,” Google spokesperson Lisa Oguike told The Verge. Google says it does this by ranking papers the same way researchers do, “by weighing the full text of each document, where it was published, who wrote it, as well as how often and recently it has been cited in other scholarly literature.”

However, the new Scholar Labs will not sort or limit results based on a paper’s citation count or a journal’s impact factor, Oguike told The Verge.

Google Scholar Labs logo on white background.

Image: Google Scholar

“Impact factors and citation counts depend on the research area of papers and it may be difficult for most users to estimate appropriate values in the context of specific research questions,” Oguike wrote. “Limiting by impact factor or citation count can often miss key papers – in particular, papers in interdisciplinary/adjacent fields/journals or recently published articles.”

Matthew Schrag, an associate professor of neurology at Vanderbilt University Medical Center, told The Verge in an interview that metrics such as citation count and impact factor are “very rough assessments of the quality of a paper,” echoing Google’s position. Such metrics speak “more about the social context of the paper” than its quality, although “hopefully those two things are correlated,” he said.

Schrag, who researches Alzheimer’s disease, is one of several scientist-sleuths who have flagged questionable data in published science studies. The efforts of data sleuths like Schrag, along with close attention from the broader science community, have resulted in studies being pulled from well-known journals over manipulated images, corrections issued by Nobel laureates, and federal investigations into falsified data.

Still, it’s difficult not to use a study’s citation count or a journal’s reputation as a casual quality check, especially when entering a new field. James Smoliga, a professor of rehabilitation sciences at Tufts University and a frequent user of the original Google Scholar, admits he considers highly cited papers to be more trustworthy. “I’m guilty of it like everyone else,” he told The Verge. He does this despite having once dismissed the methods used in a study with thousands of citations: “And I know myself that’s not the case, but I still fall into that trap, because what else am I going to do?”

I repeated the Scholar Labs demo query about BCI research for stroke patients in PubMed, a major repository of biomedical and health research run by the US National Institutes of Health’s National Library of Medicine. Unlike Scholar Labs, PubMed relies extensively on filters and boolean operators, the ORs and ANDs. I limited my results to review articles of clinical research conducted on humans in the last five years. I excluded preprints, which are studies posted directly to paper repositories like arXiv or bioRxiv without going through review by other scientists. Two of the six results focused specifically on the electroencephalogram as the primary type of non-invasive BCI used to help stroke patients.
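A search with those restrictions can be approximated with PubMed’s field-tag syntax. The query below is an illustrative sketch of how such filters combine, not the exact search behind these results:

```
("brain-computer interface"[Title/Abstract] OR "BCI"[Title/Abstract])
  AND "stroke"[Title/Abstract]
  AND "review"[Publication Type]
  AND "humans"[MeSH Terms]
  NOT "preprint"[Publication Type]
  AND ("2020"[Date - Publication] : "3000"[Date - Publication])
```

In practice, the same restrictions (review articles, human subjects, last five years) can also be applied from the checkboxes in PubMed’s sidebar rather than typed inline.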


PubMed allows users to filter search results by factors such as time, article type, and peer-review status.
Screenshot: PubMed

Users can ask for “recent” papers or specify a period of time in their query, and Scholar Labs uses the “full-text of research papers” to find results that match, Oguike said.

Google is calling Scholar Labs a “new direction for us” and says it plans to incorporate user feedback going forward. There is a waitlist to gain access.

Schrag believes that AI-powered discovery tools like the new Scholar Labs also have a place in the scientific ecosystem. In theory, he said, such a tool could cast a wider net to surface papers that might otherwise slip through the cracks, or add context about a paper’s popularity on social media platforms. Studies need to be evaluated holistically, he said, which AI may be able to help with. “You have to understand what the standards are in terms of rigor in the field and whether a study meets that standard,” he said.

Ultimately, scientists are responsible for determining which science is influential, Schrag said. That requires reading and engaging with the scientific literature “to be the final arbiter and not let algorithms become the final arbiter of what we consider to be high quality.”
