Tackling research reproducibility issues with AI

Researchers are increasingly concerned that a lack of reproducibility in research could lead to, among other things, inaccuracies that slow scientific progress and erode public confidence in science.

An AI-powered prediction market could provide scientists with an efficient and cost-effective tool to solve the reproducibility problem, reports a team of researchers. Image Credit: National Cancer Institute/Unsplash.

Now a team of scientists suggests that a prediction market, in which artificially intelligent (AI) agents offer predictions – or bets – on the outcomes of putative replication studies, could yield an explainable and scalable method for estimating confidence in published academic work.

The replication of studies and experiments, an essential step in the scientific process, builds confidence in results and indicates whether they generalize across contexts, according to Sarah Rajtmajer, Assistant Professor of Information Sciences and Technology at Penn State University (PSU).

As experiments become increasingly multifaceted, expensive, and laborious, researchers often lack the resources for robust replication efforts – a shortfall that has fueled what is often referred to as the “replication crisis.”

As scientists, we want to do work and we want to know that our work is good. Our approach to help solve the replication crisis is to use AI to help predict whether a discovery would replicate if repeated and why.

Sarah Rajtmajer, Assistant Professor of Information Sciences and Technology, Penn State University

Rajtmajer is also a research associate at the Rock Ethics Institute and an associate at the Institute for Computational and Data Sciences (ICDS).

Crowdsourced prediction markets work much like betting shops, except that participants wager on real-world events rather than the results of football matches or horse races. Such markets are already used to help predict a wide range of outcomes, from elections to the spread of infectious diseases.

What inspired us was the success of prediction markets in this exact task – that is, when you place researchers in a market and give them money to bet on the outcomes of replications, they’re pretty good at it. But human-run prediction markets are expensive and slow. And ideally, you should be running replications alongside the market so that there is ground truth that researchers are betting on. It just doesn’t scale.

Sarah Rajtmajer, Assistant Professor of Information Sciences and Technology, Penn State University

A bot-based method scales, and it provides some explainability of its results through the bots’ trading patterns and the characteristics of the papers and claims that drove the bots’ behavior.

In the team’s method, the bots learn to identify essential features of scientific research articles – such as statistics, authors and institutions, downstream mentions, linguistic cues, and parallel studies in the literature – and then to assess their confidence that the study’s findings would hold up in future replication attempts.
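As a rough sketch of this idea – not the team’s actual pipeline – a bot might map a handful of extracted paper features to a replication-confidence score. All feature names, thresholds, and weights below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class PaperFeatures:
    """Hypothetical features a bot might extract from a paper."""
    p_value: float            # reported p-value of the key claim
    sample_size: int          # number of subjects/observations
    downstream_mentions: int  # later papers citing or building on the claim
    hedging_terms: int        # linguistic cues such as "may" or "suggests"

def replication_confidence(f: PaperFeatures) -> float:
    """Toy confidence score in [0, 1]; weights are illustrative only."""
    score = 0.5
    score += 0.2 if f.p_value < 0.005 else -0.2 if f.p_value > 0.04 else 0.0
    score += 0.15 if f.sample_size >= 100 else -0.1
    score += min(f.downstream_mentions, 10) * 0.02  # supporting literature
    score -= min(f.hedging_terms, 5) * 0.03         # heavy hedging lowers trust
    return max(0.0, min(1.0, score))

paper = PaperFeatures(p_value=0.03, sample_size=40,
                      downstream_mentions=2, hedging_terms=4)
print(f"confidence: {replication_confidence(paper):.2f}")
```

In practice, the features and their weights would be learned from papers with known replication outcomes rather than set by hand.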

Similar to a human betting on the outcome of a sporting event, each bot places a bet that reflects its confidence level. The bots’ wagers are then aggregated in much the same way as the bets placed in human prediction markets.
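To make the aggregation step concrete, here is a minimal, hypothetical scheme in which each bot’s stake is weighted into a single market-style score; the actual market mechanism the team uses is not described in this article.

```python
# Minimal sketch of aggregating bot bets into a market-style score.
# This is an illustrative scheme, not the published market mechanism.

def market_score(bets: list[tuple[float, float]]) -> float:
    """Each bet is (confidence, stake); returns the stake-weighted
    average confidence, analogous to a market price in [0, 1]."""
    total_stake = sum(stake for _, stake in bets)
    if total_stake == 0:
        return 0.5  # no information: stay at the neutral prior
    return sum(conf * stake for conf, stake in bets) / total_stake

# Three bots wager stakes proportional to how far they sit from 0.5,
# so more confident bots move the market more.
bets = [(0.8, abs(0.8 - 0.5)), (0.3, abs(0.3 - 0.5)), (0.6, abs(0.6 - 0.5))]
print(f"market score: {market_score(bets):.2f}")
```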

C. Lee Giles, David Reese Professor in the College of Information Sciences and Technology at PSU, said that although prediction markets based on human contributors are popular and have been used effectively in a number of fields, applying them to research results is unique.

“That’s probably the interesting and unique thing we’re doing here,” said Giles, who is also associated with ICDS. “We have already seen that humans are quite good at using prediction markets. But here we are using bots for our marketplace, which is a bit unusual and quite fun.”

According to the scientists, who reported their findings at a recent Association for the Advancement of Artificial Intelligence (AAAI) conference, the system offered confidence scores for 192 papers. Around 68 of those papers – about 35% – had been reproduced in the meantime, providing ground-truth replication results. On this set of papers, the system’s accuracy was about 90%.

Since humans tend to be better at predicting the reproducibility of research while bots can operate at large scale, Giles and Rajtmajer propose that a hybrid method – humans and bots working together – may offer the best of both worlds: a system with greater accuracy that remains scalable.
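One simple way to picture such a hybrid – purely illustrative, and not a design the team describes – is to blend a human market score with a bot market score when human traders are available, and fall back to the bots alone otherwise.

```python
from typing import Optional

def hybrid_score(human_score: Optional[float], bot_score: float,
                 human_weight: float = 0.7) -> float:
    """Blend human and bot market scores; the 0.7 weight toward
    human traders is an arbitrary illustrative choice."""
    if human_score is None:
        return bot_score  # no human market ran: rely on the scalable bots
    return human_weight * human_score + (1 - human_weight) * bot_score

print(hybrid_score(0.75, 0.60))  # human traders available -> 0.705
print(hybrid_score(None, 0.60))  # bots only, at scale -> 0.6
```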

Perhaps we can train the bots in the presence of human traders from time to time, and then deploy them offline when we need a quick result or when we need replication assessments at scale. Moreover, we can create bot markets that also take advantage of this intangible human wisdom. It’s something we’re working on right now.

Sarah Rajtmajer, Assistant Professor of Information Science and Technology, Penn State University

The principal investigators for the project are: Christopher Griffin, Applied Research Laboratory; James Caverlee, professor of computer science at Texas A&M University; Jian Wu, assistant professor of computer science at Old Dominion University; Anthony Kwasnica, professor of business economics; Anna Squicciarini, Frymoyer Chair in Information Sciences and Technology; and David Pennock, director of DIMACS and professor of computer science at Rutgers University.

The study received funding from DARPA’s Systematizing Confidence in Open Research and Evidence (SCORE) program.

Source: https://www.psu.edu
