From spreading misinformation to automated science
In February 2024, a publication was retracted from the well-reputed journal Frontiers in Cell and Developmental Biology. While the paper aimed to identify the mechanisms of a signalling pathway in rat sperm stem cells, it included AI-generated figures that were alarmingly inaccurate. The most shocking of these showed a rat whose penis and testicles were drawn disproportionately larger than the rest of its body, alongside nonsensical diagrams of a rat cell. Although the authors, from Xi’an Honghui Hospital and Xi’an Jiaotong University in China, credited the AI tool used to generate the images, it is unclear how such deeply flawed figures slipped through the peer review process. After readers raised concerns, the journal retracted the paper, stating that “concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigour for Frontiers…”. This incident has reignited conversations within the scientific community about the role of AI in making scientific discoveries. As students increasingly use ChatGPT to write their homework, can scientists do the same? How will this affect research, and will it uphold science’s commitment to unveiling truth?
The retraction above is a case where a publisher caught misinformation before it spread. But how well can a typical person detect misinformation? In a study published in Science Advances, the authors evaluated whether participants with no specific training in recognizing disinformation could distinguish it from accurate information, and whether they could tell if a tweet was synthetically generated by GPT-3 or written by a real Twitter user. Interestingly, participants recognized accurate information more often in tweets generated by GPT-3 than in human-written tweets, suggesting that GPT-3 does a better job of conveying information clearly. The findings also show that humans can identify disinformation with a 90 percent accuracy rate and evaluate the accuracy of information with a 78 percent success rate. It’s worth noting that participants were mostly between 42 and 76 years old, and the majority held a bachelor’s degree. The paper highlights that AI has the capacity for great good and also great harm: the technology can produce accurate, more easily digestible material, but it can also produce more compelling disinformation. Large language models (LLMs) such as GPT-3 are statistical models that draw on vast amounts of data to generate text, so any disinformation they produce reflects the data they were trained on. The authors therefore suggest that information entering training datasets should be verified and listed as a reference.
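To make those percentages concrete, here is a toy Python sketch of how judgments like the study’s could be scored. The tweets, labels, and responses below are invented for illustration and are not the study’s data.

```python
# Hypothetical scoring sketch: each pair records whether a tweet truly
# contained disinformation and whether the participant flagged it as such.
# All values below are made up for illustration.

def accuracy(judgments):
    """Fraction of (true_label, participant_answer) pairs that agree."""
    return sum(truth == answer for truth, answer in judgments) / len(judgments)

disinfo_judgments = [
    (True, True),    # disinformation, correctly flagged
    (True, True),
    (False, False),  # accurate tweet, correctly accepted
    (False, True),   # accurate tweet wrongly flagged (a false alarm)
    (True, True),
]

print(f"Disinformation recognition: {accuracy(disinfo_judgments):.0%}")
# -> 80% for this toy sample; the study reports rates around 90 percent.
```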
While the spread of misinformation is one of the many ways AI is changing science, a broader and more positive application of the technology is the self-driving lab (SDL). In an SDL, an AI model proposes new material formulations and robotic arms synthesize and test them. Although this technology is currently limited to discovering new materials, it relieves researchers of having to grapple with trillions of possible formulations. This greatly improves labour productivity in science, saving time and money and freeing researchers to focus on creative work such as experimental design. In fact, in April 2023, UofT was awarded Canada’s largest-ever research grant: $200 million towards the Acceleration Consortium, a UofT-based network that aims to accelerate materials discovery through AI and robotics. With this funding, autonomous labs are being built at UofT, such as in the Leslie Dan Faculty of Pharmacy, where AI, automation, and advanced computing are used to iteratively test and refine material combinations for new drug formulations. Not only does this have the potential to save vast amounts of time and money, it could also speed up the development and production of potentially life-saving drugs.
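To give a sense of how an SDL’s closed loop works, here is a minimal Python sketch. It assumes the loop behaves like an active-learning cycle: a model proposes a candidate formulation, the robot “measures” it, and the result updates the model. The one-dimensional search space, the stand-in measure function, and the explore/exploit rule are all invented for illustration; real SDLs typically use richer surrogate models, such as Gaussian processes over many formulation parameters.

```python
import random

def measure(x):
    """Stand-in for robotic synthesis plus measurement (e.g., a solubility
    score for a drug formulation). The optimum at 0.62 is arbitrary."""
    return -(x - 0.62) ** 2 + random.gauss(0, 0.01)

best_x, best_score = None, float("-inf")

for _ in range(20):  # 20 closed-loop rounds
    if best_x is not None and random.random() < 0.7:
        # Exploit: perturb the best formulation found so far.
        candidate = min(max(best_x + random.gauss(0, 0.05), 0.0), 1.0)
    else:
        # Explore: try a random point in the (toy) formulation space.
        candidate = random.random()

    score = measure(candidate)       # the robot runs the experiment
    if score > best_score:           # the "model update": keep the best
        best_x, best_score = candidate, score

print(f"Best formulation parameter after 20 rounds: {best_x:.2f}")
```

In a real SDL, the measure step is hours of robotic synthesis and characterization, which is why letting a model choose which of trillions of candidates to try next saves so much time.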
Looking at the spread of misinformation and at automated science, just two of the ways AI is changing scientific discovery, it’s evident that this tool has the potential for much good and much harm. While it can dramatically increase the rate at which discoveries are made, it can also erode society’s trust in those discoveries. This means we are likely to see a large increase in research and companies that aim to distinguish misinformation from accurate information and to build LLMs trained on vetted, peer-reviewed material. How do you envision this technology affecting your studies?