Other AI-generated figures in the paper contain equally abundant textual and visual nonsense, including cell diagrams that look more like alien pizzas, with labels to match.
It’s unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper’s U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the clearly incorrect images. (The second reviewer is based in India.)
“As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it’s the publisher’s responsibility to make the decision,” Dai said. “You should contact Frontiers about their policy on AI-generated figures.”
Frontiers’ policies for authors state that generative AI is allowed, but that it must be disclosed (which the paper’s authors did) and the outputs must be checked for factual accuracy. “Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology,” Frontiers’ policy states. “This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript.”

On Thursday afternoon, after the article and its AI-generated figures circulated on social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected. Frontiers did not respond to a request for comment, nor did the paper’s authors or its editor, who is listed as Arumugam Kumaresan from the National Dairy Research Institute in India.

The incident is the latest example of how generative AI has seeped into academia, a trend that is worrying to scientists and observers alike. On her personal blog, science integrity consultant Elisabeth Bik wrote that “the paper is actually a sad example of how scientific journals, editors, and peer reviewers can be naive, or possibly even in the loop, in terms of accepting and publishing AI-generated crap.”

“These figures are clearly not scientifically correct, but if such botched illustrations can pass peer review so easily, more realistic-looking AI-generated figures have likely already infiltrated the scientific literature. Generative AI will do serious harm to the quality, trustworthiness, and value of scientific papers,” Bik added.

The academic world is slowly updating its standards to reflect the new AI reality. Nature, for example, banned the use of generative AI for images and figures in articles last year, citing risks to integrity. “As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen,” an editorial explaining the decision stated.