Last month, Google announced its “AI co-scientist,” a tool the company says is designed to help scientists develop hypotheses and research plans. Google pitched it as a way to uncover new knowledge, but experts say it – and tools like it – fall well short of the hype.
“This preliminary tool, while interesting, doesn’t seem likely to be seriously used,” Sarah Beery, a computer vision researcher at MIT, told TechCrunch. “I’m not sure that there is demand for this type of hypothesis-generation system from the scientific community.”
Google is the latest tech giant to advance the idea that AI will someday dramatically accelerate scientific research, particularly in literature-dense fields such as biomedicine. In an essay earlier this year, OpenAI CEO Sam Altman said that “superintelligent” AI tools could “massively accelerate scientific discovery and innovation.” Similarly, Anthropic CEO Dario Amodei has boldly predicted that AI could help develop cures for most cancers.
But many researchers today don’t consider AI to be especially useful in guiding the scientific process. They say applications like Google’s AI co-scientist appear to be more hype than anything.
For example, in the blog post describing the AI co-scientist, Google said the tool had already demonstrated potential in areas like drug repurposing for acute myeloid leukemia, a type of blood cancer that affects the bone marrow. Yet the results are so vague that “no legitimate scientist would take [them] seriously,” said Favia Dubyk, a pathologist affiliated with Northwest Medical Center-Tucson in Arizona.
“This could be used as a good starting point for researchers, but […] the lack of detail is worrisome and doesn’t lead me to trust it,” Dubyk told TechCrunch. “The lack of information provided makes it really hard to understand whether this can truly be helpful.”
It isn’t the first time the scientific community has criticized Google for trumpeting a supposed AI breakthrough without providing the means to reproduce the results.
In 2020, Google claimed that one of its AI systems, trained to detect breast tumors, achieved better results than human radiologists. Researchers from Harvard and Stanford published a rebuttal in the journal Nature, saying the lack of detailed methods and code in Google’s research “undermine[d] its scientific value.”
Scientists have also criticized Google for glossing over the limitations of its AI tools aimed at scientific disciplines such as materials engineering. In 2023, the company said around 40 “new materials” had been synthesized with the help of one of its AI systems, called GNoME. Yet an outside analysis found that not a single one of the materials was, in fact, new.
“We won’t truly understand the strengths and limitations of tools like Google’s ‘co-scientist’ until they undergo rigorous, independent evaluation,” Ashique KhudaBukhsh, an assistant professor of software engineering at the Rochester Institute of Technology, told TechCrunch. “AI often performs well in controlled environments but may fail when applied at scale.”
A complex process
Part of the challenge in developing AI tools to aid scientific discovery is anticipating the untold number of confounding factors. AI may come in handy in areas that require broad exploration, such as narrowing down a vast list of possibilities. But it is less clear whether AI is capable of the kind of out-of-the-box thinking that leads to scientific breakthroughs.
“We have seen throughout history that some of the most important scientific advancements, such as the development of mRNA vaccines, were driven by human intuition and perseverance,” KhudaBukhsh said. “AI, as it stands today, may not be well suited to replicate that.”
Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google’s AI co-scientist focus on the wrong kind of scientific legwork.
Sinapayen sees genuine value in AI that automates technically difficult or tedious tasks, such as summarizing new academic literature or formatting work to meet a grant application’s requirements. But there isn’t much demand for an AI co-scientist that generates hypotheses, she said – a task from which many researchers derive intellectual fulfillment.
“For many scientists, myself included, generating hypotheses is the most fun part of the job,” Sinapayen told TechCrunch. “Why would I outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part we get joy from.”
Beery notes that the hardest step in the scientific process is often designing and implementing the studies and analyses needed to verify or disprove a hypothesis – work that isn’t necessarily within reach of current AI systems. AI can’t use physical tools to carry out experiments, of course, and it often performs worse on problems for which extremely limited data exists.
“Most science isn’t possible to do entirely virtually – there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab,” Beery said. “One big limitation of systems [like Google’s AI co-scientist], relative to the actual scientific process, is context about the lab and the researcher using the system – their specific research goals, their past work, their skill set, and the resources they have access to – and that definitely limits its usability.”
AI risks
AI’s technical shortcomings and risks – such as its tendency to hallucinate – also make scientists wary of endorsing it for serious work.
KhudaBukhsh fears that AI tools could simply end up generating noise in the scientific literature rather than elevating progress.
It’s already a problem. A recent study found that AI-fabricated “junk science” is flooding Google Scholar, Google’s free search engine for scholarly literature.
“AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer review process,” KhudaBukhsh said. “An overwhelmed peer review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions.”
Even well-designed studies could end up being tainted by misbehaving AI, Sinapayen said. While she likes the idea of a tool that could assist with literature review and synthesis, she doesn’t trust today’s AI to carry out that work reliably.
“Those are things that various existing tools claim to do, but those are not jobs that I would personally leave up to current AI,” Sinapayen said, adding that she also takes issue with how many AI systems are trained and with the amount of energy they consume. “Even if all the ethical issues […] were solved, current AI is just not reliable enough for me to base my work on its output one way or another.”