Why scientists trust AI too much — and what to do about it


AI-run labs have arrived — such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty
Scientists of all stripes are embracing artificial intelligence (AI) — from developing ‘self-driving’ laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots1.
Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or ‘hallucinate’ — and the workings of machine-learning systems are opaque.
In a Perspective article2 published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools’ limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.
Scientists planning to use AI “must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline”, write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.
The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.
To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.
In one ‘vision’, which they call AI as Oracle, researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called AI as Arbiter, systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, AI as Quant, AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, AI as Surrogate, AI tools simulate data that are too difficult or complex to obtain.
Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the illusion of explanatory depth3, in which people relying on another person — or, in this case, an algorithm — for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.
Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test — the researchers call this the illusion of exploratory breadth. For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI — and discourage those on behaviours that cannot, such as anything that requires being embodied physically.
There’s also the illusion of objectivity, in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. “There’s a risk that we forget that there are certain questions we just can’t answer about human beings using AI tools,” says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.
If you’re a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don’t have, says Crockett.
Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for — and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.
All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone — including researchers of all kinds — must now listen.
Nature 627, 243 (2024)
doi: https://doi.org/10.1038/d41586-024-00639-y
1. Grossmann, I. et al. Science 380, 1108–1109 (2023).
2. Messeri, L. & Crockett, M. J. Nature 627, 49–58 (2024).
3. Rozenblit, L. & Keil, F. Cogn. Sci. 26, 521–562 (2002).
Nature (Nature) ISSN 1476-4687 (online) ISSN 0028-0836 (print)
© 2024 Springer Nature Limited

