Oxford researchers use AI to combat climate misinformation

University of Oxford graduate student Alba Su debuts ClimateViz, a data processing system that trains AI models on verified scientific facts for accurate climate communication in an era of misinformation.

By Rachel Duckett and Julia Levy
Medill News Service, Feb. 13, 2026.

Artificial intelligence can be used to sow misinformation about climate science. To help combat this, researchers at the University of Oxford developed a large benchmarking dataset, built from scientific visualizations, to support fact-checking AI tools and evaluate their ability to distinguish true claims from false ones.

Alba Su, a graduate student in the final year of her doctorate at Oxford, studies how AI models can improve climate communication. Her project, called ClimateViz, involved creating a first-of-its-kind data processing system, or “pipeline,” built on scientific charts from trusted sources such as NASA and NOAA, which can be used to test an AI model’s ability to detect errors when fact-checking. Other scientific fact-checking benchmarks exist, but they rely on text and tables to verify claims, largely ignoring charts. Images and charts are traditionally difficult for large language models to process.

In June, Su published a paper on her team’s findings with ClimateViz at the Empirical Methods in Natural Language Processing conference.

ClimateViz contains 49,862 claims linked to 2,896 expert-curated scientific charts. Each claim is annotated with one of three labels: support, refute or not enough information. The “supported” claims were provided by thousands of volunteers on Zooniverse, a citizen science project, following accuracy guidelines from the researchers, and the claims were then verified by experts. The “refuted” claims were created by altering the true claims to exhibit common traits of climate science fallacies.
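That claim-chart-label structure is simple to picture in code. The snippet below is a minimal sketch in Python of how one such record might be represented; the field names, identifier and example claim are illustrative assumptions, not the actual ClimateViz schema.

```python
# Minimal sketch of a ClimateViz-style record. Field names and the example
# values are illustrative assumptions, not the paper's actual schema.
from dataclasses import dataclass
from typing import Literal

Label = Literal["support", "refute", "not enough information"]

@dataclass
class ClaimRecord:
    claim: str       # natural-language statement about a chart
    chart_id: str    # reference to one of the expert-curated charts
    label: Label     # verdict assigned by annotators and verified by experts

example = ClaimRecord(
    claim="2023 was the warmest year in the instrumental record.",
    chart_id="chart_0001",  # hypothetical identifier
    label="support",
)
```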

“We want to reduce bias as much as possible. So firstly, the annotation platform requires us to not record the demographic features of the annotators,” Su said. “And second, the pipelines we designed are trying to not inject our own opinions into the claims.”

Su evaluated the performance of both open-source and proprietary state-of-the-art LLMs in distinguishing the supported claims from those that are not supported. All the models struggled with some kinds of statistical reasoning. The LLMs performed best when they had access to a chart, its caption and a data table. Asking the proprietary models to explain their reasoning also boosted their performance.
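That evaluation setup can be pictured as a simple loop: build a prompt from whichever pieces of evidence are available (chart description, caption, data table), optionally ask the model to explain its reasoning, and compare its verdict to the gold label. The sketch below is a hypothetical illustration in Python; `ask_model` stands in for whatever LLM API the researchers actually used, and the record fields are assumptions.

```python
# Hypothetical accuracy check for the evaluation described above. `ask_model` is
# a placeholder for an LLM call; records are dicts with assumed field names.
LABELS = ("not enough information", "support", "refute")

def build_prompt(claim, chart_desc, caption=None, table=None, explain=False):
    parts = [f"Chart: {chart_desc}"]
    if caption:
        parts.append(f"Caption: {caption}")
    if table:
        parts.append(f"Data table: {table}")
    parts.append(f"Claim: {claim}")
    if explain:
        parts.append("Explain your reasoning step by step, then answer.")
    parts.append("Answer with one of: support, refute, not enough information.")
    return "\n".join(parts)

def accuracy(records, ask_model, use_caption=True, use_table=True, explain=False):
    correct = 0
    for rec in records:
        prompt = build_prompt(
            rec["claim"],
            rec["chart_description"],
            caption=rec.get("caption") if use_caption else None,
            table=rec.get("table") if use_table else None,
            explain=explain,
        )
        reply = ask_model(prompt).lower()
        # Heuristic: take the first known label that appears in the reply.
        verdict = next((lab for lab in LABELS if lab in reply), None)
        correct += int(verdict == rec["label"])
    return correct / len(records)
```

Running the check with different flag combinations mirrors the comparison reported above: the best scores came when caption and table were both included, with a further boost when the proprietary models were asked to explain their reasoning.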

The researchers also trained the LLMs to extract important relationships from verified datasets so they could give accurate answers about the data. Unlike models trained on ClimateViz, many widely used LLMs, such as ChatGPT, are trained on unverified data sources that are unavailable to the public.

The goals of ClimateViz are twofold: to distill scientific visualizations so anyone can understand them and to analyze how climate science is translated for the public. Su intends ClimateViz to be a tool for transparency and accuracy for scientists and non-scientists alike.

One of the scientific visualizations the ClimateViz creators used for training their large language model (LLM) to assess charts for accuracy. (Credit: ESA/ NASA/ C3S/ ECMWF).

Janet Pierrehumbert, professor of language modeling at Oxford and a co-author of Su’s paper, attended the annual Comer Climate Conference in Wisconsin in September, where she introduced Medill reporters to ClimateViz and, via Zoom, to Su. She said she intends to advocate the methodology behind ClimateViz to the broader machine learning community.

To Pierrehumbert, ClimateViz is an important step in using artificial intelligence to help combat AI “hallucinations” and the spread of misinformation. While the researchers found that current AI models lag behind humans in fact-checking performance, they say ClimateViz is a step in the right direction.

“You might hope that once you have explanations, you would be able to frame those in a way to get through to people,” Pierrehumbert said of how ClimateViz’s explanations could serve users.

The next steps, according to Su, could include expanding the dataset to better represent non-Western scientific data and fostering interdisciplinary research across language, technology, computer science and climate science. In the meantime, the team hopes to bring ClimateViz to users around the world.

“This is definitely a critical step towards combating climate misinformation. But I think fundamentally, transparency is key, because we need more open data sources. We need open benchmarks and careful evaluation,” Su said. “So it’s about building systems that can be audited and explained, not pure black boxes.”

Su said she and her colleagues hope that better science communication will bring better climate policy.

Richard Alley, a prominent climate change scientist, author and professor at Penn State University, echoed that sentiment in September at the Comer Conference.

“The value of what we do to society, to empower them to make wise decisions, dwarfs the cost of what we do,” Alley said. “The need of you has never been greater, demands on you have never been greater, because we want you to do science, but then we want you to teach people and we want you to communicate.”

With tools like ClimateViz, the onus would no longer fall solely on scientists to communicate about the changing climate; the public would have access to the latest and most accurate information.

Photo at top: University of Oxford graduate student Alba Su is a co-creator of ClimateViz. (Rachel Duckett/Medill)
