A framework for evaluating the chemical knowledge and reasoning abilities of large language models against the expertise of chemists


Bibliographic Details
Published in: Nature Chemistry, Vol. 17, No. 7, pp. 1027–1034
Main Authors: Mirza, Adrian; Alampara, Nawaf; Kunchapu, Sreekanth; Ríos-García, Martiño; Emoekabu, Benedict; Krishnan, Aswanth; Gupta, Tanya; Schilling-Wilhelmi, Mara; Okereke, Macjonathan; Aneesh, Anagha; Asgari, Mehrdad; Eberhardt, Juliane; Elahi, Amir Mohammad; Elbeheiry, Hani M.; Gil, María Victoria; Glaubitz, Christina; Greiner, Maximilian; Holick, Caroline T.; Hoffmann, Tim; Ibrahim, Abdelrahman; Klepsch, Lea C.; Köster, Yannik; Kreth, Fabian Alexander; Meyer, Jakob; Miret, Santiago; Peschel, Jan Matthias; Ringleb, Michael; Roesner, Nicole C.; Schreiber, Johanna; Schubert, Ulrich S.; Stafast, Leanne M.; Wonanke, A. D. Dinga; Pieler, Michael; Schwaller, Philippe; Jablonka, Kevin Maik
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 01.07.2025
Summary: Large language models (LLMs) have gained widespread interest owing to their ability to process human language and perform tasks on which they have not been explicitly trained. However, we possess only a limited systematic understanding of the chemical capabilities of LLMs, which would be required to improve models and mitigate potential harm. Here we introduce ChemBench, an automated framework for evaluating the chemical knowledge and reasoning abilities of state-of-the-art LLMs against the expertise of chemists. We curated more than 2,700 question–answer pairs, evaluated leading open- and closed-source LLMs and found that the best models, on average, outperformed the best human chemists in our study. However, the models struggle with some basic tasks and provide overconfident predictions. These findings reveal LLMs’ impressive chemical capabilities while emphasizing the need for further research to improve their safety and usefulness. They also suggest adapting chemistry education and show the value of benchmarking frameworks for evaluating LLMs in specific domains.

Large language models are increasingly used for diverse tasks, yet we have limited insight into their understanding of chemistry. Now ChemBench—a benchmarking framework containing more than 2,700 question–answer pairs—has been developed to assess their chemical knowledge and reasoning, revealing that the best models surpass human chemists on average but struggle with some basic tasks.
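The ChemBench code itself is not reproduced in this record, but the summary above describes the general pattern: a curated set of question–answer pairs is presented to a model and the responses are scored automatically. The Python sketch below illustrates that pattern under stated assumptions only; the sample questions, field names and the toy_model stub are hypothetical and do not reflect the actual ChemBench implementation or API.

# Minimal sketch of an automated Q&A benchmark loop (illustrative only,
# not the ChemBench API). All question items and the model stub are assumptions.

from dataclasses import dataclass


@dataclass
class MCQuestion:
    """A multiple-choice chemistry question with a single correct option."""
    prompt: str
    options: dict[str, str]   # option label -> option text
    answer: str               # label of the correct option


# Tiny hypothetical question set (ChemBench curates more than 2,700 items).
QUESTIONS = [
    MCQuestion(
        prompt="Which element has the highest electronegativity?",
        options={"A": "Oxygen", "B": "Fluorine", "C": "Chlorine", "D": "Nitrogen"},
        answer="B",
    ),
    MCQuestion(
        prompt="What is the hybridization of carbon in methane?",
        options={"A": "sp", "B": "sp2", "C": "sp3", "D": "sp3d"},
        answer="C",
    ),
]


def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real harness would query a model API here."""
    return "B"  # always answers "B" -- purely for demonstration


def score(model, questions) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = 0
    for q in questions:
        formatted = q.prompt + "\n" + "\n".join(
            f"{label}. {text}" for label, text in q.options.items()
        )
        prediction = model(formatted).strip().upper()
        correct += prediction == q.answer
    return correct / len(questions)


if __name__ == "__main__":
    print(f"Accuracy: {score(toy_model, QUESTIONS):.2f}")

A full harness would replace toy_model with a call to an actual LLM and extend the scorer beyond exact label matching, for example to parse free-text or numeric answers.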
ISSN: 1755-4330, 1755-4349
DOI: 10.1038/s41557-025-01815-x