Fine-tuning large language models for chemical text mining

Bibliographic Details
Published in: Chemical Science (Cambridge), Vol. 15, No. 27, pp. 16-1611
Main Authors: Zhang, Wei; Wang, Qinggong; Kong, Xiangtai; Xiong, Jiacheng; Ni, Shengkun; Cao, Duanhua; Niu, Buying; Chen, Mingan; Li, Yameng; Zhang, Runze; Wang, Yitian; Zhang, Lehan; Li, Xutong; Xiong, Zhaoping; Shi, Qian; Huang, Ziming; Fu, Zunyun; Zheng, Mingyue
Format: Journal Article
Language: English
Published: Cambridge: The Royal Society of Chemistry, 10.07.2024
Summary: Extracting knowledge from complex and diverse chemical texts is a pivotal task for both experimental and computational chemists, yet it remains extremely challenging because of the complexity of chemical language and the scientific literature. This study explored the power of fine-tuned large language models (LLMs) on five intricate chemical text mining tasks: compound entity recognition, reaction role labelling, metal-organic framework (MOF) synthesis information extraction, nuclear magnetic resonance spectroscopy (NMR) data extraction, and the conversion of reaction paragraphs to action sequences. The fine-tuned LLMs demonstrated impressive performance, significantly reducing the need for repetitive and extensive prompt engineering experiments. For comparison, we guided ChatGPT (GPT-3.5-turbo) and GPT-4 with prompt engineering, and fine-tuned GPT-3.5-turbo as well as open-source models such as Mistral, Llama3, Llama2, T5, and BART. The results showed that the fine-tuned ChatGPT models excelled in all tasks, achieving exact-accuracy levels ranging from 69% to 95% with minimal annotated data. They even outperformed models that had been task-adaptively pre-trained and fine-tuned on a significantly larger amount of in-domain data. Notably, fine-tuned Mistral and Llama3 also showed competitive performance. Given their versatility, robustness, and low-code capability, leveraging fine-tuned LLMs as flexible and effective toolkits for automated data acquisition could revolutionize chemical knowledge extraction. In short, extracting knowledge from complex chemical texts is essential for both experimental and computational chemists, and fine-tuned LLMs can serve as flexible and effective extractors for automated data acquisition.
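
The summary describes fine-tuning GPT-3.5-turbo on small sets of annotated examples for extraction tasks such as compound entity recognition. The sketch below is purely illustrative of that general workflow and is not the authors' published code; the file name, system prompt, and training example are hypothetical, and it assumes the OpenAI Python SDK (v1.x) with an OPENAI_API_KEY set in the environment.

# Illustrative sketch only: fine-tuning GPT-3.5-turbo for compound entity
# recognition, loosely following the workflow described in the summary.
# File names, prompts, and the single training example are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One chat-format training record: the assistant returns extracted entities
# as a JSON list. A real dataset would contain many such annotated examples.
example = {
    "messages": [
        {"role": "system",
         "content": "Extract all chemical compound names from the paragraph "
                    "and return them as a JSON list."},
        {"role": "user",
         "content": "The mixture of 4-nitrobenzaldehyde and acetophenone "
                    "was stirred in ethanol at room temperature."},
        {"role": "assistant",
         "content": json.dumps(["4-nitrobenzaldehyde", "acetophenone", "ethanol"])},
    ]
}

# Write the annotated examples to JSONL, the format the fine-tuning API expects.
with open("compound_ner_train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")

# Upload the training file and launch the fine-tuning job.
train_file = client.files.create(
    file=open("compound_ner_train.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=train_file.id, model="gpt-3.5-turbo"
)
print("Fine-tuning job started:", job.id)

Once such a job finishes, the resulting model identifier can be passed to an ordinary chat-completion call to extract entities from unseen paragraphs, which is what makes the approach low-code compared with building a task-specific extraction pipeline.
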
Bibliography: https://doi.org/10.1039/d4sc00924j
Electronic supplementary information (ESI) available. See DOI
These authors contributed equally to this work.
ISSN: 2041-6520; 2041-6539
DOI: 10.1039/d4sc00924j