Large language models are few-shot multivariate time series classifiers


Bibliographic Details
Published in: Data Mining and Knowledge Discovery, Vol. 39, No. 5, p. 66
Main Authors: Chen, Yakun; Li, Zihao; Yang, Chao; Wang, Xianzhi; Xu, Guandong
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.09.2025
ISSN: 1384-5810, 1573-756X
DOI: 10.1007/s10618-025-01145-z

Summary: Large Language Models (LLMs) are widely applied in time series analysis. Yet, their utility in few-shot classification, a scenario with limited training data, remains unexplored. We aim to leverage the pre-trained knowledge in LLMs to overcome the data scarcity problem within multivariate time series. To this end, we propose LLMFew, an LLM-enhanced framework, to investigate the feasibility and capacity of LLMs for few-shot multivariate time series classification (MTSC). We first introduce a Patch-wise Temporal Convolution Encoder (PTCEnc) to align time series data with the textual embedding input of LLMs. Then, we fine-tune the pre-trained LLM decoder with Low-Rank Adaptation (LoRA) to enable effective representation learning from time series data. Experimental results show our model consistently outperforms state-of-the-art baselines by a large margin, achieving 125.2% and 50.2% improvement in classification accuracy on the Handwriting and EthanolConcentration datasets, respectively. Our results also show LLM-based methods achieve comparable performance to traditional models across various datasets in few-shot MTSC, paving the way for applying LLMs in practical scenarios where labeled data are limited. Our code is available at https://github.com/junekchen/llm-fewshot-mtsc.
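The patch-wise encoding idea in the summary can be sketched in a few lines. This is a minimal, dependency-free illustration of the patching step that an encoder like PTCEnc would perform before temporal convolution and projection into the LLM's embedding space; the function names, patch length, and stride are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of patch-wise encoding for multivariate time series.
# Each channel is cut into fixed-length patches; in a PTCEnc-style encoder
# (per the paper's summary), each patch would then pass through a temporal
# convolution and be projected to the LLM decoder's embedding dimension.
# make_patches/patch_multivariate are hypothetical helper names.

def make_patches(series, patch_len=4, stride=4):
    """Split one channel (a list of floats) into fixed-length patches."""
    return [series[i:i + patch_len]
            for i in range(0, len(series) - patch_len + 1, stride)]

def patch_multivariate(channels, patch_len=4, stride=4):
    """Apply patching independently to every channel of the series."""
    return [make_patches(c, patch_len, stride) for c in channels]

# Example: a 2-channel series of length 8, non-overlapping patches.
x = [[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
     [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]]
patches = patch_multivariate(x, patch_len=4, stride=4)
# Each channel yields 2 patches of length 4; each patch then becomes one
# token-like embedding fed to the LoRA-fine-tuned LLM decoder.
```

Patching keeps local temporal structure inside each token-like unit while shrinking the sequence length the LLM decoder must attend over, which is one common motivation for patch-based time series encoders.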