Highlighting Learning Pathways for Enhanced Course Recommendation Using Large Language Models

Bibliographic Details
Published in: International Conference on Systems and Informatics, pp. 1 - 8
Main Authors: Xiao, Zheng; Hu, Wenxin; Huang, Xinya
Format: Conference Proceeding
Language: English
Published: IEEE, 14.12.2024
ISSN: 2689-7148
DOI: 10.1109/ICSAI65059.2024.10893789

Summary: In course recommendation systems, students' enrollment behaviors are often influenced by their individualized learning pathways, such as enrolling in courses like C++ and Java for computer science, alongside English Speaking and English Writing for language learning. To capture these personalized learning preferences, existing sequential recommendation methods commonly employ sliding window techniques to extract subsequences, treating the task similarly to point-of-interest recommendations. However, these methods face limitations due to: (1) neglecting diverse individualized learning pathways, which are strong indicators of students' learning preferences; (2) misalignment between training and inference processes, as they treat all subsequences uniformly during inference despite training them separately. To address these challenges, we propose a novel model called LPR (Learning Pathways Recommender), which leverages large language models to extract coherent learning pathways from students' course enrollment histories. Specifically, our model comprises two key components: (1) a Learning Pathway-Aware Subsequence Extraction Module that utilizes large language models (LLMs) to extract logically structured and contextually relevant subsequences; (2) a Multi-Stream Prediction Module that aggregates representations across multiple subsequences, preserving the logical structure of the pathways during both training and inference. Extensive experiments conducted on the MoocCube dataset demonstrate that LPR outperforms state-of-the-art baselines, highlighting its effectiveness in capturing and leveraging the logical flow in students' learning pathways.
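The abstract's multi-stream idea, each extracted pathway subsequence contributing its own representation before candidate courses are scored, can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's implementation: course vectors are hand-made 2-D toys, subsequences are pooled by averaging, streams are aggregated by a second mean, and candidates are scored by dot product.

```python
# Hypothetical sketch of multi-stream prediction over pathway subsequences.
# Course vectors, mean pooling, and dot-product scoring are all illustrative
# assumptions standing in for the learned components described in the abstract.

def embed_subsequence(courses, course_vecs):
    """Mean-pool the toy vectors of the courses in one pathway subsequence."""
    dim = len(next(iter(course_vecs.values())))
    out = [0.0] * dim
    for c in courses:
        for i, v in enumerate(course_vecs[c]):
            out[i] += v / len(courses)
    return out

def multi_stream_score(pathways, course_vecs, candidate):
    """Aggregate one representation per pathway (a 'stream'), then score a
    candidate course against the aggregate via dot product."""
    streams = [embed_subsequence(p, course_vecs) for p in pathways]
    dim = len(streams[0])
    agg = [sum(s[i] for s in streams) / len(streams) for i in range(dim)]
    return sum(a * c for a, c in zip(agg, course_vecs[candidate]))

# Toy 2-D vectors: axis 0 ~ programming, axis 1 ~ language learning.
vecs = {
    "C++": [1.0, 0.0], "Java": [0.9, 0.1],
    "English Speaking": [0.0, 1.0], "English Writing": [0.1, 0.9],
    "Python": [1.0, 0.1],
}
# Two separate pathways, as in the abstract's C++/Java vs. English example.
pathways = [["C++", "Java"], ["English Speaking", "English Writing"]]
print(multi_stream_score(pathways, vecs, "Python"))  # → 0.55
```

Keeping the two pathways as distinct streams is the point: a single flat history would blur the programming and language signals into one average, whereas per-pathway pooling preserves each preference before aggregation, which is the structural property the abstract says LPR maintains at both training and inference time.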