RL-TweetGen: A Socio-Technical Framework for Engagement-Optimized Short Text Generation in Digital Commerce Using Large Language Models and Reinforcement Learning
Published in: Journal of Theoretical and Applied Electronic Commerce Research, Vol. 20, No. 3, p. 218
Main Authors:
Format: Journal Article
Language: English
Published: 26.08.2025
ISSN: 0718-1876
DOI: 10.3390/jtaer20030218
Summary: In the rapidly evolving landscape of digital marketing and electronic commerce, short-form content—particularly on platforms like Twitter (now X)—has become pivotal for real-time branding, community engagement, and product promotion. The rise of Non-Fungible Tokens (NFTs) and Web3 ecosystems further underscores the need for domain-specific, engagement-oriented social media content. However, automating the generation of such content while balancing linguistic quality, semantic relevance, and audience engagement remains a substantial challenge. To address this, we propose RL-TweetGen, a socio-technical framework that integrates instruction-tuned large language models (LLMs) with reinforcement learning (RL) to generate concise, impactful, and engagement-optimized tweets. The framework incorporates a structured pipeline comprising domain-specific data curation, semantic classification, and intent-aware prompt engineering, and leverages Parameter-Efficient Fine-Tuning (PEFT) with LoRA for scalable model adaptation. We fine-tuned and evaluated three LLMs—LLaMA-3.1-8B, Mistral-7B Instruct, and DeepSeek 7B Chat—guided by a hybrid reward function that blends XGBoost-predicted engagement scores with expert-in-the-loop feedback. To enhance lexical diversity and contextual alignment, we implemented advanced decoding strategies, including Tailored Beam Search, Enhanced Top-p Sampling, and Contextual Temperature Scaling. A case study focused on NFT-related tweet generation demonstrated the practical effectiveness of RL-TweetGen. Experimental results showed that Mistral-7B achieved the highest lexical fluency (BLEU: 0.2285), LLaMA-3.1 exhibited superior semantic precision (BERT-F1: 0.8155), while DeepSeek 7B provided balanced performance. Overall, RL-TweetGen presents a scalable and adaptive solution for marketers, content strategists, and Web3 platforms seeking to automate and optimize social media engagement. The framework advances the role of generative AI in digital commerce by aligning content generation with platform dynamics, user preferences, and marketing goals.
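The abstract names top-p (nucleus) sampling with temperature scaling among the decoding strategies. As a point of reference, the standard mechanics these strategies build on can be sketched as follows; this is an illustrative sketch of vanilla top-p sampling with a fixed temperature, not the paper's "Enhanced Top-p Sampling" or "Contextual Temperature Scaling" variants, whose exact formulations are not given here.

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=0.8, rng=None):
    """Sample a token index via temperature-scaled nucleus (top-p) sampling.

    Illustrative sketch only: shows the standard technique that the
    paper's enhanced decoding strategies presumably extend.
    """
    rng = rng or random.Random(0)
    # Temperature scaling: T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Keep the smallest set of tokens whose cumulative probability >= p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    # Renormalize over the nucleus and draw one token from it.
    total = sum(probs[i] for i in nucleus)
    r = rng.random() * total
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```

With a strongly peaked distribution (e.g. one logit far above the rest) the nucleus collapses to the single most probable token, so sampling becomes deterministic; raising `p` or `temperature` widens the candidate set and increases lexical diversity.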